Dataset fields: title (string, 4 to 168 chars), content (string, 7 to 1.74M chars), commands (sequence of 1 to 5.62k strings, may be null), url (string, 79 to 342 chars)
Builders and image automation
Builders and image automation Red Hat Quay 3 Builders and image automation Red Hat OpenShift Documentation Team
[ "example β”œβ”€β”€ .git β”œβ”€β”€ Dockerfile β”œβ”€β”€ file └── subdir └── Dockerfile", "[alt_names] <quayregistry-name>-quay-builder-<namespace>.<domain-name>:443", "[alt_names] example-registry-quay-builder-quay-enterprise.apps.cluster-new.gcp.quaydev.org:443", "oc new-project bare-metal-builder", "oc create sa -n bare-metal-builder quay-builder", "oc policy add-role-to-user -n bare-metal-builder edit system:serviceaccount:bare-metal-builder:quay-builder", "create token quay-builder -n bare-metal-builder --duration 24h", "oc sa get-token -n bare-metal-builder quay-builder", "oc extract cm/kube-root-ca.crt -n openshift-apiserver", "mv ca.crt build_cluster.crt", "oc get sa openshift-apiserver-sa --namespace=openshift-apiserver -o json | jq '.secrets[] | select(.name | contains(\"openshift-apiserver-sa-token\"))'.name", "apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: quay-builder priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny volumes: - '*' allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - '*' allowedUnsafeSysctls: - '*' defaultAddCapabilities: null fsGroup: type: RunAsAny --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: quay-builder-scc namespace: bare-metal-builder rules: - apiGroups: - security.openshift.io resourceNames: - quay-builder resources: - securitycontextconstraints verbs: - use --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: quay-builder-scc namespace: bare-metal-builder subjects: - kind: ServiceAccount name: quay-builder roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: quay-builder-scc", "FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - <superusername> FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: USD{BUILDMAN_HOSTNAME}:443 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 10 ORCHESTRATOR_PREFIX: buildman/production/ ORCHESTRATOR: REDIS_HOST: <sample_redis_hostname> 2 REDIS_PASSWORD: \"\" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetes BUILDER_NAMESPACE: <sample_builder_namespace> 3 K8S_API_SERVER: <sample_k8s_api_server> 4 K8S_API_TLS_CA: <sample_crt_file> 5 VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G 6 CONTAINER_CPU_LIMITS: 300m 7 CONTAINER_MEMORY_REQUEST: 1G 8 CONTAINER_CPU_REQUEST: 300m 9 NODE_SELECTOR_LABEL_KEY: beta.kubernetes.io/instance-type NODE_SELECTOR_LABEL_VALUE: n1-standard-4 CONTAINER_RUNTIME: podman SERVICE_ACCOUNT_NAME: <sample_service_account_name> SERVICE_ACCOUNT_TOKEN: <sample_account_token> 10 QUAY_USERNAME: <quay_username> QUAY_PASSWORD: <quay_password> WORKER_IMAGE: <registry>/quay-quay-builder WORKER_TAG: <some_tag> BUILDER_VM_CONTAINER_IMAGE: quay.io/quay/quay-builder-qemu-fedoracoreos:latest SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 SSH_AUTHORIZED_KEYS: 11 - <ssh-rsa 12345 [email protected]> - <ssh-rsa 67890 [email protected]> HTTP_PROXY: <http://10.0.0.1:80> HTTPS_PROXY: <http://10.0.0.1:80> NO_PROXY: <hostname.example.com>", "kubectl get -n <namespace> route <quayregistry-name>-quay-builder -o jsonpath={.status.ingress[0].host}", "BUILDMAN_HOSTNAME: <build-manager-hostname> 1 
BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 600 ORCHESTRATOR: REDIS_HOST: <quay_redis_host REDIS_PASSWORD: <quay_redis_password> REDIS_SSL: true REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetes BUILDER_NAMESPACE: builder", "oc new-project virtual-builders", "oc create sa -n virtual-builders quay-builder", "serviceaccount/quay-builder created", "oc adm policy -n virtual-builders add-role-to-user edit system:serviceaccount:virtual-builders:quay-builder", "clusterrole.rbac.authorization.k8s.io/edit added: \"system:serviceaccount:virtual-builders:quay-builder\"", "oc adm policy -n virtual-builders add-scc-to-user anyuid -z quay-builder", "clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: \"quay-builder\"", "oc create token quay-builder -n virtual-builders", "eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ", "oc get route -n quay-enterprise", "NAME: example-registry-quay-builder HOST/PORT: example-registry-quay-builder-quay-enterprise.apps.stevsmit-cluster-new.gcp.quaydev.org PATH: SERVICES: example-registry-quay-app PORT: grpc TERMINATION: passthrough/Redirect WILDCARD: None", "oc extract cm/kube-root-ca.crt -n openshift-apiserver", "ca.crt", "mv ca.crt build-cluster.crt", "FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - <superusername> FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: <sample_build_route> 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 2 ORCHESTRATOR: REDIS_HOST: <sample_redis_hostname> 3 REDIS_PASSWORD: \"\" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: <sample_builder_namespace> 4 SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: quay.io/projectquay/quay-builder:{producty} # Kubernetes resource options K8S_API_SERVER: <sample_k8s_api_server> 5 K8S_API_TLS_CA: <sample_crt_file> 6 VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G 7 CONTAINER_CPU_LIMITS: 300m 8 CONTAINER_MEMORY_REQUEST: 1G 9 CONTAINER_CPU_REQUEST: 300m 10 NODE_SELECTOR_LABEL_KEY: \"\" NODE_SELECTOR_LABEL_VALUE: \"\" SERVICE_ACCOUNT_NAME: <sample_service_account_name> SERVICE_ACCOUNT_TOKEN: <sample_account_token> 11 HTTP_PROXY: <http://10.0.0.1:80> HTTPS_PROXY: <http://10.0.0.1:80> NO_PROXY: <hostname.example.com>", "FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org:443 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 ORCHESTRATOR: REDIS_HOST: example-registry-quay-redis REDIS_PASSWORD: \"\" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: virtual-builders SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: quay.io/projectquay/quay-builder:{producty} # Kubernetes resource options K8S_API_SERVER: api.docs.quayteam.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build-cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G 
CONTAINER_CPU_LIMITS: 300m CONTAINER_MEMORY_REQUEST: 1G CONTAINER_CPU_REQUEST: 300m NODE_SELECTOR_LABEL_KEY: \"\" NODE_SELECTOR_LABEL_VALUE: \"\" SERVICE_ACCOUNT_NAME: quay-builder SERVICE_ACCOUNT_TOKEN: \"eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ\" HTTP_PROXY: <http://10.0.0.1:80> HTTPS_PROXY: <http://10.0.0.1:80> NO_PROXY: <hostname.example.com>", "[ { \"AllowedHeaders\": [ \"Authorization\" ], \"AllowedMethods\": [ \"GET\" ], \"AllowedOrigins\": [ \"*\" ], \"ExposeHeaders\": [], \"MaxAgeSeconds\": 3000 }, { \"AllowedHeaders\": [ \"Content-Type\", \"x-amz-acl\", \"origin\" ], \"AllowedMethods\": [ \"PUT\" ], \"AllowedOrigins\": [ \"*\" ], \"ExposeHeaders\": [], \"MaxAgeSeconds\": 3000 } ]", "cat gcp_cors.json", "[ { \"origin\": [\"*\"], \"method\": [\"GET\"], \"responseHeader\": [\"Authorization\"], \"maxAgeSeconds\": 3600 }, { \"origin\": [\"*\"], \"method\": [\"PUT\"], \"responseHeader\": [ \"Content-Type\", \"x-goog-acl\", \"origin\"], \"maxAgeSeconds\": 3600 } ]", "gcloud storage buckets update gs://<bucket_name> --cors-file=./gcp_cors.json", "Updating Completed 1", "gcloud storage buckets describe gs://<bucket_name> --format=\"default(cors)\"", "cors: - maxAgeSeconds: 3600 method: - GET origin: - '*' responseHeader: - Authorization - maxAgeSeconds: 3600 method: - PUT origin: - '*' responseHeader: - Content-Type - x-goog-acl - origin", "oc get pods -n virtual-builders", "NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s", "oc get pods -n virtual-builders", "No resources found in virtual-builders namespace.", "{ \"commit\": \"1c002dd\", // required \"ref\": \"refs/heads/master\", // required \"default_branch\": \"master\", // required \"commit_info\": { // optional \"url\": \"gitsoftware.com/repository/commits/1234567\", // required \"message\": \"initial commit\", // required \"date\": \"timestamp\", // required \"author\": { // optional \"username\": \"user\", // required \"avatar_url\": \"gravatar.com/user.png\", // required \"url\": \"gitsoftware.com/users/user\" // required }, \"committer\": { // optional \"username\": \"user\", // required \"avatar_url\": \"gravatar.com/user.png\", // required \"url\": \"gitsoftware.com/users/user\" // required } } }", "oc get pods -n virtual-builders", "NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s", "oc get pods -n virtual-builders", "No resources found in virtual-builders namespace.", "EXECUTORS: - EXECUTOR: ec2 DEBUG: true - EXECUTOR: kubernetes DEBUG: true", "oc port-forward <builder_pod> 9999:2222", "ssh -i /path/to/ssh/key/set/in/ssh_authorized_keys -p 9999 core@localhost", "systemctl status quay-builder", "journalctl -f -u quay-builder" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html-single/builders_and_image_automation/index
4.2. Managing Cluster Nodes
4.2. Managing Cluster Nodes You can perform the following node-management functions through the luci server component of Conga : Make a node leave or join a cluster. Fence a node. Reboot a node. Delete a node. To perform one of the functions in the preceding list, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from Choose a cluster to administer displayed on the cluster tab. At the detailed menu for the cluster (below the clusters menu), click Nodes . Clicking Nodes displays the nodes in the center of the page, along with an Add a Node element and a Configure element with a list of the nodes already configured in the cluster. At the right of each node listed on the page displayed from the preceding step, click the Choose a task drop-down box. Clicking the Choose a task drop-down box reveals the following selections: Have node leave cluster / Have node join cluster , Fence this node , Reboot this node , and Delete . The actions of each function are summarized as follows: Have node leave cluster / Have node join cluster - Have node leave cluster is available when a node has joined a cluster. Have node join cluster is available when a node has left a cluster. Selecting Have node leave cluster shuts down cluster software and makes the node leave the cluster. Making a node leave a cluster prevents the node from automatically joining the cluster when it is rebooted. Selecting Have node join cluster starts cluster software and makes the node join the cluster. Making a node join a cluster allows the node to automatically join the cluster when it is rebooted. Fence this node - Selecting this action causes the node to be fenced according to how the node is configured to be fenced. Reboot this node - Selecting this action causes the node to be rebooted. Delete - Selecting this action causes the node to be deleted from the cluster configuration. It also stops all cluster services on the node, and deletes the cluster.conf file from /etc/cluster/ . Select one of the functions and click Go . Clicking Go causes a progress page to be displayed. When the action is complete, a page is displayed showing the list of nodes for the cluster.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-admin-manage-nodes-conga-ca
14.2. Modifying BitRot Detection Behavior
14.2. Modifying BitRot Detection Behavior Once the daemon is enabled, you can pause and resume the detection process, check its status, and modify how often or how quickly it runs. gluster volume bitrot VOLNAME scrub ondemand Starts the scrubbing process and the scrubber starts crawling the file system immediately. Ensure that the scrubber is in the 'Active (Idle)' state, where the scrubber is waiting for its frequency cycle to start scrubbing, for on-demand scrubbing to be successful. On-demand scrubbing does not work when the scrubber is in the 'Paused' state or already running. gluster volume bitrot VOLNAME scrub pause Pauses the scrubbing process on the specified volume. Note that this does not stop the BitRot daemon; it stops the process that cycles through the volume checking files. gluster volume bitrot VOLNAME scrub resume Resumes the scrubbing process on the specified volume. Note that this does not start the BitRot daemon; it restarts the process that cycles through the volume checking files. gluster volume bitrot VOLNAME scrub status This command prints a summary of scrub status on the specified volume, including various configuration details and the location of the bitrot and scrubber error logs for this volume. It also prints details of each node scanned for errors, along with identifiers for any corrupted objects located. gluster volume bitrot VOLNAME scrub-throttle rate Because the BitRot daemon scrubs the entire file system, scrubbing can have a severe performance impact. This command changes the rate at which files and objects are verified. Valid rates are lazy , normal , and aggressive . By default, the scrubber process is started in lazy mode. gluster volume bitrot VOLNAME scrub-frequency frequency This command changes how often the scrub operation runs when the BitRot daemon is enabled. Valid options are daily , weekly , biweekly , and monthly . By default, the scrubber process is set to run biweekly .
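As a hedged illustration of the commands described above (the volume name data_vol is hypothetical and not part of the original text), a typical scrubbing session might look like this:

gluster volume bitrot data_vol scrub-throttle normal      # raise the verification rate from the default lazy mode
gluster volume bitrot data_vol scrub-frequency weekly     # run scheduled scrubs weekly instead of the default biweekly
gluster volume bitrot data_vol scrub ondemand             # start crawling the file system immediately
gluster volume bitrot data_vol scrub status               # summary, error-log locations, and any corrupted objects

The throttle and frequency settings affect subsequent scrub cycles; the on-demand run only starts if the scrubber is in the 'Active (Idle)' state.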
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/ch14s02
Chapter 10. Scalability and performance optimization
Chapter 10. Scalability and performance optimization 10.1. Optimizing storage Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner. 10.1.1. Available persistent storage options Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment. Table 10.1. Available storage options Storage type Description Examples Block Presented to the operating system (OS) as a block device Suitable for applications that need full control of storage and operate at a low level on files bypassing the file system Also referred to as a Storage Area Network (SAN) Non-shareable, which means that only one client at a time can mount an endpoint of this type AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in the OpenShift Container Platform. File Presented to the OS as a file system export to be mounted Also referred to as Network Attached Storage (NAS) Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. RHEL NFS, NetApp NFS [1] , and Vendor NFS Object Accessible through a REST API endpoint Configurable for use in the OpenShift image registry Applications must build their drivers into the application and/or container. AWS S3 NetApp NFS supports dynamic PV provisioning when using the Trident plugin. 10.1.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 10.2. Recommended and configurable storage technology Storage type Block File Object 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, review the recommended storage solution in Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OpenShift Container Platform Logging. You must use one persistent volume type per log store. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. ROX 1 Yes 4 Yes 4 Yes RWX 2 No Yes Yes Registry Configurable Configurable Recommended Scaled registry Not configurable Configurable Recommended Metrics 3 Recommended Configurable 5 Not configurable Elasticsearch Logging Recommended Configurable 6 Not supported 6 Loki Logging Not configurable Not configurable Recommended Apps Recommended Recommended Not configurable 7 Note A scaled registry is an OpenShift image registry where two or more pod replicas are running. 10.1.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. 
Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 10.1.2.1.1. Registry In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift image registry cluster deployment with production workloads. 10.1.2.1.2. Scaled registry In a scaled/HA OpenShift image registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. The use of Network File System (NFS) storage with OpenShift Container Platform is supported. However, the use of NFS storage with a scaled registry can cause known issues. For more information, see the Red Hat Knowledgebase solution, Is NFS supported for OpenShift cluster internal components in Production? . 10.1.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 10.1.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: Loki Operator: The preferred storage technology is S3 compatible Object storage. Block storage is not configurable. OpenShift Elasticsearch Operator: The preferred storage technology is block storage. Object storage is not supported. Note As of logging version 5.4.3 the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 10.1.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 10.1.2.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . 
If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 10.1.3. Data storage management The following table summarizes the main directories that OpenShift Container Platform components write data to. Table 10.3. Main directories for storing OpenShift Container Platform data Directory Notes Sizing Expected growth /var/log Log files for all components. 10 to 30 GB. Log files can grow quickly; size can be managed by growing disks or by using log rotate. /var/lib/etcd Used for etcd storage when storing the database. Less than 20 GB. Database can grow up to 8 GB. Will grow slowly with the environment. Only storing metadata. Additional 20-25 GB for every additional 8 GB of memory. /var/lib/containers This is the mount point for the CRI-O runtime. Storage used for active container runtimes, including pods, and storage of local images. Not used for registry storage. 50 GB for a node with 16 GB memory. Note that this sizing should not be used to determine minimum cluster requirements. Additional 20-25 GB for every additional 8 GB of memory. Growth is limited by capacity for running containers. /var/lib/kubelet Ephemeral volume storage for pods. This includes anything external that is mounted into a container at runtime. Includes environment variables, kube secrets, and data volumes not backed by persistent volumes. Varies Minimal if pods requiring storage are using persistent volumes. If using ephemeral storage, this can grow quickly. 10.1.4. Optimizing storage performance for Microsoft Azure OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. For production Azure clusters and clusters with intensive workloads, the virtual machine operating system disk for control plane machines should be able to sustain a tested and recommended minimum throughput of 5000 IOPS / 200MBps. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure and Azure Stack Hub, disk performance is directly dependent on SSD disk sizes. To achieve the throughput supported by a Standard_D8s_v3 virtual machine, or other similar machine types, and the target of 5000 IOPS, at least a P30 disk is required. Host caching must be set to ReadOnly for low latency and high IOPS and throughput when reading data. Reading data from the cache, which is present either in the VM memory or in the local SSD disk, is much faster than reading from the disk, which is in the blob storage. 10.1.5. Additional resources Configuring the Elasticsearch log store 10.2. Optimizing routing The OpenShift Container Platform HAProxy router can be scaled or configured to optimize performance. 10.2.1. Baseline Ingress Controller (router) performance The OpenShift Container Platform Ingress Controller, or router, is the ingress point for ingress traffic for applications and services that are configured using routes and ingresses. 
When evaluating a single HAProxy router performance in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular: HTTP keep-alive/close mode Route type TLS session resumption client support Number of concurrent connections per target route Number of target routes Back end server page size Underlying infrastructure (network/SDN solution, CPU, and so on) While performance in your specific environment will vary, Red Hat lab tests on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1kB static pages is able to handle the following number of transactions per second. In HTTP keep-alive mode scenarios: Encryption LoadBalancerService HostNetwork none 21515 29622 edge 16743 22913 passthrough 36786 53295 re-encrypt 21583 25198 In HTTP close (no keep-alive) scenarios: Encryption LoadBalancerService HostNetwork none 5719 8273 edge 2729 4069 passthrough 4121 5344 re-encrypt 2320 2941 The default Ingress Controller configuration was used with the spec.tuningOptions.threadCount field set to 4 . Two different endpoint publishing strategies were tested: Load Balancer Service and Host Network. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating a 1 Gbit NIC at page sizes as small as 8 kB. When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router: Number of applications Application type 5-10 static file/web server or caching proxy 100-1000 applications generating dynamic content In general, HAProxy can support routes for up to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content. Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier. For more information on Ingress sharding, see Configuring Ingress Controller sharding by using route labels and Configuring Ingress Controller sharding by using namespace labels . You can modify the Ingress Controller deployment by using the information provided in Setting Ingress Controller thread count for threads and Ingress Controller configuration parameters for timeouts, and other tuning configurations in the Ingress Controller specification. 10.2.2. Configuring Ingress Controller liveness, readiness, and startup probes Cluster administrators can configure the timeout values for the kubelet's liveness, readiness, and startup probes for router deployments that are managed by the OpenShift Container Platform Ingress Controller (router). The liveness and readiness probes of the router use the default timeout value of 1 second, which is too brief when networking or runtime performance is severely degraded. Probe timeouts can cause unwanted router restarts that interrupt application connections. The ability to set larger timeout values can reduce the risk of unnecessary and unwanted restarts. You can update the timeoutSeconds value on the livenessProbe , readinessProbe , and startupProbe parameters of the router container. 
Parameter Description livenessProbe The livenessProbe reports to the kubelet whether a pod is dead and needs to be restarted. readinessProbe The readinessProbe reports whether a pod is healthy or unhealthy. When the readiness probe reports an unhealthy pod, then the kubelet marks the pod as not ready to accept traffic. Subsequently, the endpoints for that pod are marked as not ready, and this status propagates to the kube-proxy. On cloud platforms with a configured load balancer, the kube-proxy communicates to the cloud load-balancer not to send traffic to the node with that pod. startupProbe The startupProbe gives the router pod up to 2 minutes to initialize before the kubelet begins sending the router liveness and readiness probes. This initialization time can prevent routers with many routes or endpoints from prematurely restarting. Important The timeout configuration option is an advanced tuning technique that can be used to work around issues. However, these issues should eventually be diagnosed and possibly a support case or Jira issue opened for any issues that causes probes to time out. The following example demonstrates how you can directly patch the default router deployment to set a 5-second timeout for the liveness and readiness probes: USD oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{"spec":{"template":{"spec":{"containers":[{"name":"router","livenessProbe":{"timeoutSeconds":5},"readinessProbe":{"timeoutSeconds":5}}]}}}}' Verification USD oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness: Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:1936/healthz/ready delay=0s timeout=5s period=10s #success=1 #failure=3 10.2.3. Configuring HAProxy reload interval When you update a route or an endpoint associated with a route, the OpenShift Container Platform router updates the configuration for HAProxy. Then, HAProxy reloads the updated configuration for those changes to take effect. When HAProxy reloads, it generates a new process that handles new connections using the updated configuration. HAProxy keeps the old process running to handle existing connections until those connections are all closed. When old processes have long-lived connections, these processes can accumulate and consume resources. The default minimum HAProxy reload interval is five seconds. You can configure an Ingress Controller using its spec.tuningOptions.reloadInterval field to set a longer minimum reload interval. Warning Setting a large value for the minimum HAProxy reload interval can cause latency in observing updates to routes and their endpoints. To lessen the risk, avoid setting a value larger than the tolerable latency for updates. The maximum value for HAProxy reload interval is 120 seconds. Procedure Change the minimum HAProxy reload interval of the default Ingress Controller to 15 seconds by running the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"tuningOptions":{"reloadInterval":"15s"}}}' 10.3. Optimizing networking The OpenShift SDN uses OpenvSwitch, virtual extensible LAN (VXLAN) tunnels, OpenFlow rules, and iptables. This network can be tuned by using jumbo frames, multi-queue, and ethtool settings. OVN-Kubernetes uses Generic Network Virtualization Encapsulation (Geneve) instead of VXLAN as the tunnel protocol. 
This network can be tuned by using network interface controller (NIC) offloads. VXLAN provides benefits over VLANs, such as an increase in networks from 4096 to over 16 million, and layer 2 connectivity across physical networks. This allows for all pods behind a service to communicate with each other, even if they are running on different systems. VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both these outer- and inner-packets are subject to normal checksumming rules to guarantee data is not corrupted during transit. Depending on CPU performance, this additional processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks. Cloud, VM, and bare metal CPU performance can be capable of handling much more than one Gbps network throughput. When using higher bandwidth links such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in VXLAN-based environments and is not specific to containers or OpenShift Container Platform. Any network that relies on VXLAN tunnels will perform similarly because of the VXLAN implementation. If you are looking to push beyond one Gbps, you can: Evaluate network plugins that implement different routing techniques, such as border gateway protocol (BGP). Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure. VXLAN-offload does not reduce latency. However, CPU utilization is reduced even in latency tests. 10.3.1. Optimizing the MTU for your network There are two important maximum transmission units (MTUs): the network interface controller (NIC) MTU and the cluster network MTU. The NIC MTU is configured at the time of OpenShift Container Platform installation, and you can also change the cluster's MTU as a Day 2 operation. See "Changing cluster network MTU" for more information. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value. The OpenShift SDN network plugin overlay MTU must be less than the NIC MTU by 50 bytes at a minimum. This accounts for the SDN overlay header. So, on a normal ethernet network, this should be set to 1450 . On a jumbo frame ethernet network, this should be set to 8950 . These values should be set automatically by the Cluster Network Operator based on the NIC's configured MTU. Therefore, cluster administrators do not typically update these values. Amazon Web Services (AWS) and bare-metal environments support jumbo frame ethernet networks. This setting will help throughput, especially with transmission control protocol (TCP). Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. 
For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum. Note This 50 byte overlay header is relevant to the OpenShift SDN network plugin. Other SDN solutions might require the value to be more or less. Additional resources Changing cluster network MTU 10.3.2. Recommended practices for installing large scale clusters When installing large clusters or scaling the cluster to larger node counts, set the cluster network cidr accordingly in your install-config.yaml file before you install the cluster: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 The default cluster network cidr 10.128.0.0/14 cannot be used if the cluster size is more than 500 nodes. It must be set to 10.128.0.0/12 or 10.128.0.0/10 to get to larger node counts beyond 500 nodes. 10.3.3. Impact of IPsec Because encrypting and decrypting node hosts uses CPU power, performance is affected both in throughput and CPU usage on the nodes when encryption is enabled, regardless of the IP security system being used. IPSec encrypts traffic at the IP payload level, before it hits the NIC, protecting fields that would otherwise be used for NIC offloading. This means that some NIC acceleration features might not be usable when IPSec is enabled and will lead to decreased throughput and increased CPU usage. 10.3.4. Additional resources Modifying advanced network configuration parameters Configuration parameters for the OVN-Kubernetes network plugin Configuration parameters for the OpenShift SDN network plugin Improving cluster stability in high latency environments using worker latency profiles 10.4. Optimizing CPU usage with mount namespace encapsulation You can optimize CPU usage in OpenShift Container Platform clusters by using mount namespace encapsulation to provide a private namespace for kubelet and CRI-O processes. This reduces the cluster CPU resources used by systemd with no difference in functionality. Important Mount namespace encapsulation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 10.4.1. Encapsulating mount namespaces Mount namespaces are used to isolate mount points so that processes in different namespaces cannot view each others' files. Encapsulation is the process of moving Kubernetes mount namespaces to an alternative location where they will not be constantly scanned by the host operating system. The host operating system uses systemd to constantly scan all mount namespaces: both the standard Linux mounts and the numerous mounts that Kubernetes uses to operate. The current implementation of kubelet and CRI-O both use the top-level namespace for all container runtime and kubelet mount points. However, encapsulating these container-specific mount points in a private namespace reduces systemd overhead with no difference in functionality. Using a separate mount namespace for both CRI-O and kubelet can encapsulate container-specific mounts from any systemd or other host operating system interaction. 
This ability to potentially achieve major CPU optimization is now available to all OpenShift Container Platform administrators. Encapsulation can also improve security by storing Kubernetes-specific mount points in a location safe from inspection by unprivileged users. The following diagrams illustrate a Kubernetes installation before and after encapsulation. Both scenarios show example containers which have mount propagation settings of bidirectional, host-to-container, and none. Here we see systemd, host operating system processes, kubelet, and the container runtime sharing a single mount namespace. systemd, host operating system processes, kubelet, and the container runtime each have access to and visibility of all mount points. Container 1, configured with bidirectional mount propagation, can access systemd and host mounts, kubelet and CRI-O mounts. A mount originating in Container 1, such as /run/a is visible to systemd, host operating system processes, kubelet, container runtime, and other containers with host-to-container or bidirectional mount propagation configured (as in Container 2). Container 2, configured with host-to-container mount propagation, can access systemd and host mounts, kubelet and CRI-O mounts. A mount originating in Container 2, such as /run/b , is not visible to any other context. Container 3, configured with no mount propagation, has no visibility of external mount points. A mount originating in Container 3, such as /run/c , is not visible to any other context. The following diagram illustrates the system state after encapsulation. The main systemd process is no longer devoted to unnecessary scanning of Kubernetes-specific mount points. It only monitors systemd-specific and host mount points. The host operating system processes can access only the systemd and host mount points. Using a separate mount namespace for both CRI-O and kubelet completely separates all container-specific mounts away from any systemd or other host operating system interaction whatsoever. The behavior of Container 1 is unchanged, except a mount it creates such as /run/a is no longer visible to systemd or host operating system processes. It is still visible to kubelet, CRI-O, and other containers with host-to-container or bidirectional mount propagation configured (like Container 2). The behavior of Container 2 and Container 3 is unchanged. 10.4.2. Configuring mount namespace encapsulation You can configure mount namespace encapsulation so that a cluster runs with less resource overhead. Note Mount namespace encapsulation is a Technology Preview feature and it is disabled by default. To use it, you must enable the feature manually. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. 
Procedure Create a file called mount_namespace_config.yaml with the following YAML: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service Apply the mount namespace MachineConfig CR by running the following command: USD oc apply -f mount_namespace_config.yaml Example output machineconfig.machineconfiguration.openshift.io/99-kubens-master created machineconfig.machineconfiguration.openshift.io/99-kubens-worker created The MachineConfig CR can take up to 30 minutes to finish being applied in the cluster. You can check the status of the MachineConfig CR by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-03d4bc4befb0f4ed3566a2c8f7636751 False True False 3 0 0 0 45m worker rendered-worker-10577f6ab0117ed1825f8af2ac687ddf False True False 3 1 1 Wait for the MachineConfig CR to be applied successfully across all control plane and worker nodes after running the following command: USD oc wait --for=condition=Updated mcp --all --timeout=30m Example output machineconfigpool.machineconfiguration.openshift.io/master condition met machineconfigpool.machineconfiguration.openshift.io/worker condition met Verification To verify encapsulation for a cluster host, run the following commands: Open a debug shell to the cluster host: USD oc debug node/<node_name> Open a chroot session: sh-4.4# chroot /host Check the systemd mount namespace: sh-4.4# readlink /proc/1/ns/mnt Example output mnt:[4026531953] Check kubelet mount namespace: sh-4.4# readlink /proc/USD(pgrep kubelet)/ns/mnt Example output mnt:[4026531840] Check the CRI-O mount namespace: sh-4.4# readlink /proc/USD(pgrep crio)/ns/mnt Example output mnt:[4026531840] These commands return the mount namespaces associated with systemd, kubelet, and the container runtime. In OpenShift Container Platform, the container runtime is CRI-O. Encapsulation is in effect if systemd is in a different mount namespace to kubelet and CRI-O as in the above example. Encapsulation is not in effect if all three processes are in the same mount namespace. 10.4.3. Inspecting encapsulated namespaces You can inspect Kubernetes-specific mount points in the cluster host operating system for debugging or auditing purposes by using the kubensenter script that is available in Red Hat Enterprise Linux CoreOS (RHCOS). SSH shell sessions to the cluster host are in the default namespace. To inspect Kubernetes-specific mount points in an SSH shell prompt, you need to run the kubensenter script as root. The kubensenter script is aware of the state of the mount encapsulation, and is safe to run even if encapsulation is not enabled. Note oc debug remote shell sessions start inside the Kubernetes namespace by default. You do not need to run kubensenter to inspect mount points when you use oc debug . If the encapsulation feature is not enabled, the kubensenter findmnt and findmnt commands return the same output, regardless of whether they are run in an oc debug session or in an SSH shell prompt. 
Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have configured SSH access to the cluster host. Procedure Open a remote SSH shell to the cluster host. For example: USD ssh core@<node_name> Run commands using the provided kubensenter script as the root user. To run a single command inside the Kubernetes namespace, provide the command and any arguments to the kubensenter script. For example, to run the findmnt command inside the Kubernetes namespace, run the following command: [core@control-plane-1 ~]USD sudo kubensenter findmnt Example output kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt TARGET SOURCE FSTYPE OPTIONS / /dev/sda4[/ostree/deploy/rhcos/deploy/32074f0e8e5ec453e56f5a8a7bc9347eaa4172349ceab9c22b709d9d71a3f4b0.0] | xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota shm tmpfs ... To start a new interactive shell inside the Kubernetes namespace, run the kubensenter script without any arguments: [core@control-plane-1 ~]USD sudo kubensenter Example output kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt 10.4.4. Running additional services in the encapsulated namespace Any monitoring tool that relies on the ability to run in the host operating system and have visibility of mount points created by kubelet, CRI-O, or containers themselves, must enter the container mount namespace to see these mount points. The kubensenter script that is provided with OpenShift Container Platform executes another command inside the Kubernetes mount point and can be used to adapt any existing tools. The kubensenter script is aware of the state of the mount encapsulation feature status, and is safe to run even if encapsulation is not enabled. In that case the script executes the provided command in the default mount namespace. For example, if a systemd service needs to run inside the new Kubernetes mount namespace, edit the service file and use the ExecStart= command line with kubensenter . [Unit] Description=Example service [Service] ExecStart=/usr/bin/kubensenter /path/to/original/command arg1 arg2 10.4.5. Additional resources What are namespaces Manage containers in namespaces by using nsenter MachineConfig
[ "oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"router\",\"livenessProbe\":{\"timeoutSeconds\":5},\"readinessProbe\":{\"timeoutSeconds\":5}}]}}}}'", "oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness: Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:1936/healthz/ready delay=0s timeout=5s period=10s #success=1 #failure=3", "oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"tuningOptions\":{\"reloadInterval\":\"15s\"}}}'", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service", "oc apply -f mount_namespace_config.yaml", "machineconfig.machineconfiguration.openshift.io/99-kubens-master created machineconfig.machineconfiguration.openshift.io/99-kubens-worker created", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-03d4bc4befb0f4ed3566a2c8f7636751 False True False 3 0 0 0 45m worker rendered-worker-10577f6ab0117ed1825f8af2ac687ddf False True False 3 1 1", "oc wait --for=condition=Updated mcp --all --timeout=30m", "machineconfigpool.machineconfiguration.openshift.io/master condition met machineconfigpool.machineconfiguration.openshift.io/worker condition met", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# readlink /proc/1/ns/mnt", "mnt:[4026531953]", "sh-4.4# readlink /proc/USD(pgrep kubelet)/ns/mnt", "mnt:[4026531840]", "sh-4.4# readlink /proc/USD(pgrep crio)/ns/mnt", "mnt:[4026531840]", "ssh core@<node_name>", "[core@control-plane-1 ~]USD sudo kubensenter findmnt", "kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt TARGET SOURCE FSTYPE OPTIONS / /dev/sda4[/ostree/deploy/rhcos/deploy/32074f0e8e5ec453e56f5a8a7bc9347eaa4172349ceab9c22b709d9d71a3f4b0.0] | xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota shm tmpfs", "[core@control-plane-1 ~]USD sudo kubensenter", "kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt", "[Unit] Description=Example service [Service] ExecStart=/usr/bin/kubensenter /path/to/original/command arg1 arg2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/scalability_and_performance/scalability-and-performance-optimization
Chapter 8. Investigating the RHOSO High Availability services
Chapter 8. Investigating the RHOSO High Availability services You can use the following command to list all the Red Hat OpenStack Services on OpenShift (RHOSO) High Availability Galera, RabbitMQ, and memcached services; it shows their name, their type, whether they are a ClusterIP or LoadBalancer service, and their ports: You can use the following command to investigate the configuration of a service in more detail: Replace <service-name> with the name of the service from the list of services that you want more information about. In the following example the rabbitmq service is being investigated: 8.1. Testing the resilience of the RHOSO High Availability services You can simulate a failure to test how resilient the Red Hat OpenStack Services on OpenShift (RHOSO) High Availability services are to container failures. For example, you can use the following command to delete the rabbitmq-server-1 pod: After you delete the pod, you can use the following command to monitor the rescheduling process of the rabbitmq-server-1 pod: After a few seconds, the rabbitmq-server-1 pod should have the status of Running :
[ "oc get svc |egrep -e \"rabbit|galera|memcache\" memcached ClusterIP None <none> 11211/TCP openstack-cell1-galera ClusterIP None <none> 3306/TCP openstack-galera ClusterIP None <none> 3306/TCP rabbitmq LoadBalancer 172.30.21.129 172.17.0.85 5672:31952/TCP,15672:30111/TCP,15692:30081/TCP rabbitmq-cell1 LoadBalancer 172.30.97.190 172.17.0.86 5672:30043/TCP,15672:30645/TCP,15692:32654/TCP rabbitmq-cell1-nodes ClusterIP None <none> 4369/TCP,25672/TCP rabbitmq-nodes ClusterIP None <none> 4369/TCP,25672/TCP", "oc describe svc/<service-name>", "oc describe svc/rabbitmq Name: rabbitmq Namespace: openstack Labels: app.kubernetes.io/component=rabbitmq app.kubernetes.io/name=rabbitmq app.kubernetes.io/part-of=rabbitmq Annotations: dnsmasq.network.openstack.org/hostname: rabbitmq.openstack.svc metallb.universe.tf/address-pool: internalapi metallb.universe.tf/ip-allocated-from-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.85 Selector: app.kubernetes.io/name=rabbitmq Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 172.30.21.129 IPs: 172.30.21.129 LoadBalancer Ingress: 172.17.0.85 Port: amqp 5672/TCP TargetPort: 5672/TCP NodePort: amqp 31952/TCP Endpoints: 192.168.16.43:5672,192.168.20.69:5672,192.168.24.53:5672 Port: management 15672/TCP TargetPort: 15672/TCP NodePort: management 30111/TCP Endpoints: 192.168.16.43:15672,192.168.20.69:15672,192.168.24.53:15672 Port: prometheus 15692/TCP TargetPort: 15692/TCP NodePort: prometheus 30081/TCP Endpoints: 192.168.16.43:15692,192.168.20.69:15692,192.168.24.53:15692 Session Affinity: None External Traffic Policy: Cluster Events: <none>", "oc delete pod/rabbitmq-server-1 pod \"rabbitmq-server-1\" deleted", "oc get pods |grep rabbit rabbitmq-cell1-server-0 1/1 Running 0 4h20m rabbitmq-cell1-server-1 1/1 Running 0 4h20m rabbitmq-cell1-server-2 1/1 Running 0 4h20m rabbitmq-server-0 1/1 Running 0 4h20m rabbitmq-server-1 0/1 Init:0/1 0 2s rabbitmq-server-2 1/1 Running 0 4h20m", "oc get pods |grep rabbit rabbitmq-cell1-server-0 1/1 Running 0 4h23m rabbitmq-cell1-server-1 1/1 Running 0 4h23m rabbitmq-cell1-server-2 1/1 Running 0 4h23m rabbitmq-server-0 1/1 Running 0 4h23m rabbitmq-server-1 1/1 Running 0 3m8s rabbitmq-server-2 1/1 Running 0 4h23m" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/monitoring_high_availability_services/proc_investigating-the-rhoso-ha-services_ha-monitoring
24.3.4. Apache HTTP Server or Sendmail Stops Responding During Startup
24.3.4. Apache HTTP Server or Sendmail Stops Responding During Startup If Apache HTTP Server ( httpd ) or Sendmail stops responding during startup, make sure the following line is in the /etc/hosts file:
[ "127.0.0.1 localhost.localdomain localhost" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch24s03s04
7.4. Laser Printers
7.4. Laser Printers An older technology than inkjet, laser printers are another popular alternative to legacy impact printing. Laser printers are known for their high volume output and low cost-per-page. Laser printers are often deployed in enterprises as a workgroup or departmental print center, where performance, durability, and output requirements are a priority. Because laser printers service these needs so readily (and at a reasonable cost-per-page), the technology is widely regarded as the workhorse of enterprise printing. Laser printers share much of the same technology as photocopiers. Rollers pull a sheet of paper from a paper tray and through a charge roller , which gives the paper an electrostatic charge. At the same time, a printing drum is given the opposite charge. The surface of the drum is then scanned by a laser, discharging the drum's surface and leaving only those points corresponding to the desired text and image with a charge. This charge is then used to force toner to adhere to the drum's surface. The paper and drum are then brought into contact; their differing charges cause the toner to adhere to the paper. Finally, the paper travels between fusing rollers , which heat the paper and melt the toner, fusing it onto the paper's surface. 7.4.1. Color Laser Printers Color laser printers aim to combine the best features of laser and inkjet technology into a multi-purpose printer package. The technology is based on traditional monochrome laser printing, but uses additional components to create color images and documents. Instead of using black toner only, color laser printers use a CMYK toner combination. The print drum either rotates each color and lays the toner down one color at a time, or lays all four colors down onto a plate and then passes the paper through the drum, transferring the complete image onto the paper. Color laser printers also employ fuser oil along with the heated fusing rolls, which further bonds the color toner to the paper and can give varying degrees of gloss to the finished image. Because of their increased features, color laser printers are typically twice (or several times) as expensive as monochrome laser printers. In calculating the total cost of ownership with respect to printing resources, some administrators may wish to separate monochrome (text) and color (image) functionality to a dedicated monochrome laser printer and a dedicated color laser (or inkjet) printer, respectively.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-printers-types-laser
10.6. Default Modules
10.6. Default Modules The Apache HTTP Server is distributed with a number of modules. By default, the following modules are installed and enabled with the httpd package in Red Hat Enterprise Linux 4.5.0: Additionally, the following modules are available by installing additional packages:
[ "mod_access mod_actions mod_alias mod_asis mod_auth mod_auth_anon mod_auth_dbm mod_auth_digest mod_auth_ldap mod_autoindex mod_cache mod_cern_meta mod_cgi mod_dav mod_dav_fs mod_deflate mod_dir mod_disk_cache mod_env mod_expires mod_ext_filter mod_file_cache mod_headers mod_imap mod_include mod_info mod_ldap mod_log_config mod_logio mod_mem_cache mod_mime mod_mime_magic mod_negotiation mod_proxy mod_proxy_connect mod_proxy_ftp mod_proxy_http mod_rewrite mod_setenvif mod_speling mod_status mod_suexec mod_unique_id mod_userdir mod_usertrack mod_vhost_alias", "mod_auth_kerb mod_auth_mysql mod_auth_pgsql mod_authz_ldap mod_dav_svn mod_jk2 mod_perl mod_python mod_ssl php" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-apache-defaultmods
Scalability and performance
Scalability and performance OpenShift Container Platform 4.18 Scaling your OpenShift Container Platform cluster and tuning performance in production environments Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/index
Role APIs
Role APIs OpenShift Container Platform 4.15 Reference guide for role APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/role_apis/index
6.4. Displaying Constraints
6.4. Displaying Constraints There are several commands you can use to display constraints that have been configured. The following command lists all current location, order, and colocation constraints. The following command lists all current location constraints. If resources is specified, location constraints are displayed per resource. This is the default behavior. If nodes is specified, location constraints are displayed per node. If specific resources or nodes are specified, then only information about those resources or nodes is displayed. The following command lists all current ordering constraints. If the --full option is specified, the internal constraint IDs are also displayed. The following command lists all current colocation constraints. If the --full option is specified, the internal constraint IDs are also displayed. The following command lists the constraints that reference specific resources.
[ "pcs constraint list|show", "pcs constraint location [show resources|nodes [ specific nodes | resources ]] [--full]", "pcs constraint order show [--full]", "pcs constraint colocation show [--full]", "pcs constraint ref resource" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-constraintlist-haar
5.5.5. Deleting a GULM Client-only Member
5.5.5. Deleting a GULM Client-only Member The procedure for deleting a member from a running GULM cluster depends on the type of member to be removed: either a node that functions only as a GULM client (a cluster member capable of running applications, but not eligible to function as a GULM lock server) or a node that functions as a GULM lock server. The procedure in this section describes how to delete a member that functions only as a GULM client. To remove a member that functions as a GULM lock server, refer to Section 5.5.6, "Adding or Deleting a GULM Lock Server Member" . To delete a member functioning only as a GULM client from an existing cluster that is currently in operation, follow these steps: At one of the running nodes (not at a node to be deleted), start system-config-cluster (refer to Section 5.2, "Starting the Cluster Configuration Tool " ). At the Cluster Status Tool tab, under Services , disable or relocate each service that is running on the node to be deleted. Stop the cluster software on the node to be deleted by running the following commands at that node in this order: service rgmanager stop , if the cluster is running high-availability services ( rgmanager ) service gfs stop , if you are using Red Hat GFS service clvmd stop , if CLVM has been used to create clustered volumes service lock_gulmd stop service ccsd stop At system-config-cluster (running on a node that is not to be deleted), in the Cluster Configuration Tool tab, delete the member as follows: If necessary, click the triangle icon to expand the Cluster Nodes property. Select the cluster node to be deleted. At the bottom of the right frame (labeled Properties ), click the Delete Node button. Clicking the Delete Node button causes a warning dialog box to be displayed requesting confirmation of the deletion ( Figure 5.8, "Confirm Deleting a Member" ). Figure 5.8. Confirm Deleting a Member At that dialog box, click Yes to confirm deletion. Propagate the updated configuration by clicking the Send to Cluster button. (Propagating the updated configuration automatically saves the configuration.) Stop the cluster software on the remaining running nodes by running the following commands at each node in this order: service rgmanager stop , if the cluster is running high-availability services ( rgmanager ) service gfs stop , if you are using Red Hat GFS service clvmd stop , if CLVM has been used to create clustered volumes service lock_gulmd stop service ccsd stop Start cluster software on all remaining cluster nodes by running the following commands in this order: service ccsd start service lock_gulmd start service clvmd start , if CLVM has been used to create clustered volumes service gfs start , if you are using Red Hat GFS service rgmanager start , if the cluster is running high-availability services ( rgmanager ) At system-config-cluster (running on a node that was not deleted), in the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected. Note Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 5.1, "Configuration Tasks" .
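For reference, the stop sequence from the procedure above can be run on the member being deleted as a short script; omit the rgmanager, gfs, and clvmd lines if those services are not in use on your cluster:
# Stop the cluster software on the node being removed, in this order
service rgmanager stop
service gfs stop
service clvmd stop
service lock_gulmd stop
service ccsd stop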
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-del-member-gulm-client-CA
Chapter 4. Registering hosts and setting up host integration
Chapter 4. Registering hosts and setting up host integration You must register hosts that have not been provisioned through Satellite to be able to manage them with Satellite. You can register hosts through Satellite Server or Capsule Server. Note that the entitlement-based subscription model is deprecated and will be removed in a future release. Red Hat recommends that you use the access-based subscription model of Simple Content Access instead. You must also install and configure tools on your hosts, depending on which integration features you want to use. Use the following procedures to install and configure host tools: Section 4.5, "Installing Tracer" Section 4.6, "Installing and configuring Puppet agent during host registration" Section 4.7, "Installing and configuring Puppet agent manually" 4.1. Supported clients in registration Satellite supports the following operating systems and architectures for registration. Supported host operating systems The hosts can use the following operating systems: Red Hat Enterprise Linux 9, 8, 7 Red Hat Enterprise Linux 6 with the ELS Add-On You can register the following hosts for converting to RHEL: CentOS Linux 7 Oracle Linux 7 and 8 Supported host architectures The hosts can use the following architectures: i386 x86_64 s390x ppc_64 4.2. Registration methods You can use the following methods to register hosts to Satellite: Global registration You generate a curl command from Satellite and run this command from an unlimited number of hosts to register them using provisioning templates over the Satellite API. For more information, see Section 4.3, "Registering hosts by using global registration" . By using this method, you can also deploy Satellite SSH keys to hosts during registration to Satellite to enable hosts for remote execution jobs. For more information, see Chapter 12, Configuring and setting up remote jobs . By using this method, you can also configure hosts with Red Hat Insights during registration to Satellite. For more information, see Chapter 9, Monitoring hosts using Red Hat Insights . (Deprecated) Katello CA Consumer You download and install the consumer RPM from satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm on the host and then run subscription-manager . (Deprecated) Bootstrap script You download the bootstrap script from satellite.example.com /pub/bootstrap.py on the host and then run the script. For more information, see Section 4.4, "Registering hosts by using the bootstrap script" . 4.3. Registering hosts by using global registration You can register a host to Satellite by generating a curl command on Satellite and running this command on hosts. This method uses two provisioning templates: Global Registration template and Linux host_init_config default template. That gives you complete control over the host registration process. You can also customize the default templates if you need greater flexibility. For more information, see Section 4.3.4, "Customizing the registration templates" . 4.3.1. Global parameters for registration You can configure the following global parameters by navigating to Configure > Global Parameters : The host_registration_insights parameter is used in the insights snippet. If the parameter is set to true , the registration installs and enables the Red Hat Insights client on the host. If the parameter is set to false , it prevents Satellite and the Red Hat Insights client from uploading Inventory reports to your Red Hat Hybrid Cloud Console. The default value is true . 
When overriding the parameter value, set the parameter type to boolean . The host_packages parameter is for installing packages on the host. The host_registration_remote_execution parameter is used in the remote_execution_ssh_keys snippet. If it is set to true , the registration enables remote execution on the host. The default value is true . The remote_execution_ssh_keys , remote_execution_ssh_user , remote_execution_create_user , and remote_execution_effective_user_method parameters are used in the remote_execution_ssh_keys snippet. For more details, see the snippet. You can navigate to snippets in the Satellite web UI through Hosts > Templates > Provisioning Templates . 4.3.2. Configuring a host for registration Configure your host for registration to Satellite Server or Capsule Server. You can use a configuration management tool to configure multiple hosts at once. Prerequisites The host must be using a supported operating system. For more information, see Section 4.1, "Supported clients in registration" . The system clock on your Satellite Server and any Capsule Servers must be synchronized across the network. If the system clock is not synchronized, SSL certificate verification might fail. For example, you can use the Chrony suite for timekeeping. Procedure Enable and start a time-synchronization tool on your host. The host must be synchronized with the same NTP server as Satellite Server and any Capsule Servers. On Red Hat Enterprise Linux 7 and later: On Red Hat Enterprise Linux 6: Deploy the SSL CA file on your host so that the host can make a secured registration call. Find where Satellite stores the SSL CA file by navigating to Administer > Settings > Authentication and locating the value of the SSL CA file setting. Transfer the SSL CA file to your host securely, for example by using scp . Login to your host by using SSH. Copy the certificate to the truststore: Update the truststore: 4.3.3. Registering a host You can register a host by using registration templates and set up various integration features and host tools during the registration process. Prerequisites Your Satellite account has the Register hosts role assigned or a role with equivalent permissions. You must have root privileges on the host that you want to register. You have configured the host for registration. For more information, see Section 4.3.2, "Configuring a host for registration" . An activation key must be available for the host. For more information, see Managing Activation Keys in Managing content . Optional: If you want to register hosts to Red Hat Insights, you must synchronize the rhel-8-for-x86_64-baseos-rpms and rhel-8-for-x86_64-appstream-rpms repositories and make them available in the activation key that you use. This is required to install the insights-client package on hosts. Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server and enabled in the activation key you use. For more information, see Importing Content in Managing content . This repository is required for the remote execution pull client, Puppet agent, Tracer, and other tools. If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . 
If your Satellite Server or Capsule Server is behind an HTTP proxy, configure the Subscription Manager on your host to use the HTTP proxy for connection. For more information, see How to access Red Hat Subscription Manager (RHSM) through a firewall or proxy in the Red Hat Knowledgebase . Procedure In the Satellite web UI, navigate to Hosts > Register Host . Enter the details for how you want the registered hosts to be configured. On the General tab, in the Activation Keys field, enter one or more activation keys to assign to hosts. Click Generate to generate a curl command. Run the curl command as root on the host that you want to register. After registration completes, any Ansible roles assigned to a host group you specified when configuring the registration template will run on the host. The registration details that you can specify include the following: On the General tab, in the Capsule field, you can select the Capsule to register hosts through. A Capsule behind a load balancer takes precedence over a Capsule selected in the Satellite web UI as the content source of the host. On the General tab, you can select the Insecure option to make the first call insecure. During this first call, the host downloads the CA file from Satellite. The host will use this CA file to connect to Satellite with all future calls making them secure. Red Hat recommends that you avoid insecure calls. If an attacker, located in the network between Satellite and a host, fetches the CA file from the first insecure call, the attacker will be able to access the content of the API calls to and from the registered host and the JSON Web Tokens (JWT). Therefore, if you have chosen to deploy SSH keys during registration, the attacker will be able to access the host using the SSH key. On the Advanced tab, in the Repositories field, you can list repositories to be added before the registration is performed. You do not have to specify repositories if you provide them in an activation key. On the Advanced tab, in the Token lifetime (hours) field, you can change the validity duration of the JSON Web Token (JWT) that Satellite uses for authentication. The duration of this token defines how long the generated curl command works. Note that Satellite applies the permissions of the user who generates the curl command to authorization of hosts. If the user loses or gains additional permissions, the permissions of the JWT change too. Therefore, do not delete, block, or change permissions of the user during the token duration. The scope of the JWTs is limited to the registration endpoints only and cannot be used anywhere else. CLI procedure Use the hammer host-registration generate-command to generate the curl command to register the host. On the host that you want to register, run the curl command as root . For more information, see the Hammer CLI help with hammer host-registration generate-command --help . Ansible procedure Use the redhat.satellite.registration_command module. For more information, see the Ansible module documentation with ansible-doc redhat.satellite.registration_command . API procedure Use the POST /api/registration_commands resource. For more information, see the full API reference at https://satellite.example.com/apidoc/v2.html . 4.3.4. Customizing the registration templates You can customize the registration process by editing the provisioning templates. Note that all default templates in Satellite are locked. 
If you want to customize the registration templates, you must clone the default templates and edit the clones. Note Red Hat only provides support for the original unedited templates. Customized templates do not receive updates released by Red Hat. The registration process uses the following provisioning templates: The Global Registration template contains steps for registering hosts to Satellite. This template renders when hosts access the /register Satellite API endpoint. The Linux host_init_config default template contains steps for initial configuration of hosts after they are registered. Procedure Navigate to Hosts > Templates > Provisioning Templates . Search for the template you want to edit. In the row of the required template, click Clone . Edit the template as needed. For more information, see Appendix B, Template writing reference . Click Submit . Navigate to Administer > Settings > Provisioning . Change the following settings as needed: Point the Default Global registration template setting to your custom global registration template, Point the Default 'Host initial configuration' template setting to your custom initial configuration template. 4.4. Registering hosts by using the bootstrap script You can use the bootstrap script to automate content registration and Puppet configuration. Important The bootstrap script is a deprecated feature. Deprecated functionality is still included in Satellite and continues to be supported. However, it will be removed in a future release of this product and is not recommended for new deployments. Use Section 4.3, "Registering hosts by using global registration" instead. For the most recent list of major functionality that has been deprecated or removed within Satellite, refer to the Deprecated features section of the Satellite release notes. You can use the bootstrap script to register new hosts, or to migrate existing hosts from RHN, SAM, RHSM, or another Red Hat Satellite instance. The katello-client-bootstrap package is installed by default on Satellite Server's base operating system. The bootstrap.py script is installed in the /var/www/html/pub/ directory to make it available to hosts at satellite.example.com /pub/bootstrap.py . The script includes documentation in the /usr/share/doc/katello-client-bootstrap- version /README.md file. To use the bootstrap script, you must install it on the host. As the script is only required once, and only for the root user, you can place it in /root or /usr/local/sbin and remove it after use. This procedure uses /root . Prerequisites You have a Satellite user with the permissions required to run the bootstrap script. The examples in this procedure specify the admin user. If this is not acceptable to your security policy, create a new role with the minimum permissions required and add it to the user that will run the script. For more information, see Section 4.4.1, "Setting permissions for the bootstrap script" . You have an activation key for your hosts with the Red Hat Satellite Client 6 repository enabled. For information on configuring activation keys, see Managing Activation Keys in Managing content . You have created a host group. For more information about creating host groups, see Section 3.2, "Creating a host group" . Puppet considerations If a host group is associated with a Puppet environment created inside a Production environment, Puppet fails to retrieve the Puppet CA certificate while registering a host from that host group. 
To create a suitable Puppet environment to be associated with a host group, follow these steps: Manually create a directory: In the Satellite web UI, navigate to Configure > Puppet ENC > Environments . Click Import environment from . The button name includes the FQDN of the internal or external Capsule. Choose the created directory and click Update . Procedure Log in to the host as the root user. Download the script: Make the script executable: Confirm that the script is executable by viewing the help text: On Red Hat Enterprise Linux 8: On other Red Hat Enterprise Linux versions: Enter the bootstrap command with values suitable for your environment. For the --server option, specify the FQDN of Satellite Server or a Capsule Server. For the --location , --organization , and --hostgroup options, use quoted names, not labels, as arguments to the options. For advanced use cases, see Section 4.4.2, "Advanced bootstrap script configuration" . On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: Enter the password of the Satellite user you specified with the --login option. The script sends notices of progress to stdout . When prompted by the script, approve the host's Puppet certificate. In the Satellite web UI, navigate to Infrastructure > Capsules and find the Satellite or Capsule Server you specified with the --server option. From the list in the Actions column, select Certificates . In the Actions column, click Sign to approve the host's Puppet certificate. Return to the host to see the remainder of the bootstrap process completing. In the Satellite web UI, navigate to Hosts > All Hosts and ensure that the host is connected to the correct host group. Optional: After the host registration is complete, remove the script: 4.4.1. Setting permissions for the bootstrap script Use this procedure to configure a Satellite user with the permissions required to run the bootstrap script. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Select an existing user by clicking the required Username . A new pane opens with tabs to modify information about the selected user. Alternatively, create a new user specifically for the purpose of running this script. Click the Roles tab. Select Edit hosts and Viewer from the Roles list. Important The Edit hosts role allows the user to edit and delete hosts as well as being able to add hosts. If this is not acceptable to your security policy, create a new role with the following permissions and assign it to the user: view_organizations view_locations view_domains view_hostgroups view_hosts view_architectures view_ptables view_operatingsystems create_hosts Click Submit . CLI procedure Create a role with the minimum permissions required by the bootstrap script. This example creates a role with the name Bootstrap : Assign the new role to an existing user: Alternatively, you can create a new user and assign this new role to them. For more information on creating users with Hammer, see Managing Users and Roles in Administering Red Hat Satellite . 4.4.2. Advanced bootstrap script configuration This section has more examples for using the bootstrap script to register or migrate a host. Warning These examples specify the admin Satellite user. If this is not acceptable to your security policy, create a new role with the minimum permissions required by the bootstrap script. 
For more information, see Section 4.4.1, "Setting permissions for the bootstrap script" . 4.4.2.1. Migrating a host from one Satellite to another Satellite Use the script with --force to remove the katello-ca-consumer-* packages from the old Satellite and install the katello-ca-consumer-* packages on the new Satellite. Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.2. Migrating a host from Red Hat Network (RHN) or Satellite 5 to Satellite The bootstrap script detects the presence of /etc/syconfig/rhn/systemid and a valid connection to RHN as an indicator that the system is registered to a legacy platform. The script then calls rhn-classic-migrate-to-rhsm to migrate the system from RHN. By default, the script does not delete the system's legacy profile due to auditing reasons. To remove the legacy profile, use --legacy-purge , and use --legacy-login to supply a user account that has appropriate permissions to remove a profile. Enter the user account password when prompted. Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.3. Registering a host to Satellite without Puppet By default, the bootstrap script configures the host for content management and configuration management. If you have an existing configuration management system and do not want to install Puppet on the host, use --skip-puppet . Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.4. Registering a host to Satellite for content management only To register a system as a content host, and omit the provisioning and configuration management functions, use --skip-foreman . Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.5. Changing the method the bootstrap script uses to download the consumer RPM By default, the bootstrap script uses HTTP to download the consumer RPM from http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm . In some environments, you might want to allow HTTPS only between the host and Satellite. Use --download-method to change the download method from HTTP to HTTPS. Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.6. Providing the host's IP address to Satellite On hosts with multiple interfaces or multiple IP addresses on one interface, you might need to override the auto-detection of the IP address and provide a specific IP address to Satellite. Use --ip . Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.7. Enabling remote execution on the host Use --rex and --rex-user to enable remote execution and add the required SSH keys for the specified user. Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.8. Creating a domain for a host during registration To create a host record, the DNS domain of a host needs to exist in Satellite prior to running the script. If the domain does not exist, add it using --add-domain . Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.9. 
Providing an alternative FQDN for the host If the host's host name is not an FQDN, or is not RFC-compliant (containing a character such as an underscore), the script will fail at the host name validation stage. If you cannot update the host to use an FQDN that is accepted by Satellite, you can use the bootstrap script to specify an alternative FQDN. Procedure Set create_new_host_when_facts_are_uploaded and create_new_host_when_report_is_uploaded to false using Hammer: Use --fqdn to specify the FQDN that will be reported to Satellite: On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.5. Installing Tracer Use this procedure to install Tracer on Red Hat Satellite and access Traces. Tracer displays a list of services and applications that are outdated and need to be restarted. Traces is the output generated by Tracer in the Satellite web UI. Prerequisites The host is registered to Red Hat Satellite. Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . Procedure On the content host, install the katello-host-tools-tracer RPM package: Enter the following command: In the Satellite web UI, navigate to Hosts > All Hosts , then click the required host name. Click the Traces tab to view Traces. If it is not installed, an Enable Traces button initiates a remote execution job that installs the package. 4.6. Installing and configuring Puppet agent during host registration You can install and configure the Puppet agent on the host during registration. A configured Puppet agent is required on the host for Puppet integration with your Satellite. For more information about Puppet, see Managing configurations using Puppet integration . Prerequisites Puppet must be enabled in your Satellite. For more information, see Enabling Puppet Integration with Satellite in Managing configurations using Puppet integration . Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server and enabled in the activation key you use. For more information, see Importing Content in Managing content . You have an activation key. For more information, see Managing Activation Keys in Managing content . Procedure In the Satellite web UI, navigate to Configure > Global Parameters to add host parameters globally. Alternatively, you can navigate to Configure > Host Groups and edit or create a host group to add host parameters only to a host group. Enable the Puppet agent using a host parameter in global parameters or a host group. Add a host parameter named enable-puppet7 , select the boolean type, and set the value to true . Specify configuration for the Puppet agent using the following host parameters in global parameters or a host group: Add a host parameter named puppet_server , select the string type, and set the value to the hostname of your Puppet server, such as puppet.example.com . Optional: Add a host parameter named puppet_ca_server , select the string type, and set the value to the hostname of your Puppet CA server, such as puppet-ca.example.com . If puppet_ca_server is not set, the Puppet agent will use the same server as puppet_server . 
Optional: Add a host parameter named puppet_environment , select the string type, and set the value to the Puppet environment you want the host to use. Until the BZ2177730 is resolved, you must use host parameters to specify the Puppet agent configuration even in integrated setups where the Puppet server is a Capsule Server. Navigate to Hosts > Register Host and register your host using an appropriate activation key. For more information, see Registering Hosts in Managing hosts . Navigate to Infrastructure > Capsules . From the list in the Actions column for the required Capsule Server, select Certificates . Click Sign to the right of the required host to sign the SSL certificate for the Puppet agent. 4.7. Installing and configuring Puppet agent manually You can install and configure the Puppet agent on a host manually. A configured Puppet agent is required on the host for Puppet integration with your Satellite. For more information about Puppet, see Managing configurations using Puppet integration . Prerequisites Puppet must be enabled in your Satellite. For more information, see Enabling Puppet Integration with Satellite in Managing configurations using Puppet integration . The host must have a Puppet environment assigned to it. Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . Procedure Log in to the host as the root user. Install the Puppet agent package. On hosts running Red Hat Enterprise Linux 8 and above: On hosts running Red Hat Enterprise Linux 7 and below: Add the Puppet agent to PATH in your current shell using the following script: Configure the Puppet agent. Set the environment parameter to the name of the Puppet environment to which the host belongs: Start the Puppet agent service: Create a certificate for the host: In the Satellite web UI, navigate to Infrastructure > Capsules . From the list in the Actions column for the required Capsule Server, select Certificates . Click Sign to the right of the required host to sign the SSL certificate for the Puppet agent. On the host, run the Puppet agent again: 4.8. Running Ansible roles during host registration You can run Ansible roles when you are registering a host to Satellite. Prerequisites The required Ansible roles have been imported from your Capsule to Satellite. For more information, see Importing Ansible roles and variables in Managing configurations using Ansible integration . Procedure Create a host group with Ansible roles. For more information, see Section 3.2, "Creating a host group" . Register the host by using the host group with assigned Ansible roles. For more information, see Section 4.3.3, "Registering a host" .
[ "systemctl enable --now chronyd", "chkconfig --add ntpd chkconfig ntpd on service ntpd start", "cp My_SSL_CA_file .pem /etc/pki/ca-trust/source/anchors", "update-ca-trust", "mkdir /etc/puppetlabs/code/environments/ example_environment", "curl -O http:// satellite.example.com /pub/bootstrap.py", "chmod +x bootstrap.py", "/usr/libexec/platform-python bootstrap.py -h", "./bootstrap.py -h", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \"", "./bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \"", "rm bootstrap.py", "ROLE='Bootstrap' hammer role create --name \"USDROLE\" hammer filter create --role \"USDROLE\" --permissions view_organizations hammer filter create --role \"USDROLE\" --permissions view_locations hammer filter create --role \"USDROLE\" --permissions view_domains hammer filter create --role \"USDROLE\" --permissions view_hostgroups hammer filter create --role \"USDROLE\" --permissions view_hosts hammer filter create --role \"USDROLE\" --permissions view_architectures hammer filter create --role \"USDROLE\" --permissions view_ptables hammer filter create --role \"USDROLE\" --permissions view_operatingsystems hammer filter create --role \"USDROLE\" --permissions create_hosts", "hammer user add-role --id user_id --role Bootstrap", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --force", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --force", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --legacy-purge --legacy-login rhn-user", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --legacy-purge --legacy-login rhn-user", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --skip-puppet", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --skip-puppet", "/usr/libexec/platform-python bootstrap.py --server satellite.example.com --organization=\" My_Organization \" --activationkey=\" My_Activation_Key \" --skip-foreman", "bootstrap.py --server satellite.example.com --organization=\" My_Organization \" --activationkey=\" My_Activation_Key \" --skip-foreman", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --download-method https", 
"bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --download-method https", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --ip 192.x.x.x", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --ip 192.x.x.x", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --rex --rex-user root", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --rex --rex-user root", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --add-domain", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --add-domain", "hammer settings set --name create_new_host_when_facts_are_uploaded --value false hammer settings set --name create_new_host_when_report_is_uploaded --value false", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --fqdn node100.example.com", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --fqdn node100.example.com", "yum install katello-host-tools-tracer", "katello-tracer-upload", "dnf install puppet-agent", "yum install puppet-agent", ". /etc/profile.d/puppet-agent.sh", "puppet config set server satellite.example.com --section agent puppet config set environment My_Puppet_Environment --section agent", "puppet resource service puppet ensure=running enable=true", "puppet ssl bootstrap", "puppet ssl bootstrap" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/registering_hosts_to_server_managing-hosts
Installing on Azure
Installing on Azure OpenShift Container Platform 4.12 Installing OpenShift Container Platform on Azure Red Hat OpenShift Documentation Team
[ "az login", "az account list --refresh", "[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }", "az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "openshift-install version", "release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component-secret> namespace: <component-namespace>", "apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "grep \"release.openshift.io/feature-set\" *", "0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: 
TechPreviewNoUpgrade", "openshift-install create cluster --dir <installation_directory>", "export RESOURCEGROUP=\"<resource_group>\" \\ 1 LOCATION=\"<location>\" 2", "export KEYVAULT_NAME=\"<keyvault_name>\" \\ 1 KEYVAULT_KEY_NAME=\"<keyvault_key_name>\" \\ 2 DISK_ENCRYPTION_SET_NAME=\"<disk_encryption_set_name>\" 3", "export CLUSTER_SP_ID=\"<service_principal_id>\" 1", "az feature register --namespace \"Microsoft.Compute\" --name \"EncryptionAtHost\"", "az feature show --namespace Microsoft.Compute --name EncryptionAtHost", "az provider register -n Microsoft.Compute", "az group create --name USDRESOURCEGROUP --location USDLOCATION", "az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION --enable-purge-protection true", "az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME --protection software", "KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query \"[id]\" -o tsv)", "KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name USDKEYVAULT_KEY_NAME --query \"[key.kid]\" -o tsv)", "az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL", "DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[identity.principalId]\" -o tsv)", "az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id USDDES_IDENTITY --key-permissions wrapkey unwrapkey get", "DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[id]\" -o tsv)", "az role assignment create --assignee USDCLUSTER_SP_ID --role \"<reader_role>\" \\ 1 --scope USDDES_RESOURCE_ID -o jsonc", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: azure: type: Standard_D4s_v5 osImage: publisher: redhat offer: rh-ocp-worker sku: rh-ocp-worker version: 4.8.2021122100 replicas: 3", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: 
disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 15 fips: false 16 sshKey: ssh-ed25519 AAAA... 17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "az identity list --resource-group \"<existing_resource_group>\"", "az group list", "az identity list --resource-group \"<installer_created_resource_group>\"", "az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2", "az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2", "az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv", "az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role <privileged_role> \\ 2 --scope <disk_encryption_set_id> \\ 3", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "oc create -f storage-class-definition.yaml", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: 
example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "az identity list --resource-group \"<existing_resource_group>\"", "az group list", "az identity list --resource-group \"<installer_created_resource_group>\"", "az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2", "az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2", "az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv", "az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role <privileged_role> \\ 2 --scope <disk_encryption_set_id> \\ 3", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "oc create -f storage-class-definition.yaml", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 
controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "az identity list --resource-group \"<existing_resource_group>\"", "az group list", "az identity list --resource-group \"<installer_created_resource_group>\"", "az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2", "az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2", "az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv", "az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role <privileged_role> \\ 2 --scope <disk_encryption_set_id> \\ 3", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "oc create -f storage-class-definition.yaml", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id 
type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: UserDefinedRouting 19 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "az identity list --resource-group \"<existing_resource_group>\"", "az group list", "az identity list --resource-group \"<installer_created_resource_group>\"", "az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2", "az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2", "az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv", "az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role <privileged_role> \\ 2 --scope <disk_encryption_set_id> \\ 3", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "oc create -f storage-class-definition.yaml", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", 
"Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: usgovvirginia resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: UserDefinedRouting 18 cloudName: AzureUSGovernmentCloud 19 pullSecret: '{\"auths\": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "az identity list --resource-group \"<existing_resource_group>\"", "az group list", "az identity list --resource-group \"<installer_created_resource_group>\"", "az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2", "az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2", "az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv", "az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role 'Contributor' \\// --scope <disk_encryption_set_id> \\ 2", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "oc create -f storage-class-definition.yaml", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "az login", "az account list --refresh", "[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. 
For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }", "az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "\"plan\" : { \"name\": \"rh-ocp-worker\", \"product\": \"rh-ocp-worker\", \"publisher\": \"redhat\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"storageProfile\": { \"imageReference\": { \"offer\": \"rh-ocp-worker\", \"publisher\": \"redhat\", \"sku\": \"rh-ocp-worker\", \"version\": \"4.8.2021122100\" } } }", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5", "export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "export INFRA_ID=<infra_id> 1", "export RESOURCE_GROUP=<resource_group> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
β”œβ”€β”€ auth β”‚ β”œβ”€β”€ kubeadmin-password β”‚ └── kubeconfig β”œβ”€β”€ bootstrap.ign β”œβ”€β”€ master.ign β”œβ”€β”€ metadata.json └── worker.ign", "az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}", "az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity", "export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`", "export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`", "az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"", "az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role <custom_role> \\ 1 --scope \"USD{RESOURCE_GROUP_ID}\"", "az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS", "export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`", "export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.\"rhel-coreos-extensions\".\"azure-disk\".url'`", "az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}", "az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"", "az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}", "az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"", "az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}", "az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1", "az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : 
\"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }", "export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2", "{ \"USDschema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\", \"contentVersion\": \"1.0.0.0\", \"parameters\": { \"architecture\": { \"type\": \"string\", \"metadata\": { \"description\": \"The architecture of the Virtual Machines\" }, \"defaultValue\": \"x64\", \"allowedValues\": [ \"Arm64\", \"x64\" ] }, \"baseName\": { \"type\": \"string\", \"minLength\": 1, \"metadata\": { \"description\": \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"storageAccount\": { \"type\": \"string\", \"metadata\": { \"description\": \"The Storage Account name\" } }, \"vhdBlobURL\": { \"type\": \"string\", \"metadata\": { \"description\": \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\": { \"location\": \"[resourceGroup().location]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\": \"[parameters('baseName')]\", \"imageNameGen2\": \"[concat(parameters('baseName'), '-gen2')]\", \"imageRelease\": \"1.0.0\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"Microsoft.Compute/galleries\", \"name\": \"[variables('galleryName')]\", \"location\": \"[variables('location')]\", \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageName')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V1\", \"identifier\": { \"offer\": \"rhcos\", \"publisher\": \"RedHat\", \"sku\": \"basic\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageName')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": 
\"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] }, { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageNameGen2')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V2\", \"identifier\": { \"offer\": \"rhcos-gen2\", \"publisher\": \"RedHat-gen2\", \"sku\": \"gen2\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageNameGen2')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] } ] } ] }", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2", "export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", 
\"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"public-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, 
\"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }", "bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`", "export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`", "export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { 
\"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"clusterNsgName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", 
\"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }", "export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"defaultValue\" : \"\", \"metadata\" : { \"description\" : \"unused\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : 
\"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", 
\"properties\": { \"primary\": false } } ] } } } ] }", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), 
'-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o 
go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20", "export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300", "az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300", "az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/installing_on_azure/index
Chapter 5. Using Container Storage Interface (CSI)
Chapter 5. Using Container Storage Interface (CSI)

5.1. Configuring CSI volumes

The Container Storage Interface (CSI) allows OpenShift Container Platform to consume storage from storage back ends that implement the CSI interface as persistent storage.

Note: OpenShift Container Platform 4.7 supports version 1.2.0 of the CSI specification.

5.1.1. CSI Architecture

CSI drivers are typically shipped as container images. These containers are not aware of the OpenShift Container Platform cluster where they run. To use a CSI-compatible storage back end in OpenShift Container Platform, the cluster administrator must deploy several components that serve as a bridge between OpenShift Container Platform and the storage driver.

The following diagram provides a high-level overview of the components running in pods in the OpenShift Container Platform cluster. It is possible to run multiple CSI drivers for different storage back ends. Each driver needs its own external controllers deployment and a daemon set with the driver and CSI registrar.

5.1.1.1. External CSI controllers

External CSI controllers is a deployment that deploys one or more pods with five containers:

- A snapshotter container, which watches VolumeSnapshot and VolumeSnapshotContent objects and is responsible for the creation and deletion of VolumeSnapshotContent objects.
- A resizer container, a sidecar that watches for PersistentVolumeClaim updates and triggers ControllerExpandVolume operations against a CSI endpoint when you request more storage on a PersistentVolumeClaim object.
- An external CSI attacher container, which translates attach and detach calls from OpenShift Container Platform to the respective ControllerPublish and ControllerUnpublish calls to the CSI driver.
- An external CSI provisioner container, which translates provision and delete calls from OpenShift Container Platform to the respective CreateVolume and DeleteVolume calls to the CSI driver.
- A CSI driver container.

The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod.

Note: The attach, detach, provision, and delete operations typically require the CSI driver to use credentials to the storage back end. Run the CSI controller pods on infrastructure nodes so the credentials are never leaked to user processes, even in the event of a catastrophic security breach on a compute node.

Note: The external attacher must also run for CSI drivers that do not support third-party attach or detach operations. The external attacher will not issue any ControllerPublish or ControllerUnpublish operations to the CSI driver. However, it still must run to implement the necessary OpenShift Container Platform attachment API.

5.1.1.2. CSI driver daemon set

The CSI driver daemon set runs a pod on every node, which allows OpenShift Container Platform to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers:

- A CSI driver registrar, which registers the CSI driver into the openshift-node service running on the node. The openshift-node process running on the node then connects directly with the CSI driver using the UNIX Domain Socket available on the node.
- A CSI driver. The CSI driver deployed on the node should have as few credentials to the storage back end as possible.
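For illustration, the following is a minimal sketch of how such a CSI node daemon set is typically declared. It is not taken from this document: the driver name example.csi.vendor.com, the object names, the image references and tags, and the host paths are hypothetical placeholders, and a real driver (or its Operator) ships its own manifests with vendor-specific settings.

# Illustrative only: a typical CSI node plugin daemon set with a registrar
# sidecar and the driver container sharing a UNIX Domain Socket.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-csi-driver-node        # hypothetical name
spec:
  selector:
    matchLabels:
      app: example-csi-driver-node
  template:
    metadata:
      labels:
        app: example-csi-driver-node
    spec:
      containers:
      - name: csi-driver-registrar     # registers the driver socket with the node
        image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0
        args:
        - --csi-address=/csi/csi.sock
        - --kubelet-registration-path=/var/lib/kubelet/plugins/example.csi.vendor.com/csi.sock
        volumeMounts:
        - name: plugin-dir
          mountPath: /csi
        - name: registration-dir
          mountPath: /registration
      - name: csi-driver               # the vendor driver; needs privileges to perform mounts
        image: example.registry/example-csi-driver:latest
        args:
        - --endpoint=unix:///csi/csi.sock
        securityContext:
          privileged: true
        volumeMounts:
        - name: plugin-dir
          mountPath: /csi
        - name: kubelet-dir
          mountPath: /var/lib/kubelet
          mountPropagation: Bidirectional
      volumes:
      - name: plugin-dir               # socket directory shared by registrar and driver
        hostPath:
          path: /var/lib/kubelet/plugins/example.csi.vendor.com/
          type: DirectoryOrCreate
      - name: registration-dir
        hostPath:
          path: /var/lib/kubelet/plugins_registry/
          type: Directory
      - name: kubelet-dir              # makes mounts performed by the driver visible to the node
        hostPath:
          path: /var/lib/kubelet
          type: Directory

In practice you rarely write this by hand; the sketch only shows the registrar and driver running together in a per-node pod and communicating over a shared socket, as described above.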
OpenShift Container Platform will only use the node plug-in set of CSI calls such as NodePublish / NodeUnpublish and NodeStage / NodeUnstage , if these calls are implemented. 5.1.2. CSI drivers supported by OpenShift Container Platform OpenShift Container Platform installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plug-ins. To create CSI-provisioned persistent volumes that mount to these supported storage assets, OpenShift Container Platform installs the necessary CSI driver Operator, the CSI driver, and the required storage class by default. For more details about the default namespace of the Operator and driver, see the documentation for the specific CSI Driver Operator. The following table describes the CSI drivers that are installed with OpenShift Container Platform and which CSI features they support, such as volume snapshots, cloning, and resize. Table 5.1. Supported CSI drivers and features in OpenShift Container Platform CSI driver CSI volume snapshots CSI cloning CSI resize AWS EBS (Tech Preview) βœ… - βœ… Google Cloud Platform (GCP) persistent disk (PD) (Tech Preview) βœ… - βœ… OpenStack Cinder βœ… βœ… βœ… OpenShift Container Storage βœ… βœ… βœ… OpenStack Manila βœ… - - Red Hat Virtualization (oVirt) - - - Important If your CSI driver is not listed in the preceding table, you must follow the installation instructions provided by your CSI storage vendor to use their supported CSI features. 5.1.3. Dynamic provisioning Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage back end. The provider of the CSI driver should document how to create a storage class in OpenShift Container Platform and the parameters available for configuration. The created storage class can be configured to enable dynamic provisioning. Procedure Create a default storage class that ensures all PVCs that do not require any special storage class are provisioned by the installed CSI driver. # oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: "true" provisioner: <provisioner-name> 2 parameters: EOF 1 The name of the storage class that will be created. 2 The name of the CSI driver that has been installed 5.1.4. Example using the CSI driver The following example installs a default MySQL template without any changes to the template. Prerequisites The CSI driver has been deployed. A storage class has been created for dynamic provisioning. Procedure Create the MySQL template: # oc new-app mysql-persistent Example output --> Deploying template "openshift/mysql-persistent" to project default ... # oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s 5.2. CSI inline ephemeral volumes Container Storage Interface (CSI) inline ephemeral volumes allow you to define a Pod spec that creates inline ephemeral volumes when a pod is deployed and delete them when a pod is destroyed. This feature is only available with supported Container Storage Interface (CSI) drivers. Important CSI inline ephemeral volumes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 5.2.1. Overview of CSI inline ephemeral volumes Traditionally, volumes that are backed by Container Storage Interface (CSI) drivers can only be used with a PersistentVolume and PersistentVolumeClaim object combination. This feature allows you to specify CSI volumes directly in the Pod specification, rather than in a PersistentVolume object. Inline volumes are ephemeral and do not persist across pod restarts. 5.2.1.1. Support limitations By default, OpenShift Container Platform supports CSI inline ephemeral volumes with these limitations: Support is only available for CSI drivers. In-tree and FlexVolumes are not supported. OpenShift Container Platform does not include any CSI drivers. Use the CSI drivers provided by community or storage vendors . Follow the installation instructions provided by the CSI driver. CSI drivers might not have implemented the inline volume functionality, including Ephemeral capacity. For details, see the CSI driver documentation. 5.2.2. Embedding a CSI inline ephemeral volume in the pod specification You can embed a CSI inline ephemeral volume in the Pod specification in OpenShift Container Platform. At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods so that the CSI driver handles all phases of volume operations as pods are created and destroyed. Procedure Create the Pod object definition and save it to a file. Embed the CSI inline ephemeral volume in the file. my-csi-app.yaml kind: Pod apiVersion: v1 metadata: name: my-csi-app spec: containers: - name: my-frontend image: busybox volumeMounts: - mountPath: "/data" name: my-csi-inline-vol command: [ "sleep", "1000000" ] volumes: 1 - name: my-csi-inline-vol csi: driver: inline.storage.kubernetes.io volumeAttributes: foo: bar 1 The name of the volume that is used by pods. Create the object definition file that you saved in the step. USD oc create -f my-csi-app.yaml 5.3. CSI volume snapshots This document describes how to use volume snapshots with supported Container Storage Interface (CSI) drivers to help protect against data loss in OpenShift Container Platform. Familiarity with persistent volumes is suggested. 5.3.1. Overview of CSI volume snapshots A snapshot represents the state of the storage volume in a cluster at a particular point in time. Volume snapshots can be used to provision a new volume. OpenShift Container Platform supports CSI volume snapshots by default. However, a specific CSI driver is required. With CSI volume snapshots, a cluster administrator can: Deploy a third-party CSI driver that supports snapshots. Create a new persistent volume claim (PVC) from an existing volume snapshot. Take a snapshot of an existing PVC. Restore a snapshot as a different PVC. Delete an existing volume snapshot. With CSI volume snapshots, an app developer can: Use volume snapshots as building blocks for developing application- or cluster-level storage backup solutions. Rapidly rollback to a development version. Use storage more efficiently by not having to make a full copy each time. Be aware of the following when using volume snapshots: Support is only available for CSI drivers. In-tree and FlexVolumes are not supported. 
OpenShift Container Platform only ships with select CSI drivers. For CSI drivers that are not provided by an OpenShift Container Platform Driver Operator, it is recommended to use the CSI drivers provided by community or storage vendors . Follow the installation instructions provided by the CSI driver. CSI drivers may or may not have implemented the volume snapshot functionality. CSI drivers that have provided support for volume snapshots will likely use the csi-external-snapshotter sidecar. See documentation provided by the CSI driver for details. 5.3.2. CSI snapshot controller and sidecar OpenShift Container Platform provides a snapshot controller that is deployed into the control plane. In addition, your CSI driver vendor provides the CSI snapshot sidecar as a helper container that is installed during the CSI driver installation. The CSI snapshot controller and sidecar provide volume snapshotting through the OpenShift Container Platform API. These external components run in the cluster. The external controller is deployed by the CSI Snapshot Controller Operator. 5.3.2.1. External controller The CSI snapshot controller binds VolumeSnapshot and VolumeSnapshotContent objects. The controller manages dynamic provisioning by creating and deleting VolumeSnapshotContent objects. 5.3.2.2. External sidecar Your CSI driver vendor provides the csi-external-snapshotter sidecar. This is a separate helper container that is deployed with the CSI driver. The sidecar manages snapshots by triggering CreateSnapshot and DeleteSnapshot operations. Follow the installation instructions provided by your vendor. 5.3.3. About the CSI Snapshot Controller Operator The CSI Snapshot Controller Operator runs in the openshift-cluster-storage-operator namespace. It is installed by the Cluster Version Operator (CVO) in all clusters by default. The CSI Snapshot Controller Operator installs the CSI snapshot controller, which runs in the openshift-cluster-storage-operator namespace. 5.3.3.1. Volume snapshot CRDs During OpenShift Container Platform installation, the CSI Snapshot Controller Operator creates the following snapshot custom resource definitions (CRDs) in the snapshot.storage.k8s.io/v1 API group: VolumeSnapshotContent A snapshot taken of a volume in the cluster that has been provisioned by a cluster administrator. Similar to the PersistentVolume object, the VolumeSnapshotContent CRD is a cluster resource that points to a real snapshot in the storage back end. For manually pre-provisioned snapshots, a cluster administrator creates a number of VolumeSnapshotContent CRDs. These carry the details of the real volume snapshot in the storage system. The VolumeSnapshotContent CRD is not namespaced and is for use by a cluster administrator. VolumeSnapshot Similar to the PersistentVolumeClaim object, the VolumeSnapshot CRD defines a developer request for a snapshot. The CSI Snapshot Controller Operator runs the CSI snapshot controller, which handles the binding of a VolumeSnapshot CRD with an appropriate VolumeSnapshotContent CRD. The binding is a one-to-one mapping. The VolumeSnapshot CRD is namespaced. A developer uses the CRD as a distinct request for a snapshot. VolumeSnapshotClass Allows a cluster administrator to specify different attributes belonging to a VolumeSnapshot object. These attributes may differ among snapshots taken of the same volume on the storage system, in which case they would not be expressed by using the same storage class of a persistent volume claim. 
The VolumeSnapshotClass CRD defines the parameters for the csi-external-snapshotter sidecar to use when creating a snapshot. This allows the storage back end to know what kind of snapshot to dynamically create if multiple options are supported. Dynamically provisioned snapshots use the VolumeSnapshotClass CRD to specify storage-provider-specific parameters to use when creating a snapshot. The VolumeSnapshotClass CRD is not namespaced and is for use by a cluster administrator to enable global configuration options for their storage back end. 5.3.4. Volume snapshot provisioning There are two ways to provision snapshots: dynamically and manually. 5.3.4.1. Dynamic provisioning Instead of using a preexisting snapshot, you can request that a snapshot be taken dynamically from a persistent volume claim. Parameters are specified using a VolumeSnapshotClass CRD. 5.3.4.2. Manual provisioning As a cluster administrator, you can manually pre-provision a number of VolumeSnapshotContent objects. These carry the real volume snapshot details available to cluster users. 5.3.5. Creating a volume snapshot When you create a VolumeSnapshot object, OpenShift Container Platform creates a volume snapshot. Prerequisites Logged in to a running OpenShift Container Platform cluster. A PVC created using a CSI driver that supports VolumeSnapshot objects. A storage class to provision the storage back end. No pods are using the persistent volume claim (PVC) that you want to take a snapshot of. Note Do not create a volume snapshot of a PVC if a pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Be sure to first tear down a running pod to ensure consistent snapshots. Procedure To dynamically create a volume snapshot: Create a file with the VolumeSnapshotClass object described by the following YAML: volumesnapshotclass.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io 1 deletionPolicy: Delete 1 The name of the CSI driver that is used to create snapshots of this VolumeSnapshotClass object. The name must be the same as the Provisioner field of the storage class that is responsible for the PVC that is being snapshotted. Create the object you saved in the step by entering the following command: USD oc create -f volumesnapshotclass.yaml Create a VolumeSnapshot object: volumesnapshot-dynamic.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: volumeSnapshotClassName: csi-hostpath-snap 1 source: persistentVolumeClaimName: myclaim 2 1 The request for a particular class by the volume snapshot. If the volumeSnapshotClassName setting is absent and there is a default volume snapshot class, a snapshot is created with the default volume snapshot class name. But if the field is absent and no default volume snapshot class exists, then no snapshot is created. 2 The name of the PersistentVolumeClaim object bound to a persistent volume. This defines what you want to create a snapshot of. Required for dynamically provisioning a snapshot. Create the object you saved in the step by entering the following command: USD oc create -f volumesnapshot-dynamic.yaml To manually provision a snapshot: Provide a value for the volumeSnapshotContentName parameter as the source for the snapshot, in addition to defining a volume snapshot class as shown above.
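The object that volumeSnapshotContentName refers to must already exist; a cluster administrator typically pre-creates it. The following is a minimal sketch of such a pre-provisioned VolumeSnapshotContent object; the driver, the namespace, and the snapshotHandle value are assumptions and must match a real snapshot in your storage back end:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: mycontent                  # referenced by volumeSnapshotContentName in the next manifest
spec:
  deletionPolicy: Retain           # keep the back-end snapshot if this object is deleted
  driver: hostpath.csi.k8s.io      # assumption: the same CSI driver as in the earlier examples
  source:
    snapshotHandle: <snapshot-id>  # ID of the existing snapshot on the storage back end
  volumeSnapshotRef:
    name: snapshot-demo            # the VolumeSnapshot object that binds to this content
    namespace: default             # assumption: the namespace of that VolumeSnapshot object
With this object in place, the VolumeSnapshot in the following manifest can bind to it.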
volumesnapshot-manual.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo spec: source: volumeSnapshotContentName: mycontent 1 1 The volumeSnapshotContentName parameter is required for pre-provisioned snapshots. Create the object you saved in the step by entering the following command: USD oc create -f volumesnapshot-manual.yaml Verification After the snapshot has been created in the cluster, additional details about the snapshot are available. To display details about the volume snapshot that was created, enter the following command: USD oc describe volumesnapshot mysnap The following example displays details about the mysnap volume snapshot: volumesnapshot.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: source: persistentVolumeClaimName: myclaim volumeSnapshotClassName: csi-hostpath-snap status: boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1 creationTime: "2020-01-29T12:24:30Z" 2 readyToUse: true 3 restoreSize: 500Mi 1 The pointer to the actual storage content that was created by the controller. 2 The time when the snapshot was created. The snapshot contains the volume content that was available at this indicated time. 3 If the value is set to true , the snapshot can be used to restore as a new PVC. If the value is set to false , the snapshot was created. However, the storage back end needs to perform additional tasks to make the snapshot usable so that it can be restored as a new volume. For example, Amazon Elastic Block Store data might be moved to a different, less expensive location, which can take several minutes. To verify that the volume snapshot was created, enter the following command: USD oc get volumesnapshotcontent The pointer to the actual content is displayed. If the boundVolumeSnapshotContentName field is populated, a VolumeSnapshotContent object exists and the snapshot was created. To verify that the snapshot is ready, confirm that the VolumeSnapshot object has readyToUse: true . 5.3.6. Deleting a volume snapshot You can configure how OpenShift Container Platform deletes volume snapshots. Procedure Specify the deletion policy that you require in the VolumeSnapshotClass object, as shown in the following example: volumesnapshotclass.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io deletionPolicy: Delete 1 1 When deleting the volume snapshot, if the Delete value is set, the underlying snapshot is deleted along with the VolumeSnapshotContent object. If the Retain value is set, both the underlying snapshot and VolumeSnapshotContent object remain. If the Retain value is set and the VolumeSnapshot object is deleted without deleting the corresponding VolumeSnapshotContent object, the content remains. The snapshot itself is also retained in the storage back end. 
Delete the volume snapshot by entering the following command: USD oc delete volumesnapshot <volumesnapshot_name> Example output volumesnapshot.snapshot.storage.k8s.io "mysnapshot" deleted If the deletion policy is set to Retain , delete the volume snapshot content by entering the following command: USD oc delete volumesnapshotcontent <volumesnapshotcontent_name> Optional: If the VolumeSnapshot object is not successfully deleted, enter the following command to remove any finalizers for the leftover resource so that the delete operation can continue: Important Only remove the finalizers if you are confident that there are no existing references from either persistent volume claims or volume snapshot contents to the VolumeSnapshot object. Even with the --force option, the delete operation does not delete snapshot objects until all finalizers are removed. USD oc patch -n USDPROJECT volumesnapshot/USDNAME --type=merge -p '{"metadata": {"finalizers":null}}' Example output volumesnapshotclass.snapshot.storage.k8s.io "csi-ocs-rbd-snapclass" deleted The finalizers are removed and the volume snapshot is deleted. 5.3.7. Restoring a volume snapshot The VolumeSnapshot CRD content can be used to restore the existing volume to a previous state. After your VolumeSnapshot CRD is bound and the readyToUse value is set to true , you can use that resource to provision a new volume that is pre-populated with data from the snapshot. Prerequisites Logged in to a running OpenShift Container Platform cluster. A persistent volume claim (PVC) created using a Container Storage Interface (CSI) driver that supports volume snapshots. A storage class to provision the storage back end. A volume snapshot has been created and is ready to use. Procedure Specify a VolumeSnapshot data source on a PVC as shown in the following: pvc-restore.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim-restore spec: storageClassName: csi-hostpath-sc dataSource: name: mysnap 1 kind: VolumeSnapshot 2 apiGroup: snapshot.storage.k8s.io 3 accessModes: - ReadWriteOnce resources: requests: storage: 1Gi 1 Name of the VolumeSnapshot object representing the snapshot to use as source. 2 Must be set to the VolumeSnapshot value. 3 Must be set to the snapshot.storage.k8s.io value. Create a PVC by entering the following command: USD oc create -f pvc-restore.yaml Verify that the restored PVC has been created by entering the following command: USD oc get pvc A new PVC such as myclaim-restore is displayed. 5.4. CSI volume cloning Volume cloning duplicates an existing persistent volume to help protect against data loss in OpenShift Container Platform. This feature is only available with supported Container Storage Interface (CSI) drivers. You should be familiar with persistent volumes before you provision a CSI volume clone. 5.4.1. Overview of CSI volume cloning A Container Storage Interface (CSI) volume clone is a duplicate of an existing persistent volume at a particular point in time. Volume cloning is similar to volume snapshots, although it is more efficient. For example, a cluster administrator can duplicate a cluster volume by creating another instance of the existing cluster volume. Cloning creates an exact duplicate of the specified volume on the back-end device, rather than creating a new empty volume. After dynamic provisioning, you can use a volume clone just as you would use any standard volume. No new API objects are required for cloning.
The existing dataSource field in the PersistentVolumeClaim object is expanded so that it can accept the name of an existing PersistentVolumeClaim in the same namespace. 5.4.1.1. Support limitations By default, OpenShift Container Platform supports CSI volume cloning with these limitations: The destination persistent volume claim (PVC) must exist in the same namespace as the source PVC. The source and destination storage class must be the same. Support is only available for CSI drivers. In-tree and FlexVolumes are not supported. OpenShift Container Platform does not include any CSI drivers. Use the CSI drivers provided by community or storage vendors . Follow the installation instructions provided by the CSI driver. CSI drivers might not have implemented the volume cloning functionality. For details, see the CSI driver documentation. OpenShift Container Platform 4.7 supports version 1.1.0 of the CSI specification . 5.4.2. Provisioning a CSI volume clone When you create a cloned persistent volume claim (PVC) API object, you trigger the provisioning of a CSI volume clone. The clone pre-populates with the contents of another PVC, adhering to the same rules as any other persistent volume. The one exception is that you must add a dataSource that references an existing PVC in the same namespace. Prerequisites You are logged in to a running OpenShift Container Platform cluster. Your PVC is created using a CSI driver that supports volume cloning. Your storage back end is configured for dynamic provisioning. Cloning support is not available for static provisioners. Procedure To clone a PVC from an existing PVC: Create and save a file with the PersistentVolumeClaim object described by the following YAML: pvc-clone.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1-clone namespace: mynamespace spec: storageClassName: csi-cloning 1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: pvc-1 1 The name of the storage class that provisions the storage back end. The default storage class can be used and storageClassName can be omitted in the spec. Create the object you saved in the step by running the following command: USD oc create -f pvc-clone.yaml A new PVC pvc-1-clone is created. Verify that the volume clone was created and is ready by running the following command: USD oc get pvc pvc-1-clone The pvc-1-clone shows that it is Bound . You are now ready to use the newly cloned PVC to configure a pod. Create and save a file with the Pod object described by the YAML. For example: kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: "/var/www/html" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc-1-clone 1 1 The cloned PVC created during the CSI volume cloning operation. The created Pod object is now ready to consume, clone, snapshot, or delete your cloned PVC independently of its original dataSource PVC. 5.5. AWS Elastic Block Store CSI Driver Operator 5.5.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic Block Store (EBS). Important AWS EBS CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to AWS EBS storage assets, OpenShift Container Platform installs the AWS EBS CSI Driver Operator and the AWS EBS CSI driver by default in the openshift-cluster-csi-drivers namespace. The AWS EBS CSI Driver Operator provides a StorageClass by default that you can use to create PVCs. You also have the option to create the AWS EBS StorageClass as described in Persistent storage using AWS Elastic Block Store . The AWS EBS CSI driver enables you to create and mount AWS EBS PVs. Note If you installed the AWS EBS CSI Operator and driver on a OpenShift Container Platform 4.5 cluster, you must uninstall the 4.5 Operator and driver before you update to OpenShift Container Platform 4.7. 5.5.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plug-ins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins. Important OpenShift Container Platform defaults to using an in-tree, or non-CSI, driver to provision AWS EBS storage. This in-tree driver will be removed in a subsequent update of OpenShift Container Platform. Volumes provisioned using the existing in-tree driver are planned for migration to the CSI driver at that time. For information about dynamically provisioning AWS EBS persistent volumes in OpenShift Container Platform, see Persistent storage using AWS Elastic Block Store . Additional resources Persistent storage using AWS Elastic Block Store Configuring CSI volumes 5.6. GCP PD CSI Driver Operator 5.6.1. Overview OpenShift Container Platform can provision persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) persistent disk (PD) storage. Important GCP PD CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage assets, OpenShift Container Platform installs the GCP PD CSI Driver Operator and the GCP PD CSI driver by default in the openshift-cluster-csi-drivers namespace. GCP PD CSI Driver Operator : By default, the Operator provides a storage class that you can use to create PVCs. 
You also have the option to create the GCP PD storage class as described in Persistent storage using GCE Persistent Disk . GCP PD driver : The driver enables you to create and mount GCP PD PVs. Important OpenShift Container Platform defaults to using an in-tree, or non-CSI, driver to provision GCP PD storage. This in-tree driver will be removed in a subsequent update of OpenShift Container Platform. Volumes provisioned using the existing in-tree driver are planned for migration to the CSI driver at that time. 5.6.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plug-ins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins. 5.6.3. GCP PD CSI driver storage class parameters The Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI) driver uses the CSI external-provisioner sidecar as a controller. This is a separate helper container that is deployed with the CSI driver. The sidecar manages persistent volumes (PVs) by triggering the CreateVolume operation. The GCP PD CSI driver uses the csi.storage.k8s.io/fstype parameter key to support dynamic provisioning. The following table describes all the GCP PD CSI storage class parameters that are supported by OpenShift Container Platform. Table 5.2. CreateVolume Parameters Parameter Values Default Description type pd-ssd or pd-standard pd-standard Allows you to choose between standard PVs or solid-state-drive PVs. replication-type none or regional-pd none Allows you to choose between zonal or regional PVs. disk-encryption-kms-key Fully qualified resource identifier for the key to use to encrypt new disks. Empty string Uses customer-managed encryption keys (CMEK) to encrypt new disks. 5.6.4. Creating a custom-encrypted persistent volume When you create a PersistentVolumeClaim object, OpenShift Container Platform provisions a new persistent volume (PV) and creates a PersistentVolume object. You can add a custom encryption key in Google Cloud Platform (GCP) to protect a PV in your cluster by encrypting the newly created PV. For encryption, the newly attached PV that you create uses customer-managed encryption keys (CMEK) on a cluster by using a new or existing Google Cloud Key Management Service (KMS) key. Prerequisites You are logged in to a running OpenShift Container Platform cluster. You have created a Cloud KMS key ring and key version. For more information about CMEK and Cloud KMS resources, see Using customer-managed encryption keys (CMEK) . Procedure To create a custom-encrypted PV, complete the following steps: Create a storage class with the Cloud KMS key. The following example enables dynamic provisioning of encrypted volumes: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: "WaitForFirstConsumer" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1 1 This field must be the resource identifier for the key that will be used to encrypt new disks. Values are case-sensitive. 
For more information about providing key ID values, see Retrieving a resource's ID and Getting a Cloud KMS resource ID . Note You cannot add the disk-encryption-kms-key parameter to an existing storage class. However, you can delete the storage class and recreate it with the same name and a different set of parameters. If you do this, the provisioner of the existing class must be pd.csi.storage.gke.io . After the storage class is created on your OpenShift Container Platform cluster, verify it by using the oc command: USD oc describe storageclass csi-gce-pd-cmek Example output Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: none Create a file named pvc.yaml that references the storage class object that you created in the step: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi Note If you marked the new storage class as default, you can omit the storageClassName field. Apply the PVC on your cluster: USD oc apply -f pvc.yaml Get the status of your PVC and verify that it is created and bound to a newly provisioned PV: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s Note If your storage class has the volumeBindingMode field set to WaitForFirstConsumer , you must create a pod to use the PVC before you can verify it. Your CMEK-protected PV is now ready to use with your OpenShift Container Platform cluster. Additional resources Persistent storage using GCE Persistent Disk Configuring CSI volumes 5.7. OpenStack Cinder CSI Driver Operator 5.7.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for OpenStack Cinder. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to OpenStack Cinder storage assets, OpenShift Container Platform installs the OpenStack Cinder CSI Driver Operator and the OpenStack Cinder CSI driver in the openshift-cluster-csi-drivers namespace. The OpenStack Cinder CSI Driver Operator provides a CSI storage class that you can use to create PVCs. The OpenStack Cinder CSI driver enables you to create and mount OpenStack Cinder PVs. 5.7.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plug-ins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins. 5.7.3. Making OpenStack Cinder CSI the default storage class In OpenShift Container Platform, the default storage class references the in-tree Cinder driver. The storage class will default to referencing OpenStack Cinder CSI in a subsequent update of OpenShift Container Platform.
Volumes provisioned using the existing in-tree storage class are planned for migration to the OpenStack Cinder CSI storage class at that time. The OpenStack Cinder CSI driver uses the cinder.csi.openstack.org parameter key to support dynamic provisioning. To enable OpenStack Cinder CSI provisioning in OpenShift Container Platform, it is recommended that you overwrite the default in-tree storage class with standard-csi . Alternatively, you can create the persistent volume claim (PVC) and specify the storage class as "standard-csi". Procedure Use the following steps to apply the standard-csi storage class by overwriting the default in-tree storage class. List the storage class: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h standard-csi kubernetes.io/cinder Delete WaitForFirstConsumer true 46h Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default storage class, as shown in the following example: USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Make another storage class the default by adding or modifying the annotation as storageclass.kubernetes.io/is-default-class=true . USD oc patch storageclass standard-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Verify that the PVC is now referencing the CSI storage class by default: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h Optional: You can define a new PVC without having to specify the storage class: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cinder-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi A PVC that does not specify a specific storage class is automatically provisioned by using the default storage class. Optional: After the new file has been configured, create it in your cluster: USD oc create -f cinder-claim.yaml Additional resources Configuring CSI volumes 5.8. OpenStack Manila CSI Driver Operator 5.8.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for the OpenStack Manila shared file system service. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to Manila storage assets, OpenShift Container Platform installs the Manila CSI Driver Operator and the Manila CSI driver by default on any OpenStack cluster that has the Manila service enabled. The Manila CSI Driver Operator creates the required storage class that is needed to create PVCs for all available Manila share types. The Operator is installed in the openshift-cluster-csi-drivers namespace. The Manila CSI driver enables you to create and mount Manila PVs. The driver is installed in the openshift-manila-csi-driver namespace. 5.8.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. 
With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plug-ins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins. 5.8.3. Manila CSI Driver Operator limitations The following limitations apply to the Manila Container Storage Interface (CSI) Driver Operator: Only NFS is supported OpenStack Manila supports many network-attached storage protocols, such as NFS, CIFS, and CEPHFS, and these can be selectively enabled in the OpenStack cloud. The Manila CSI Driver Operator in OpenShift Container Platform only supports using the NFS protocol. If NFS is not available and enabled in the underlying OpenStack cloud, you cannot use the Manila CSI Driver Operator to provision storage for OpenShift Container Platform. Snapshots are not supported if the back end is CephFS-NFS To take snapshots of persistent volumes (PVs) and revert volumes to snapshots, you must ensure that the Manila share type that you are using supports these features. A Red Hat OpenStack administrator must enable support for snapshots ( share type extra-spec snapshot_support ) and for creating shares from snapshots ( share type extra-spec create_share_from_snapshot_support ) in the share type associated with the storage class you intend to use. FSGroups are not supported Since Manila CSI provides shared file systems for access by multiple readers and multiple writers, it does not support the use of FSGroups. This is true even for persistent volumes created with the ReadWriteOnce access mode. It is therefore important not to specify the fsType attribute in any storage class that you manually create for use with Manila CSI Driver. Important In Red Hat OpenStack Platform 16.x and 17.x, the Shared File Systems service (Manila) with CephFS through NFS fully supports serving shares to OpenShift Container Platform through the Manila CSI. However, this solution is not intended for massive scale. Be sure to review important recommendations in CephFS NFS Manila-CSI Workload Recommendations for Red Hat OpenStack Platform . 5.8.4. Dynamically provisioning Manila CSI volumes OpenShift Container Platform installs a storage class for each available Manila share type. The YAML files that are created are completely decoupled from Manila and from its Container Storage Interface (CSI) plug-in. As an application developer, you can dynamically provision ReadWriteMany (RWX) storage and deploy pods with applications that safely consume the storage using YAML manifests. You can use the same pod and persistent volume claim (PVC) definitions on-premise that you use with OpenShift Container Platform on AWS, GCP, Azure, and other platforms, with the exception of the storage class reference in the PVC definition. Note Manila service is optional. If the service is not enabled in Red Hat OpenStack Platform (RHOSP), the Manila CSI driver is not installed and the storage classes for Manila are not created. Prerequisites RHOSP is deployed with appropriate Manila share infrastructure so that it can be used to dynamically provision and mount volumes in OpenShift Container Platform. Procedure (UI) To dynamically create a Manila CSI volume using the web console: In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . 
Define the required options on the resulting page. Select the appropriate storage class. Enter a unique name for the storage claim. Select the access mode to specify read and write access for the PVC you are creating. Important Use RWX if you want the persistent volume (PV) that fulfills this PVC to be mounted to multiple pods on multiple nodes in the cluster. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. Procedure (CLI) To dynamically create a Manila CSI volume using the command-line interface (CLI): Create and save a file with the PersistentVolumeClaim object described by the following YAML: pvc-manila.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-manila spec: accessModes: 1 - ReadWriteMany resources: requests: storage: 10Gi storageClassName: csi-manila-gold 2 1 Use RWX if you want the persistent volume (PV) that fulfills this PVC to be mounted to multiple pods on multiple nodes in the cluster. 2 The name of the storage class that provisions the storage back end. Manila storage classes are provisioned by the Operator and have the csi-manila- prefix. Create the object you saved in the step by running the following command: USD oc create -f pvc-manila.yaml A new PVC is created. To verify that the volume was created and is ready, run the following command: USD oc get pvc pvc-manila The pvc-manila shows that it is Bound . You can now use the new PVC to configure a pod. Additional resources Configuring CSI volumes 5.9. Red Hat Virtualization CSI Driver Operator 5.9.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Red Hat Virtualization (RHV). Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to RHV storage assets, OpenShift Container Platform installs the oVirt CSI Driver Operator and the oVirt CSI driver by default in the openshift-cluster-csi-drivers namespace. The oVirt CSI Driver Operator provides a default StorageClass object that you can use to create Persistent Volume Claims (PVCs). The oVirt CSI driver enables you to create and mount oVirt PVs. 5.9.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plug-ins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins. Note The oVirt CSI driver does not support snapshots. 5.9.3. oVirt CSI driver storage class OpenShift Container Platform creates a default object of type StorageClass named ovirt-csi-sc which is used for creating dynamically provisioned persistent volumes. To create additional storage classes for different configurations, create and save a file with the StorageClass object described by the following sample YAML: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 annotations: storageclass.kubernetes.io/is-default-class: "false" 2 provisioner: csi.ovirt.org parameters: storageDomainName: <rhv-storage-domain-name> 3 thinProvisioning: "true" 4 csi.storage.k8s.io/fstype: ext4 5 1 Name of the storage class. 
2 Set to false unless this storage class should be the default storage class in the cluster. If set to true , the existing default storage class must be edited and set to false . 3 RHV storage domain name to use. 4 Disk must be thin provisioned. 5 File system type to be created. 5.9.4. Creating a persistent volume on RHV When you create a PersistentVolumeClaim (PVC) object, OpenShift Container Platform provisions a new persistent volume (PV) and creates a PersistentVolume object. Prerequisites You are logged in to a running OpenShift Container Platform cluster. You provided the correct RHV credentials in the ovirt-credentials secret. You have installed the oVirt CSI driver. You have defined at least one storage class. Procedure If you are using the web console to dynamically create a persistent volume on RHV: In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the required options on the resulting page. Select the appropriate StorageClass object, which is ovirt-csi-sc by default. Enter a unique name for the storage claim. Select the access mode. Currently, RWO (ReadWriteOnce) is the only supported access mode. Define the size of the storage claim. Select the Volume Mode: Filesystem : Mounted into pods as a directory. This mode is the default. Block : Block device, without any file system on it. Click Create to create the PersistentVolumeClaim object and generate a PersistentVolume object. If you are using the command-line interface (CLI) to dynamically create an RHV CSI volume: Create and save a file with the PersistentVolumeClaim object described by the following sample YAML: pvc-ovirt.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-ovirt spec: storageClassName: ovirt-csi-sc 1 accessModes: - ReadWriteOnce resources: requests: storage: <volume size> 2 volumeMode: <volume mode> 3 1 Name of the required storage class. 2 Volume size in GiB. 3 Supported options: Filesystem : Mounted into pods as a directory. This mode is the default. Block : Block device, without any file system on it. Create the object you saved in the step by running the following command: USD oc create -f pvc-ovirt.yaml To verify that the volume was created and is ready, run the following command: USD oc get pvc pvc-ovirt The pvc-ovirt shows that it is Bound. Additional resources Configuring CSI volumes Dynamic Provisioning
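For illustration, a pod that consumes the pvc-ovirt claim created in the preceding procedure might look like the following sketch; the container image, command, and mount path are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: ovirt-app
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal  # assumption: any image that can use the mounted volume
    command: [ "sleep", "infinity" ]                     # keep the container running for demonstration
    volumeMounts:
    - mountPath: /data                                   # assumption: where the volume is consumed in the container
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-ovirt                               # the PVC created in the preceding procedure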
[ "oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: <provisioner-name> 2 parameters: EOF", "oc new-app mysql-persistent", "--> Deploying template \"openshift/mysql-persistent\" to project default", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s", "kind: Pod apiVersion: v1 metadata: name: my-csi-app spec: containers: - name: my-frontend image: busybox volumeMounts: - mountPath: \"/data\" name: my-csi-inline-vol command: [ \"sleep\", \"1000000\" ] volumes: 1 - name: my-csi-inline-vol csi: driver: inline.storage.kubernetes.io volumeAttributes: foo: bar", "oc create -f my-csi-app.yaml", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io 1 deletionPolicy: Delete", "oc create -f volumesnapshotclass.yaml", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: volumeSnapshotClassName: csi-hostpath-snap 1 source: persistentVolumeClaimName: myclaim 2", "oc create -f volumesnapshot-dynamic.yaml", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo spec: source: volumeSnapshotContentName: mycontent 1", "oc create -f volumesnapshot-manual.yaml", "oc describe volumesnapshot mysnap", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: source: persistentVolumeClaimName: myclaim volumeSnapshotClassName: csi-hostpath-snap status: boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1 creationTime: \"2020-01-29T12:24:30Z\" 2 readyToUse: true 3 restoreSize: 500Mi", "oc get volumesnapshotcontent", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io deletionPolicy: Delete 1", "oc delete volumesnapshot <volumesnapshot_name>", "volumesnapshot.snapshot.storage.k8s.io \"mysnapshot\" deleted", "oc delete volumesnapshotcontent <volumesnapshotcontent_name>", "oc patch -n USDPROJECT volumesnapshot/USDNAME --type=merge -p '{\"metadata\": {\"finalizers\":null}}'", "volumesnapshotclass.snapshot.storage.k8s.io \"csi-ocs-rbd-snapclass\" deleted", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim-restore spec: storageClassName: csi-hostpath-sc dataSource: name: mysnap 1 kind: VolumeSnapshot 2 apiGroup: snapshot.storage.k8s.io 3 accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "oc create -f pvc-restore.yaml", "oc get pvc", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1-clone namespace: mynamespace spec: storageClassName: csi-cloning 1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: pvc-1", "oc create -f pvc-clone.yaml", "oc get pvc pvc-1-clone", "kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc-1-clone 1", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: \"WaitForFirstConsumer\" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 
1", "oc describe storageclass csi-gce-pd-cmek", "Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: none", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi", "oc apply -f pvc.yaml", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h standard-csi kubernetes.io/cinder Delete WaitForFirstConsumer true 46h", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass standard-csi -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cinder-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "oc create -f cinder-claim.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-manila spec: accessModes: 1 - ReadWriteMany resources: requests: storage: 10Gi storageClassName: csi-manila-gold 2", "oc create -f pvc-manila.yaml", "oc get pvc pvc-manila", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 annotations: storageclass.kubernetes.io/is-default-class: \"false\" 2 provisioner: csi.ovirt.org parameters: storageDomainName: <rhv-storage-domain-name> 3 thinProvisioning: \"true\" 4 csi.storage.k8s.io/fstype: ext4 5", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-ovirt spec: storageClassName: ovirt-csi-sc 1 accessModes: - ReadWriteOnce resources: requests: storage: <volume size> 2 volumeMode: <volume mode> 3", "oc create -f pvc-ovirt.yaml", "oc get pvc pvc-ovirt" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/storage/using-container-storage-interface-csi
Chapter 3. Customizing the installation media
Chapter 3. Customizing the installation media For details, see Composing a customized RHEL system image .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/customizing-the-installation-media_rhel-installer
Chapter 37. Using Ansible playbooks to manage RBAC permissions in IdM
Chapter 37. Using Ansible playbooks to manage RBAC permissions in IdM Role-based access control (RBAC) is a policy-neutral access control mechanism defined around roles, privileges, and permissions. Especially in large companies, using RBAC can help create a hierarchical system of administrators with their individual areas of responsibility. This chapter describes the following operations performed when managing RBAC permissions in Identity Management (IdM) using Ansible playbooks: Using Ansible to ensure an RBAC permission is present Using Ansible to ensure an RBAC permission with an attribute is present Using Ansible to ensure an RBAC permission is absent Using Ansible to ensure an attribute is a member of an IdM RBAC permission Using Ansible to ensure an attribute is not a member of an IdM RBAC permission Using Ansible to rename an IdM RBAC permission Prerequisites You understand the concepts and principles of RBAC . 37.1. Using Ansible to ensure an RBAC permission is present As a system administrator of Identity Management (IdM), you can customize the IdM role-based access control (RBAC). The following procedure describes how to use an Ansible playbook to ensure a permission is present in IdM so that it can be added to a privilege. The example describes how to ensure the following target state: The MyPermission permission exists. The MyPermission permission can only be applied to hosts. A user granted a privilege that contains the permission can do all of the following possible operations on an entry: Write Read Search Compare Add Delete Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the permission-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/permission/ directory: Open the permission-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipapermission task section: Adapt the name of the task to correspond to your use case. Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the permission. Set the object_type variable to host . Set the right variable to all . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: 37.2. Using Ansible to ensure an RBAC permission with an attribute is present As a system administrator of Identity Management (IdM), you can customize the IdM role-based access control (RBAC). The following procedure describes how to use an Ansible playbook to ensure a permission is present in IdM so that it can be added to a privilege. The example describes how to ensure the following target state: The MyPermission permission exists. The MyPermission permission can only be used to add hosts. 
A user granted a privilege that contains the permission can do all of the following possible operations on a host entry: Write Read Search Compare Add Delete The host entries created by a user that is granted a privilege that contains the MyPermission permission can have a description value. Note The type of attribute that you can specify when creating or modifying a permission is not constrained by the IdM LDAP schema. However, specifying, for example, attrs: car_licence if the object_type is host later results in the ipa: ERROR: attribute "car-license" not allowed error message when you try to exercise the permission and add a specific car licence value to a host. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the permission-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/permission/ directory: Open the permission-present-with-attribute.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipapermission task section: Adapt the name of the task to correspond to your use case. Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the permission. Set the object_type variable to host . Set the right variable to all . Set the attrs variable to description . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources See User and group schema in Linux Domain Identity, Authentication and Policy Guide in RHEL 7. 37.3. Using Ansible to ensure an RBAC permission is absent As a system administrator of Identity Management (IdM), you can customize the IdM role-based access control (RBAC). The following procedure describes how to use an Ansible playbook to ensure a permission is absent in IdM so that it cannot be added to a privilege. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the permission-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/permission/ directory: Open the permission-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipapermission task section: Adapt the name of the task to correspond to your use case. Set the ipaadmin_password variable to the password of the IdM administrator. 
Set the name variable to the name of the permission. This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: 37.4. Using Ansible to ensure an attribute is a member of an IdM RBAC permission As a system administrator of Identity Management (IdM), you can customize the IdM role-based access control (RBAC). The following procedure describes how to use an Ansible playbook to ensure that an attribute is a member of an RBAC permission in IdM. As a result, a user with the permission can create entries that have the attribute. The example describes how to ensure that the host entries created by a user with a privilege that contains the MyPermission permission can have gecos and description values. Note The type of attribute that you can specify when creating or modifying a permission is not constrained by the IdM LDAP schema. However, specifying, for example, attrs: car_licence if the object_type is host later results in the ipa: ERROR: attribute "car-license" not allowed error message when you try to exercise the permission and add a specific car licence value to a host. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The MyPermission permission exists. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the permission-member-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/permission/ directory: Open the permission-member-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipapermission task section: Adapt the name of the task to correspond to your use case. Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the permission. Set the attrs list to the description and gecos variables. Make sure the action variable is set to member . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: 37.5. Using Ansible to ensure an attribute is not a member of an IdM RBAC permission As a system administrator of Identity Management (IdM), you can customize the IdM role-based access control (RBAC). The following procedure describes how to use an Ansible playbook to ensure that an attribute is not a member of an RBAC permission in IdM. As a result, when a user with the permission creates an entry in IdM LDAP, that entry cannot have a value associated with the attribute. The example describes how to ensure the following target state: The MyPermission permission exists. The host entries created by a user with a privilege that contains the MyPermission permission cannot have the description attribute. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The MyPermission permission exists. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the permission-member-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/permission/ directory: Open the permission-member-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipapermission task section: Adapt the name of the task to correspond to your use case. Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the permission. Set the attrs variable to description . Set the action variable to member . Make sure the state variable is set to absent This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: 37.6. Using Ansible to rename an IdM RBAC permission As a system administrator of Identity Management (IdM), you can customize the IdM role-based access control. The following procedure describes how to use an Ansible playbook to rename a permission. The example describes how to rename MyPermission to MyNewPermission . Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The MyPermission exists in IdM. The MyNewPermission does not exist in IdM. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the permission-renamed.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/permission/ directory: Open the permission-renamed-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipapermission task section: Adapt the name of the task to correspond to your use case. Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the permission. This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: 37.7. Additional resources See Permissions in IdM . See Privileges in IdM . See the README-permission file available in the /usr/share/doc/ansible-freeipa/ directory. See the sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/ipapermission directory.
[ "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-present.yml permission-present-copy.yml", "--- - name: Permission present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"MyPermission\" permission is present ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission object_type: host right: all", "ansible-playbook --vault-password-file=password_file -v -i inventory permission-present-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-present.yml permission-present-with-attribute.yml", "--- - name: Permission present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"MyPermission\" permission is present with an attribute ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission object_type: host right: all attrs: description", "ansible-playbook --vault-password-file=password_file -v -i inventory permission-present-with-attribute.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-absent.yml permission-absent-copy.yml", "--- - name: Permission absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"MyPermission\" permission is absent ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission state: absent", "ansible-playbook --vault-password-file=password_file -v -i inventory permission-absent-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-member-present.yml permission-member-present-copy.yml", "--- - name: Permission member present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"gecos\" and \"description\" attributes are present in \"MyPermission\" ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission attrs: - description - gecos action: member", "ansible-playbook --vault-password-file=password_file -v -i inventory permission-member-present-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-member-absent.yml permission-member-absent-copy.yml", "--- - name: Permission absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that an attribute is not a member of \"MyPermission\" ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission attrs: description action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i inventory permission-member-absent-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-renamed.yml permission-renamed-copy.yml", "--- - name: Permission present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Rename the \"MyPermission\" permission ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission rename: MyNewPermission state: renamed", "ansible-playbook --vault-password-file=password_file -v -i inventory permission-renamed-copy.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/using-ansible-playbooks-to-manage-rbac-permissions-in-idm_managing-users-groups-hosts
Chapter 31. Red Hat Enterprise Linux Atomic Host 7.3.5
Chapter 31. Red Hat Enterprise Linux Atomic Host 7.3.5 31.1. Atomic Host OStree update : New Tree Version: 7.3.5 (hash: 0ccf9138962e5c2c3794969a228e751d13bb780f5b0a1f15f4a9649df06ba80a) Changes since Tree Version 7.3.4-1 (hash: d6c7a5639cdeb6c21cf40d80259d516d047176e35411c8684cae40a93eedbed0) Updated packages : cockpit-ostree-138-5.el7 redhat-release-atomic-host-7.3-20161129.0.atomic.el7.5 rpm-ostree-client-2017.5-1.atomic.el7 31.2. Extras Updated packages : atomic-1.17.2-3.git2760e30.el7 cockpit-138-6.el7 container-selinux-2.12-2.gite7096ce.el7 docker-1.12.6-28.git1398f24.el7 docker-distribution-2.6.1-1.el7 docker-latest-1.13.1-11.git3a17ad5.el7 etcd-3.1.7-1.el7 kubernetes-1.5.2-0.6.gitd33fd89.el7 * ostree-2017.5-1.el7 skopeo-0.1.19-1.el7 WALinuxAgent-2.2.10-1.el7 The asterisk (*) marks packages which are available for Red Hat Enterprise Linux only. 31.2.1. Container Images Updated : Red Hat Enterprise Linux 7.3 Container Image (rhel7.3, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic Kubernetes apiserver Container Image (rhel7/kubernetes-apiserver) Red Hat Enterprise Linux Atomic Kubernetes controller-manager Container (rhel7/kubernetes-controller-mgr) Red Hat Enterprise Linux Atomic Kubernetes scheduler Container Image (rhel7/kubernetes-scheduler) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic openscap Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) New : Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) 31.3. New Features Red Hat Enterprise Linux 7 Init Container Image is now available The new Red Hat Enterprise Linux 7 Init Image allows creating containerized services based on the systemd init system. This container image configures systemd in an OCI container and enables running one or more services in a RHEL7 user space using unit files, init scripts, or both. For details on using rhel7-init , see Using the Atomic RHEL7 Init Container Image in the Managing Containers Guide.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_3_5
Chapter 2. Creating a JFR recording in the Cryostat web console
Chapter 2. Creating a JFR recording in the Cryostat web console You can create a JFR recording that monitors the performance of your JVM located in your containerized application. After you create a JFR recording, you can start the JFR to capture real-time data for your JVM, such as heap and non-heap memory usage. Prerequisites Installed Cryostat 3.0 on Red Hat OpenShift by using the OperatorHub option. Created a Cryostat instance in your Red Hat OpenShift project. Logged in to your Cryostat web console. You can retrieve your Cryostat application's URL by using the Red Hat OpenShift web console. Procedure On the Dashboard panel for your Cryostat web console, select a target JVM from the Target list. Note Depending on how you configured your target applications, your target JVMs might be using a JMX connection or an agent HTTP connection. For more information about configuring your target applications, see Configuring Java applications . Important If your target JVM is using an agent HTTP connection, ensure that you set the cryostat.agent.api.writes-enabled property to true when you configured your target application to load the Cryostat agent. Otherwise, the Cryostat agent cannot accept requests to start and stop JFR recordings. Figure 2.1. Example of selecting a Target JVM for your Cryostat instance Optional: On the Dashboard panel, you can create a target JVM. From the Target list, click Create Target . The Create Custom Target window opens. In the Connection URL field, enter the URL for your JVM's Java Management Extension (JMX) endpoint. Optional: To test if the Connection URL that you specified is valid, click the Click to test sample node image. If there is an issue with the Connection URL , an error message is displayed that provides a description of the issue and guidance to troubleshoot. Optional: In the Alias field, enter an alias for your JMX Service URL. Click Create . Figure 2.2. Create Custom Target window From the navigation menu on the Cryostat web console, click Recordings . Optional: Depending on how you configured your target JVM, an Authentication Required dialog might open on your web console. In the Authentication Required dialog box, enter your Username and Password . To provide your credentials to the target JVM, click Save . Figure 2.3. Example of a Cryostat Authentication Required window Note If the selected target JMX has Secure Socket Layer (SSL) certification enabled for JMX connections, you must add its certificate when prompted. Cryostat encrypts and stores credentials for a target JVM application in a database that is stored on a persistent volume claim (PVC) on Red Hat OpenShift. See Storing and managing credentials (Using Cryostat to manage a JFR recording). On the Active Recordings tab, click Create . Figure 2.4. Example of creating an active recording On the Custom Flight Recording tab: In the Name field, enter the name of the recording you want to create. If you enter a name in an invalid format, the web console displays an error message. If you want Cryostat to automatically restart an existing recording, select the Restart if recording already exists check box. Note If you enter a name that already exists but you do not select Restart if recording already exists , Cryostat refuses to create a custom recording when you click the Create button. In the Duration field, select whether you want this recording to stop after a specified duration or to run continuously without stopping. 
If you want Cryostat to automatically archive your new JFR recording after the recording stops, click Archive on Stop . In the Template field, select the template that you want to use for the recording. The following example shows continuous JVM monitoring, which you can enable by selecting Continuous from above the Duration field. This setting means that the recording will continue until you manually stop the recording. The example also shows selection of the Profiling template from the Template field. This provides additional JVM information to a JFR recording for troubleshooting purposes. Figure 2.5. Example of creating a custom flight recording To access more options, click the following expandable hyperlinks: Show advanced options , where you can select additional options for customizing your JFR recording. Show metadata options , where you can add custom labels and metadata to your JFR recording. To create your JFR recording, click Create . The Active Recordings tab opens and lists your JFR recording. Your active JFR recording starts collecting data on the target JVM location inside your containerized application. If you specified a fixed duration for your JFR recordings, the target JVM stops the recording when it reaches the fixed duration setting. Otherwise, you must manually stop the recording. Optional: On the Active Recording tab, you can also stop the recording. Select the checkbox to the JFR recording's name. On the toolbar in the Active Recordings tab, the Cryostat web console activates the Stop button. Click Stop . The JFR adopts the STOPPED status, so it stops monitoring the target JVM. The JFR still shows under the Active Recording tab. Figure 2.6. Example of stopping an active recording Important JFR recording data might be lost in the following situations: Target JVM fails Target JVM restarts Target JVM Red Hat OpenShift Deployment is scaled down Archive your JFR recordings to ensure you do not lose your JFR recording's data. Additional resources See Uploading an SSL certificate (Using Cryostat to manage a JFR recording). See Archiving JDK Flight Recorder (JFR) recordings (Using Cryostat to manage a JFR recording).
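The Important note earlier in this chapter states that the cryostat.agent.api.writes-enabled property must be set to true for targets that use an agent HTTP connection. As a hedged sketch, one way to do this is to pass it as a Java system property when the target JVM starts; the JAVA_OPTS variable, the agent JAR path, and the application JAR below are assumptions that depend on how your container image launches the JVM and loads the Cryostat agent:

# Hypothetical launch snippet for a target application container.
# Adjust the agent path and application entry point for your deployment.
export JAVA_OPTS="-javaagent:/deployments/cryostat-agent.jar \
  -Dcryostat.agent.api.writes-enabled=true"
java $JAVA_OPTS -jar /deployments/my-app.jar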
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/creating_a_jfr_recording_with_cryostat/create_recording
2.4. Storage Formats for Virtual Disks
2.4. Storage Formats for Virtual Disks QCOW2 Formatted Virtual Machine Storage QCOW2 is a storage format for virtual disks. QCOW stands for QEMU copy-on-write. The QCOW2 format decouples the physical storage layer from the virtual layer by adding a mapping between logical and physical blocks. Each logical block is mapped to its physical offset, which enables storage over-commitment and virtual machine snapshots, where each QCOW volume only represents changes made to an underlying virtual disk. The initial mapping points all logical blocks to the offsets in the backing file or volume. When a virtual machine writes data to a QCOW2 volume after a snapshot, the relevant block is read from the backing volume, modified with the new information and written into a new snapshot QCOW2 volume. Then the map is updated to point to the new place. Raw The raw storage format has a performance advantage over QCOW2 in that no formatting is applied to virtual disks stored in the raw format. Virtual machine data operations on virtual disks stored in raw format require no additional work from hosts. When a virtual machine writes data to a given offset in its virtual disk, the I/O is written to the same offset on the backing file or logical volume. Raw format requires that the entire space of the defined image be preallocated unless using externally managed thin provisioned LUNs from a storage array.
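A quick way to see the difference between the two formats is to create a small disk in each one with qemu-img. This is an illustrative sketch run outside of the Manager; the file names and size are arbitrary example values:

# Create a 10 GB QCOW2 disk (copy-on-write metadata, space allocated on demand)
qemu-img create -f qcow2 example-disk.qcow2 10G

# Create a 10 GB raw disk (no format layer; preallocate fully if required)
qemu-img create -f raw example-disk.img 10G

# Compare the virtual and actual sizes of each image
qemu-img info example-disk.qcow2
qemu-img info example-disk.img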
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/qcow2
3.3. Spring Framework 3.2
3.3. Spring Framework 3.2 The infinispan-spring3 module has been deprecated in JBoss Data Grid 6.6.0, because the Spring Framework 3.2.x line reaches End-Of-Life at the end of 2016; the module is expected to be removed in version 7.0.0. The integration with Spring Framework 4.x will continue to be supported in version 7.0.0, via the infinispan-spring4 module.

null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.0_release_notes/spring_framework_3.2
Chapter 40. Azure Storage Blob Service Component
Chapter 40. Azure Storage Blob Service Component Available as of Camel version 2.19 The Azure Blob component supports storing and retrieving the blobs to/from Azure Storage Blob service. Prerequisites You must have a valid Windows Azure Storage account. More information is available at Azure Documentation Portal . 40.1. URI Format azure-blob://accountName/containerName[/blobName][?options] In most cases a blobName is required and the blob will be created if it does not already exist. You can append query options to the URI in the following format, ?options=value&option2=value&... For example in order to download a blob content from the public block blob blockBlob located on the container1 in the camelazure storage account, use the following snippet: from("azure-blob:/camelazure/container1/blockBlob"). to("file://blobdirectory"); 40.2. URI Options The Azure Storage Blob Service component has no options. The Azure Storage Blob Service endpoint is configured using URI syntax: with the following path and query parameters: 40.2.1. Path Parameters (1 parameters): Name Description Default Type containerOrBlobUri Required Container or Blob compact Uri String 40.2.2. Query Parameters (19 parameters): Name Description Default Type azureBlobClient (common) The blob service client CloudBlob blobOffset (common) Set the blob offset for the upload or download operations, default is 0 0 Long blobType (common) Set a blob type, 'blockblob' is default blockblob BlobType closeStreamAfterRead (common) Close the stream after read or keep it open, default is true true boolean credentials (common) Set the storage credentials, required in most cases StorageCredentials dataLength (common) Set the data length for the download or page blob upload operations Long fileDir (common) Set the file directory where the downloaded blobs will be saved to String publicForRead (common) Storage resources can be public for reading their content, if this property is enabled then the credentials do not have to be set false boolean streamReadSize (common) Set the minimum read size in bytes when reading the blob content int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. 
ExchangePattern blobMetadata (producer) Set the blob meta-data Map blobPrefix (producer) Set a prefix which can be used for listing the blobs String closeStreamAfterWrite (producer) Close the stream after write or keep it open, default is true true boolean operation (producer) Blob service operation hint to the producer listBlobs BlobServiceOperations streamWriteSize (producer) Set the size of the buffer for writing block and page blocks int useFlatListing (producer) Specify if the flat or hierarchical blob listing should be used true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 40.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.azure-blob.enabled Enable azure-blob component true Boolean camel.component.azure-blob.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean Required Azure Storage Blob Service component options You have to provide the containerOrBlob name and the credentials if the private blob needs to be accessed. 40.4. Usage 40.4.1. Message headers evaluated by the Azure Storage Blob Service producer Header Type Description 40.4.2. Message headers set by the Azure Storage Blob Service producer Header Type Description CamelFileName String The file name for the downloaded blob content. 40.4.3. Message headers set by the Azure Storage Blob Service producer consumer Header Type Description CamelFileName String The file name for the downloaded blob content. 40.4.4. Azure Blob Service operations Operations common to all block types Operation Description getBlob Get the content of the blob. You can restrict the output of this operation to a blob range. deleteBlob Delete the blob. listBlobs List the blobs. Block blob operations Operation Description updateBlockBlob Put block blob content that either creates a new block blob or overwrites the existing block blob content. uploadBlobBlocks Upload block blob content, by first generating a sequence of blob blocks and then committing them to a blob. If you enable the message CommitBlockListLater property, you can execute the commit later with the commitBlobBlockList operation. You can later update individual block blobs. commitBlobBlockList Commit a sequence of blob blocks to the block list that you previously uploaded to the blob service (by using the updateBlockBlob operation with the message CommitBlockListLater property enabled). getBlobBlockList Get the block blob list. Append blob operations Operation Description createAppendBlob Create an append block. By default, if the block already exists then it is not reset. Note that you can alternately create an append blob by enabling the message AppendBlobCreated property and using the updateAppendBlob operation. updateAppendBlob Append the new content to the blob. This operation also creates the blob if it does not already exist and if you enabled a message AppendBlobCreated property. Page Block operations Operation Description createPageBlob Create a page block. By default, if the block already exists then it is not reset. Note that you can also create a page blob (and set its contents) by enabling a message PageBlobCreated property and by using the updatePageBlob operation. 
updatePageBlob Create a page block (unless you enable a message PageBlobCreated property and the identically named block already exists) and set the content of this blob. resizePageBlob Resize the page blob. clearPageBlob Clear the page blob. getPageBlobRanges Get the page blob page ranges. 40.4.5. Azure Blob Client configuration If your Camel Application is running behind a firewall or if you need to have more control over the Azure Blob Client configuration, you can create your own instance: StorageCredentials credentials = new StorageCredentialsAccountAndKey("camelazure", "thekey"); CloudBlob client = new CloudBlob("camelazure", credentials); registry.bind("azureBlobClient", client); and refer to it in your Camel azure-blob component configuration: from("azure-blob:/camelazure/container1/blockBlob?azureBlobClient=#client") .to("mock:result"); 40.5. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-azure</artifactId> <version>USD{camel-version}</version> </dependency> where USD{camel-version } must be replaced by the actual version of Camel (2.19.0 or higher). 40.6. See Also Configuring Camel Component Endpoint Getting Started Azure Component
[ "azure-blob://accountName/containerName[/blobName][?options]", "from(\"azure-blob:/camelazure/container1/blockBlob\"). to(\"file://blobdirectory\");", "azure-blob:containerOrBlobUri", "StorageCredentials credentials = new StorageCredentialsAccountAndKey(\"camelazure\", \"thekey\"); CloudBlob client = new CloudBlob(\"camelazure\", credentials); registry.bind(\"azureBlobClient\", client);", "from(\"azure-blob:/camelazure/container1/blockBlob?azureBlobClient=#client\") .to(\"mock:result\");", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-azure</artifactId> <version>USD{camel-version}</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/azure-blob-component
2.7. Create a Materialized View for Code Table Caching
2.7. Create a Materialized View for Code Table Caching Procedure 2.1. Create a Materialized View for Code Table Caching Create a view selecting the appropriate columns from the desired table. In general, this view may have an arbitrarily complicated transformation query. Designate the appropriate column(s) as the primary key. Additional indexes can be added if needed. Set the materialized property to true. Add a cache hint to the transformation query. To mimic the behavior of the implicit internal materialized view created by the lookup function, use the /*+ cache(pref_mem) */ hint (described in Hints and Options) to indicate that the table data pages should prefer to remain in memory. Result Just as with the lookup function, the materialized view table will be created on first use and reused subsequently.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/create_a_materialized_view_for_code_table_caching
7.15. binutils
7.15. binutils 7.15.1. RHBA-2013:0498 - binutils bug fix update Updated binutils packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The binutils packages provide a set of binary utilities, including "ar" (for creating, modifying and extracting from archives), "as" (a family of GNU assemblers), "gprof" (for displaying call graph profile data), "ld" (the GNU linker), "nm" (for listing symbols from object files), "objcopy" (for copying and translating object files), "objdump" (for displaying information from object files), "ranlib" (for generating an index for the contents of an archive), "readelf" (for displaying detailed information about binary files), "size" (for listing the section sizes of an object or archive file), "strings" (for listing printable strings from files), "strip" (for discarding symbols), and "addr2line" (for converting addresses to file and line). Bug Fixes BZ#773526 In order to display a non-printing character, the readelf utility adds the "0x40" string to the character. However, readelf previously did not add that string when processing multibyte characters, so that multibyte characters in the ELF headers were displayed incorrectly. With this update, the underlying code has been corrected and readelf now displays multibyte and non-ASCII characters correctly. BZ#825736 Under certain circumstances, the linker could fail to produce the GNU_RELRO segment when building an executable requiring GNU_RELRO. As a consequence, such an executable failed upon start-up. This problem affected also the libudev library so that the udev utility did not work. With this update, the linker has been modified so that the GNU_RELRO segment is now correctly created when it is needed, and utilities such as udev now work correctly. All users of binutils are advised to upgrade to these updated packages, which fix these bugs.
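As an illustration of the GNU_RELRO fix in BZ#825736, you can inspect a binary's program headers with readelf to confirm that the segment is present; the library path below is only an example and any executable or shared object can be checked the same way:

# List program headers and look for the GNU_RELRO entry
readelf -l /usr/lib64/libudev.so.1 | grep -A1 GNU_RELRO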
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/binutils
Chapter 6. Configuring fencing for an HA cluster on Red Hat OpenStack Platform
Chapter 6. Configuring fencing for an HA cluster on Red Hat OpenStack Platform Fencing configuration ensures that a malfunctioning node on your HA cluster is automatically isolated. This prevents the node from consuming the cluster's resources or compromising the cluster's functionality. Use the fence_openstack fence agent to configure a fence device for an HA cluster on RHOSP. You can view the options for the RHOSP fence agent with the following command. Prerequisites A configured HA cluster running on RHOSP Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP The cluster property stonith-enabled set to true , which is the default value. Red Hat does not support clusters when fencing is disabled, as it is not suitable for a production environment. Run the following command to ensure that fencing is enabled. Procedure Complete the following steps from any node in the cluster. Determine the UUID for each node in your cluster. The following command displays the full list of all of the RHOSP instance names within the ha-example project along with the UUID for the cluster node associated with that RHOSP instance, under the heading ID . The node host name might not match the RHOSP instance name. Create the fencing device, using the pcmk_host_map parameter to map each node in the cluster to the UUID for that node. Each of the following example fence device creation commands uses a different authentication method. The following command creates a fence_openstack fencing device for a 3-node cluster, using a clouds.yaml configuration file for authentication. For the cloud= parameter , specify the name of the cloud in your clouds.yaml file. The following command creates a fence_openstack fencing device, using an OpenRC environment script for authentication. The following command creates a fence_openstack fencing device, using a user name and password for authentication. The authentication parameters, including username , password , project_name , and auth_url , are provided by the RHOSP administrator. To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Configuring ACPI For use with integrated fence devices . Verification From one node in the cluster, fence a different node in the cluster and check the cluster status. If the fenced node is offline, the fencing operation was successful. Restart the node that you fenced and check the status to verify that the node started.
[ "pcs stonith describe fence_openstack", "pcs property config --all Cluster Properties: . . . stonith-enabled: true", "openstack --os-cloud=\"ha-example\" server list ... | ID | Name | | 6d86fa7d-b31f-4f8a-895e-b3558df9decb|testnode-node03-vm| | 43ed5fe8-6cc7-4af0-8acd-a4fea293bc62|testnode-node02-vm| | 4df08e9d-2fa6-4c04-9e66-36a6f002250e|testnode-node01-vm|", "pcs stonith create fenceopenstack fence_openstack pcmk_host_map=\"node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb\" power_timeout=\"240\" pcmk_reboot_timeout=\"480\" pcmk_reboot_retries=\"4\" cloud=\"ha-example\"", "pcs stonith create fenceopenstack fence_openstack pcmk_host_map=\"node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb\" power_timeout=\"240\" pcmk_reboot_timeout=\"480\" pcmk_reboot_retries=\"4\" openrc=\"/root/openrc\"", "pcs stonith create fenceopenstack fence_openstack pcmk_host_map=\"node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb\" power_timeout=\"240\" pcmk_reboot_timeout=\"480\" pcmk_reboot_retries=\"4\" username=\"XXX\" password=\"XXX\" project_name=\"rhelha\" auth_url=\"XXX\" user_domain_name=\"Default\"", "pcs stonith fence node02 pcs status", "pcs cluster start node02 pcs status" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/configuring-fencing-for-an-ha-cluster-on-red-hat-openstack-platform_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform
Chapter 4. Configuring traffic ingress
Chapter 4. Configuring traffic ingress 4.1. Configuring SSL/TLS and Routes Support for OpenShift Container Platform edge termination routes has been added by way of a new managed component, tls . This separates the route component from SSL/TLS and allows users to configure both separately. EXTERNAL_TLS_TERMINATION: true is the opinionated setting. Note Managed tls means that the default cluster wildcard certificate is used. Unmanaged tls means that the user-provided key and certificate pair is injected into the route. The ssl.cert and ssl.key are now moved to a separate, persistent secret, which ensures that the key and certificate pair are not regenerated upon every reconcile. The key and certificate pair are now formatted as edge routes and mounted to the same directory in the Quay container. Multiple permutations are possible when configuring SSL/TLS and routes, but the following rules apply: If SSL/TLS is managed , then your route must also be managed . If SSL/TLS is unmanaged , then you must supply certificates directly in the config bundle. The following table describes the valid options: Table 4.1. Valid configuration options for TLS and routes Option Route TLS Certs provided Result My own load balancer handles TLS Managed Managed No Edge route with default wildcard cert Red Hat Quay handles TLS Managed Unmanaged Yes Passthrough route with certs mounted inside the pod Red Hat Quay handles TLS Unmanaged Unmanaged Yes Certificates are set inside of the quay pod, but the route must be created manually 4.1.1. Creating the config bundle secret with the SSL/TLS cert and key pair Use the following procedure to create a config bundle secret that includes your own SSL/TLS certificate and key pair. Procedure Enter the following command to create a config bundle secret that includes your own SSL/TLS certificate and key pair: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret
[ "oc create secret generic --from-file config.yaml=./config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/configuring-traffic-ingress
9.2. Command Logging API
9.2. Command Logging API If you want to build a custom handler for command logging that has access to the java.util.logging.LogRecord s sent to the "COMMAND_LOG" context, the handler will receive a message that is an instance of LogRecord . This object will contain a parameter of type org.teiid.logging.CommandLogMessage . The relevant Red Hat JBoss Data Virtualization classes are defined in the teiid-api-[versionNumber].jar . The CommandLogMessage includes information about the VDB, the session, the command SQL, and so on. CommandLogMessages are logged at the DEBUG level. An example handler follows.
[ "package org.something; import java.util.logging.Handler; import java.util.logging.LogRecord; public class CommandHandler extends Handler { @Override public void publish(LogRecord record) { CommandLogMessage msg = (CommandLogMessage)record.getParameters()[0]; //log to a database, trigger an email, etc. } @Override public void flush() { } @Override public void close() throws SecurityException { } }" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/command_logging_api
5.7. Working with Zones
5.7. Working with Zones Zones represent a concept to manage incoming traffic more transparently. The zones are connected to networking interfaces or assigned a range of source addresses. You manage firewall rules for each zone independently, which enables you to define complex firewall settings and apply them to the traffic. 5.7.1. Listing Zones To see which zones are available on your system: The firewall-cmd --get-zones command displays all zones that are available on the system, but it does not show any details for particular zones. To see detailed information for all zones: To see detailed information for a specific zone: 5.7.2. Modifying firewalld Settings for a Certain Zone The Section 5.6.3, "Controlling Traffic with Predefined Services using CLI" and Section 5.6.6, "Controlling Ports using CLI" explain how to add services or modify ports in the scope of the current working zone. Sometimes, it is required to set up rules in a different zone. To work in a different zone, use the --zone= zone-name option. For example, to allow the SSH service in the zone public : 5.7.3. Changing the Default Zone System administrators assign a zone to a networking interface in its configuration files. If an interface is not assigned to a specific zone, it is assigned to the default zone. After each restart of the firewalld service, firewalld loads the settings for the default zone and makes it active. To set up the default zone: Display the current default zone: Set the new default zone: Note Following this procedure, the setting is a permanent setting, even without the --permanent option. 5.7.4. Assigning a Network Interface to a Zone It is possible to define different sets of rules for different zones and then change the settings quickly by changing the zone for the interface that is being used. With multiple interfaces, a specific zone can be set for each of them to distinguish traffic that is coming through them. To assign the zone to a specific interface: List the active zones and the interfaces assigned to them: Assign the interface to a different zone: Note You do not have to use the --permanent option to make the setting persistent across restarts. If you set a new default zone, the setting becomes permanent. 5.7.5. Assigning a Default Zone to a Network Connection When the connection is managed by NetworkManager , it must be aware of a zone that it uses. For every network connection, a zone can be specified, which provides the flexibility of various firewall settings according to the location of the computer with portable devices. Thus, zones and settings can be specified for different locations, such as company or home. To set a default zone for an Internet connection, use either the NetworkManager GUI or edit the /etc/sysconfig/network-scripts/ifcfg- connection-name file and add a line that assigns a zone to this connection: 5.7.6. Creating a New Zone To use custom zones, create a new zone and use it just like a predefined zone. Note New zones require the --permanent option, otherwise the command does not work. Create a new zone: Reload the new zone: Check if the new zone is added to your permanent settings: Make the new settings persistent: 5.7.7. Creating a New Zone using a Configuration File Zones can also be created using a zone configuration file . This approach can be helpful when you need to create a new zone, but want to reuse the settings from a different zone and only alter them a little. A firewalld zone configuration file contains the information for a zone. 
These are the zone description, services, ports, protocols, icmp-blocks, masquerade, forward-ports and rich language rules in an XML file format. The file name has to be zone-name .xml where the length of zone-name is currently limited to 17 characters. The zone configuration files are located in the /usr/lib/firewalld/zones/ and /etc/firewalld/zones/ directories. The following example shows a configuration that allows one service ( SSH ) and one port range, for both the TCP and UDP protocols: To change settings for that zone, add or remove sections to add ports, forward ports, services, and so on. For more information, see the firewalld.zone manual pages. 5.7.8. Using Zone Targets to Set Default Behavior for Incoming Traffic For every zone, you can set a default behavior that handles incoming traffic that is not further specified. Such behavior is defined by setting the target of the zone. There are four options - default , ACCEPT , REJECT , and DROP . By setting the target to ACCEPT , you accept all incoming packets except those disabled by a specific rule. If you set the target to REJECT or DROP , you disable all incoming packets except those that you have allowed in specific rules. When packets are rejected, the source machine is informed about the rejection, while there is no information sent when the packets are dropped. To set a target for a zone: List the information for the specific zone to see the default target: Set a new target in the zone:
[ "~]# firewall-cmd --get-zones", "~]# firewall-cmd --list-all-zones", "~]# firewall-cmd --zone= zone-name --list-all", "~]# firewall-cmd --add-service=ssh --zone= public", "~]# firewall-cmd --get-default-zone", "~]# firewall-cmd --set-default-zone zone-name", "~]# firewall-cmd --get-active-zones", "~]# firewall-cmd --zone= zone-name --change-interface=<interface-name>", "ZONE= zone-name", "~]# firewall-cmd --permanent --new-zone= zone-name", "~]# firewall-cmd --reload", "~]# firewall-cmd --get-zones", "~]# firewall-cmd --runtime-to-permanent", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <zone> <short>My zone</short> <description>Here you can describe the characteristic features of the zone.</description> <service name=\"ssh\"/> <port port=\"1025-65535\" protocol=\"tcp\"/> <port port=\"1025-65535\" protocol=\"udp\"/> </zone>", "~]USD firewall-cmd --zone= zone-name --list-all", "~]# firewall-cmd --zone= zone-name --set-target=<default|ACCEPT|REJECT|DROP>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-working_with_zones
4.8. The file_t and default_t Types
4.8. The file_t and default_t Types When using a file system that supports extended attributes (EA), the file_t type is the default type of a file that has not yet been assigned EA value. This type is only used for this purpose and does not exist on correctly-labeled file systems, because all files on a system running SELinux should have a proper SELinux context, and the file_t type is never used in file-context configuration [4] . The default_t type is used on files that do not match any pattern in file-context configuration, so that such files can be distinguished from files that do not have a context on disk, and generally are kept inaccessible to confined domains. For example, if you create a new top-level directory, such as mydirectory/ , this directory may be labeled with the default_t type. If services need access to this directory, you need to update the file-contexts configuration for this location. See Section 4.7.2, "Persistent Changes: semanage fcontext" for details on adding a context to the file-context configuration. [4] Files in the /etc/selinux/targeted/contexts/files/ directory define contexts for files and directories. Files in this directory are read by the restorecon and setfiles utilities to restore files and directories to their default contexts.
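Continuing the mydirectory/ example above, a typical sequence for giving such a directory a proper context is sketched below; the httpd_sys_content_t type is only an illustrative choice and depends on which service needs access:

# Check the current label of the new top-level directory
ls -dZ /mydirectory

# Add a file-context rule for the directory and everything under it
# (httpd_sys_content_t is an example type; use the type your service expects)
semanage fcontext -a -t httpd_sys_content_t "/mydirectory(/.*)?"

# Apply the new context
restorecon -Rv /mydirectory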
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Working_with_SELinux-The_file_t_and_default_t_Types
Chapter 4. Managing direct connections to AD
Chapter 4. Managing direct connections to AD After you connect your Red Hat Enterprise Linux (RHEL) system to an Active Directory (AD) domain using System Security Services Daemon (SSSD) or Samba Winbind, you can manage key settings such as Kerberos renewals, domain membership, user access permissions, and Group Policy Objects (GPOs). Prerequisites You have connected your RHEL system to the Active Directory domain, either with SSSD or Samba Winbind. 4.1. Modifying the default Kerberos host keytab renewal interval SSSD automatically renews the Kerberos host keytab file in an AD environment if the adcli package is installed. The daemon checks daily if the machine account password is older than the configured value and renews it if necessary. The default renewal interval is 30 days. To change the default, follow the steps in this procedure. Procedure Add the following parameter to the AD provider in your /etc/sssd/sssd.conf file: Restart SSSD: To disable the automatic Kerberos host keytab renewal, set ad_maximum_machine_account_password_age = 0 . Additional resources adcli(8) sssd.conf(5) SSSD service is failing with an error 'Failed to initialize credentials using keytab [MEMORY:/etc/krb5.keytab]: Preauthentication failed.' (Red Hat Knowledgebase) 4.2. Removing a RHEL system from an AD domain Follow this procedure to remove a Red Hat Enterprise Linux (RHEL) system that is integrated into Active Directory (AD) directly from the AD domain. Prerequisites You have used the System Security Services Daemon (SSSD) or Samba Winbind to connect your RHEL system to AD. Procedure Remove a system from an identity domain using the realm leave command. The command removes the domain configuration from SSSD and the local system. Note When a client leaves a domain, AD does not delete the account and only removes the local client configuration. To delete the AD account, run the command with the --remove option. Initially, an attempt is made to connect without credentials, but you are prompted for your user password if you do not have a valid Kerberos ticket. You must have rights to remove an account from Active Directory. Use the -U option with the realm leave command to specify a different user to remove a system from an identity domain. By default, the realm leave command is executed as the default administrator. For AD, the administrator account is called Administrator . If a different user was used to join to the domain, it might be required to perform the removal as that user. The command first attempts to connect without credentials, but it prompts for a password if required. Verification Verify the domain is no longer configured: Additional resources realm(8) man page on your system 4.3. Setting the domain resolution order in SSSD to resolve short AD user names By default, you must specify fully qualified usernames, like [email protected] and [email protected] , to resolve Active Directory (AD) users and groups on a RHEL host connected to AD with the SSSD service. This procedure sets the domain resolution order in the SSSD configuration so you can resolve AD users and groups using short names, like ad_username . This example configuration searches for users and groups in the following order: Active Directory (AD) child domain subdomain2.ad.example.com AD child domain subdomain1.ad.example.com AD root domain ad.example.com Prerequisites You have used the SSSD service to connect the RHEL host directly to AD. Procedure Open the /etc/sssd/sssd.conf file in a text editor. 
Set the domain_resolution_order option in the [sssd] section of the file. Save and close the file. Restart the SSSD service to load the new configuration settings. Verification Verify you can retrieve user information for a user from the first domain using only a short name. 4.4. Managing login permissions for domain users By default, domain-side access control is applied, which means that login policies for Active Directory (AD) users are defined in the AD domain itself. This default behavior can be overridden so that client-side access control is used. With client-side access control, login permission is defined by local policies only. If a domain applies client-side access control, you can use the realmd to configure basic allow or deny access rules for users from that domain. Note Access rules either allow or deny access to all services on the system. More specific access rules must be set on a specific system resource or in the domain. 4.4.1. Enabling access to users within a domain By default, login policies for Active Directory (AD) users are defined in the AD domain itself. You can override this default behavior and configure a RHEL host to enable access for users within an AD domain. Important It is not recommended to allow access to all by default while only denying it to specific users with realm permit -x . Instead, Red Hat recommends maintaining a default no access policy for all users and only grant access to selected users using realm permit. Prerequisites Your RHEL system is a member of the Active Directory domain. Procedure Grant access to all users: Grant access to specific users: Currently, you can only allow access to users in primary domains and not to users in trusted domains. This is due to the fact that user login must contain the domain name and SSSD cannot currently provide realmd with information about available child domains. Verification Use SSH to log in to the server as the [email protected] user: Use the ssh command a second time to access the same server, this time as the [email protected] user: Notice how the [email protected] user is denied access to the system. You have granted the permission to log in to the system to the [email protected] user only. All other users from that Active Directory domain are rejected because of the specified login policy. Note If you set use_fully_qualified_names to true in the sssd.conf file, all requests must use the fully qualified domain name. However, if you set use_fully_qualified_names to false, it is possible to use the fully-qualified name in the requests, but only the simplified version is displayed in the output. Additional resources realm(8) man page on your system 4.4.2. Denying access to users within a domain By default, login policies for Active Directory (AD) users are defined in the AD domain itself. You can override this default behavior and configure a RHEL host to deny access to users within an AD domain. Important It is safer to only allow access to specific users or groups than to deny access to some, while enabling it to everyone else. Therefore, it is not recommended to allow access to all by default while only denying it to specific users with realm permit -x . Instead, Red Hat recommends maintaining a default no access policy for all users and only grant access to selected users using realm permit. Prerequisites Your RHEL system is a member of the Active Directory domain. Procedure Deny access to all users within the domain: This command prevents realm accounts from logging into the local machine. 
Use realm permit to restrict login to specific accounts. Verify that the domain user's login-policy is set to deny-any-login : Deny access to specific users by using the -x option: Verification Use SSH to log in to the server as the aduser02@ad.example.com user. Note If you set use_fully_qualified_names to true in the sssd.conf file, all requests must use the fully qualified domain name. However, if you set use_fully_qualified_names to false, it is possible to use the fully-qualified name in the requests, but only the simplified version is displayed in the output. Additional resources realm(8) man page on your system 4.5. Applying Group Policy Object access control in RHEL A Group Policy Object (GPO) is a collection of access control settings stored in Microsoft Active Directory (AD) that can apply to computers and users in an AD environment. By specifying GPOs in AD, administrators can define login policies honored by both Windows clients and Red Hat Enterprise Linux (RHEL) hosts joined to AD. 4.5.1. How SSSD interprets GPO access control rules By default, SSSD retrieves Group Policy Objects (GPOs) from Active Directory (AD) domain controllers and evaluates them to determine if a user is allowed to log in to a particular RHEL host joined to AD. SSSD maps AD Windows Logon Rights to Pluggable Authentication Module (PAM) service names to enforce those permissions in a GNU/Linux environment. As an AD Administrator, you can limit the scope of GPO rules to specific users, groups, or hosts by listing them in a security filter . Limitations on filtering by hosts Older versions of SSSD do not evaluate hosts in AD GPO security filters. RHEL 8.3.0 or later: SSSD supports users, groups, and hosts in security filters. RHEL versions earlier than 8.3.0: SSSD ignores host entries and only supports users and groups in security filters. To ensure that SSSD applies GPO-based access control to a specific host, create a new Organizational Unit (OU) in the AD domain, move the system to the new OU, and then link the GPO to this OU. Limitations on filtering by groups SSSD currently does not support Active Directory's built-in groups, such as Administrators with Security Identifier (SID) S-1-5-32-544 . Red Hat recommends against using AD built-in groups in AD GPOs targeting RHEL hosts. Additional resources For a list of Windows GPO options and their corresponding SSSD options, see List of GPO settings that SSSD supports . 4.5.2. List of GPO settings that SSSD supports The following table shows the SSSD options that correspond to Active Directory GPO options as specified in the Group Policy Management Editor on Windows. Table 4.1. GPO access control options retrieved by SSSD GPO option Corresponding sssd.conf option Allow log on locally Deny log on locally ad_gpo_map_interactive Allow log on through Remote Desktop Services Deny log on through Remote Desktop Services ad_gpo_map_remote_interactive Access this computer from the network Deny access to this computer from the network ad_gpo_map_network Allow log on as a batch job Deny log on as a batch job ad_gpo_map_batch Allow log on as a service Deny log on as a service ad_gpo_map_service Additional resources sssd-ad(5) man page on your system 4.5.3. List of SSSD options to control GPO enforcement You can set the following SSSD options to limit the scope of GPO rules. The ad_gpo_access_control option You can set the ad_gpo_access_control option in the /etc/sssd/sssd.conf file to choose between three different modes in which GPO-based access control operates. Table 4.2.
Table of ad_gpo_access_control values Value of ad_gpo_access_control Behavior enforcing GPO-based access control rules are evaluated and enforced. This is the default setting in RHEL 8. permissive GPO-based access control rules are evaluated but not enforced; a syslog message is recorded every time access would be denied. This is the default setting in RHEL 7. This mode is ideal for testing policy adjustments while allowing users to continue logging in. disabled GPO-based access control rules are neither evaluated nor enforced. The ad_gpo_implicit_deny option The ad_gpo_implicit_deny option is set to False by default. In this default state, users are allowed access if applicable GPOs are not found. If you set this option to True , you must explicitly allow users access with a GPO rule. You can use this feature to harden security, but be careful not to deny access unintentionally. Red Hat recommends testing this feature while ad_gpo_access_control is set to permissive . The following two tables illustrate when a user is allowed or rejected access based on the allow and deny login rights defined on the AD server-side and the value of ad_gpo_implicit_deny . Table 4.3. Login behavior with ad_gpo_implicit_deny set to False (default) allow-rules deny-rules result missing missing all users are allowed missing present only users not in deny-rules are allowed present missing only users in allow-rules are allowed present present only users in allow-rules and not in deny-rules are allowed Table 4.4. Login behavior with ad_gpo_implicit_deny set to True allow-rules deny-rules result missing missing no users are allowed missing present no users are allowed present missing only users in allow-rules are allowed present present only users in allow-rules and not in deny-rules are allowed Additional resources Changing the GPO access control mode sssd-ad(5) man page on your system 4.5.4. Changing the GPO access control mode This procedure changes how GPO-based access control rules are evaluated and enforced on a RHEL host joined to an Active Directory (AD) environment. In this example, you will change the GPO operation mode from enforcing (the default) to permissive for testing purposes. Important If you see the following errors, Active Directory users are unable to log in due to GPO-based access controls: In /var/log/secure : In /var/log/sssd/sssd__example.com_.log : If this is undesired behavior, you can temporarily set ad_gpo_access_control to permissive as described in this procedure while you troubleshoot proper GPO settings in AD. Prerequisites You have joined a RHEL host to an AD environment using SSSD. Editing the /etc/sssd/sssd.conf configuration file requires root permissions. Procedure Stop the SSSD service. Open the /etc/sssd/sssd.conf file in a text editor. Set ad_gpo_access_control to permissive in the domain section for the AD domain. Save the /etc/sssd/sssd.conf file. Restart the SSSD service to load configuration changes. Additional resources List of SSSD options to control GPO enforcement 4.5.5. Creating and configuring a GPO for a RHEL host in the AD GUI A Group Policy Object (GPO) is a collection of access control settings stored in Microsoft Active Directory (AD) that can apply to computers and users in an AD environment. The following procedure creates a GPO in the AD graphical user interface (GUI) to control logon access to a RHEL host that is integrated directly to the AD domain. Prerequisites You have joined a RHEL host to an AD environment using SSSD. 
You have AD Administrator privileges to make changes in AD using the GUI. Procedure Within Active Directory Users and Computers, create an Organizational Unit (OU) to associate with the new GPO: Right-click the domain. Choose New . Choose Organizational Unit . Click the name of the Computer Object that represents the RHEL host (created when it joined Active Directory) and drag it into the new OU. By having the RHEL host in its own OU, the GPO targets this host. Within the Group Policy Management Editor, create a new GPO for the OU you created: Expand Forest . Expand Domains . Expand your domain. Right-click the new OU. Choose Create a GPO in this domain . Specify a name for the new GPO, such as Allow SSH access or Allow Console/GUI access and click OK . Edit the new GPO: Select the OU within the Group Policy Management Editor. Right-click and choose Edit . Select User Rights Assignment . Select Computer Configuration . Select Policies . Select Windows Settings . Select Security Settings . Select Local Policies . Select User Rights Assignment . Assign login permissions: Double-Click Allow log on locally to grant local console/GUI access. Double-click Allow log on through Remote Desktop Services to grant SSH access. Add the user(s) you want to access either of these policies to the policies themselves: Click Add User or Group . Enter the username within the blank field. Click OK . Additional resources Group Policy Objects in Microsoft documentation 4.5.6. Additional resources Connecting RHEL systems directly to AD using SSSD
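All of the SSSD options described in this chapter are set in the /etc/sssd/sssd.conf file and take effect after the sssd service is restarted. The following configuration is an illustrative sketch, not a required layout: it assumes a single joined domain named ad.example.com, reuses the machine account password age from section 4.1, the domain resolution order from section 4.3, and the GPO access control options from section 4.5.3 with example values, and adds the domains and services lines only as ordinary sssd.conf boilerplate.

[sssd]
domains = ad.example.com
services = nss, pam
# Resolve short names by searching the child domains before the root domain
domain_resolution_order = subdomain2.ad.example.com, subdomain1.ad.example.com, ad.example.com

[domain/ad.example.com]
# Renew the machine account password after 60 days instead of the 30-day default
ad_maximum_machine_account_password_age = 60
# Evaluate GPO rules but only log the logins that would be denied
ad_gpo_access_control = permissive
# Keep the default behavior: allow access when no applicable GPO is found
ad_gpo_implicit_deny = False

After saving the file, run systemctl restart sssd so that SSSD loads the new settings.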
[ "ad_maximum_machine_account_password_age = value_in_days", "systemctl restart sssd", "realm leave ad.example.com", "realm leave [ ad.example.com ] -U [ AD.EXAMPLE.COM\\user ]'", "realm discover [ ad.example.com ] ad.example.com type: kerberos realm-name: EXAMPLE.COM domain-name: example.com configured: no server-software: active-directory client-software: sssd required-package: oddjob required-package: oddjob-mkhomedir required-package: sssd required-package: adcli required-package: samba-common-tools", "domain_resolution_order = subdomain2.ad.example.com, subdomain1.ad.example.com, ad.example.com", "systemctl restart sssd", "id <user_from_subdomain2> uid=1916901142(user_from_subdomain2) gid=1916900513(domain users) groups=1916900513(domain users)", "realm permit --all", "realm permit [email protected] realm permit 'AD.EXAMPLE.COM\\aduser01'", "ssh [email protected]@ server_name [[email protected]@ server_name ~]USD", "ssh [email protected]@ server_name Authentication failed.", "realm deny --all", "realm list example.net type: kerberos realm-name: EXAMPLE.NET domain-name: example.net configured: kerberos-member server-software: active-directory client-software: sssd required-package: oddjob required-package: oddjob-mkhomedir required-package: sssd required-package: adcli required-package: samba-common-tools login-formats: %[email protected] login-policy: deny-any-login", "realm permit -x 'AD.EXAMPLE.COM\\aduser02'", "ssh [email protected]@ server_name Authentication failed.", "Oct 31 03:00:13 client1 sshd[124914]: pam_sss(sshd:account): Access denied for user aduser1: 6 (Permission denied) Oct 31 03:00:13 client1 sshd[124914]: Failed password for aduser1 from 127.0.0.1 port 60509 ssh2 Oct 31 03:00:13 client1 sshd[124914]: fatal: Access denied for user aduser1 by PAM account configuration [preauth]", "(Sat Oct 31 03:00:13 2020) [sssd[be[example.com]]] [ad_gpo_perform_hbac_processing] (0x0040): GPO access check failed: [1432158236](Host Access Denied) (Sat Oct 31 03:00:13 2020) [sssd[be[example.com]]] [ad_gpo_cse_done] (0x0040): HBAC processing failed: [1432158236](Host Access Denied} (Sat Oct 31 03:00:13 2020) [sssd[be[example.com]]] [ad_gpo_access_done] (0x0040): GPO-based access control failed.", "systemctl stop sssd", "[domain/ example.com ] ad_gpo_access_control= permissive", "systemctl restart sssd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/integrating_rhel_systems_directly_with_windows_active_directory/managing-direct-connections-to-ad_integrating-rhel-systems-directly-with-active-directory
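When GPO-based access control blocks logins unexpectedly, the troubleshooting flow from section 4.5.4 above can be followed on the client while the GPO itself is corrected in AD. The sketch below is an illustration under assumptions: the domain is the example.com domain used in this chapter, the per-domain log file name follows SSSD's usual sssd_<domain>.log pattern, the grep strings are taken from the example log messages shown earlier, and editing sssd.conf by hand is only one way to change the option.

# Temporarily evaluate GPO rules without enforcing them
systemctl stop sssd
# In /etc/sssd/sssd.conf, set the following in the [domain/example.com] section:
#   ad_gpo_access_control = permissive
systemctl restart sssd
# Reproduce the failed login, then review what would have been denied
grep 'Access denied' /var/log/secure
grep 'GPO-based access control' /var/log/sssd/sssd_example.com.log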
Chapter 1. Get Started Developing Applications
Chapter 1. Get Started Developing Applications 1.1. About Jakarta EE 1.1.1. Jakarta EE 8 JBoss EAP 7 is a Jakarta EE 8-compatible implementation for both Jakarta EE Web Profile and Jakarta EE Platform specifications. For information about Jakarta EE 8, see About Jakarta EE . 1.1.2. Overview of Jakarta EE Profiles Jakarta EE defines different profiles. Each profile is a subset of APIs that represent configurations that are suited to specific classes of applications. Jakarta EE 8 defines specifications for the Web Profile and the Platform profiles. A product can choose to implement the Platform, the Web Profile, or one or more custom profiles, in any combination. Jakarta EE Web Profile includes a selected subset of APIs that are designed to be useful for web application development. Jakarta EE Platform profile includes the APIs defined by the Jakarta EE 8 Web Profile, plus the complete set of Jakarta EE 8 APIs that are useful for enterprise application development. JBoss EAP 7.4 is a Jakarta EE 8 compatible implementation for Web Profile and Full Platform specifications. See Jakarta EE Specifications for the complete list of Jakarta EE 8 APIs. 1.2. Setting Up the Development Environment Download and install Red Hat CodeReady Studio. For instructions, see Installing CodeReady Studio stand-alone using the Installer in the Red Hat CodeReady Studio Installation Guide . Set up the JBoss EAP server in Red Hat CodeReady Studio. For instructions, see Downloading, Installing, and Setting Up JBoss EAP from within the IDE in the Getting Started with CodeReady Studio Tools guide. 1.3. Configure Annotation Processing in Red Hat CodeReady Studio Annotation Processing (AP) is turned off by default in Eclipse. If your project generates implementation classes, this can result in java.lang.ExceptionInInitializerError exceptions, followed by CLASS_NAME (implementation not found) error messages when you deploy your project. You can resolve these issues in one of the following ways. You can enable annotation processing for the individual project or you can enable annotation processing globally for all Red Hat CodeReady Studio projects . Enable Annotation Processing for an Individual Project To enable annotation processing for a specific project, you must add the m2e.apt.activation property with a value of jdt_apt to the project's pom.xml file. <properties> <m2e.apt.activation>jdt_apt</m2e.apt.activation> </properties> You can find examples of this technique in the pom.xml files for the logging-tools and kitchensink-ml quickstarts that ship with JBoss EAP. Enable Annotation Processing Globally in Red Hat CodeReady Studio Select Window Preferences . Expand Maven , and select Annotation Processing . Under Select Annotation Processing Mode , select Automatically configure JDT APT (builds faster , but outcome may differ from Maven builds) , then click Apply and Close . 1.4. Configure the Default Welcome Web Application JBoss EAP includes a default Welcome application, which displays at the root context on port 8080 by default. This default Welcome application can be replaced with your own web application. This can be configured in one of two ways: Change the welcome-content file handler Change the default-web-module You can also disable the welcome content . Change the welcome-content File Handler Modify the existing welcome-content file handler's path to point to the new deployment. Note Alternatively, you could create a different file handler to be used by the server's root. 
Reload the server for the changes to take effect. Change the default-web-module Map a deployed web application to the server's root. Reload the server for the changes to take effect. Disable the Default Welcome Web Application Disable the welcome application by removing the location entry / for the default-host . Reload the server for the changes to take effect.
[ "<properties> <m2e.apt.activation>jdt_apt</m2e.apt.activation> </properties>", "/subsystem=undertow/configuration=handler/file=welcome-content:write-attribute(name=path,value=\" /path/to/content \")", "/subsystem=undertow/configuration=handler/file= NEW_FILE_HANDLER :add(path=\" /path/to/content \") /subsystem=undertow/server=default-server/host=default-host/location=\\/:write-attribute(name=handler,value= NEW_FILE_HANDLER )", "reload", "/subsystem=undertow/server=default-server/host=default-host:write-attribute(name=default-web-module,value=hello.war)", "reload", "/subsystem=undertow/server=default-server/host=default-host/location=\\/:remove", "reload" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/get_started_developing_applications
Red Hat build of OpenTelemetry
Red Hat build of OpenTelemetry OpenShift Container Platform 4.13 Configuring and using the Red Hat build of OpenTelemetry in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: opentelemetry-operator-controller-manager-metrics-service namespace: openshift-opentelemetry-operator spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token path: /metrics port: https scheme: https tlsConfig: insecureSkipVerify: true selector: matchLabels: app.kubernetes.io/name: opentelemetry-operator control-plane: controller-manager --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: otel-operator-prometheus subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring", "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]", "oc login --username=<your_username>", "oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-opentelemetry-operator openshift.io/cluster-monitoring: \"true\" name: openshift-opentelemetry-operator EOF", "oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-opentelemetry-operator namespace: openshift-opentelemetry-operator spec: upgradeStrategy: Default EOF", "oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: opentelemetry-product namespace: openshift-opentelemetry-operator spec: channel: stable installPlanApproval: Automatic name: opentelemetry-product source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-opentelemetry-operator", "oc new-project <project_of_opentelemetry_collector_instance>", "oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_opentelemetry_collector_instance> EOF", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: 
grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]", "oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF", "oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml", "oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus]", "receivers:", "processors:", "exporters:", "connectors:", "extensions:", "service: pipelines:", "service: pipelines: traces: receivers:", "service: pipelines: traces: processors:", "service: pipelines: traces: exporters:", "service: pipelines: metrics: receivers:", "service: pipelines: metrics: processors:", "service: pipelines: metrics: exporters:", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator", "config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp]", "config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger]", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> --- apiVersion: 
security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics]", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - \"\" resources: - events - pods verbs: - get - list - watch - apiGroups: - \"events.k8s.io\" resources: - events verbs: - watch - list --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 
6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug]", "config: receivers: kubeletstats: collection_interval: 20s auth_type: \"serviceAccount\" endpoint: \"https://USD{env:K8S_NODE_NAME}:10250\" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [\"\"] resources: [\"nodes/proxy\"] 1 verbs: [\"get\"]", "config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus]", "config: otlpjsonfile: include: - \"/var/log/*.log\" 1 exclude: - \"/var/log/test.log\" 2", "config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin]", "config: receivers: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka]", "config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug]", "apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default", "config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus]", "config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev", "apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" pod-security.kubernetes.io/enforce: \"privileged\" 
pod-security.kubernetes.io/audit: \"privileged\" pod-security.kubernetes.io/warn: \"privileged\" --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch", "serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events]", "config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch]", "config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [batch] metrics: processors: [batch]", "kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection]", "config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false", "config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int", "config: 
processors: attributes: - key: cloud.availability_zone value: \"zone-1\" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete", "config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2", "config: processors: span/to_attributes: name: to_attributes: rules: - ^\\/api\\/v1\\/document\\/(?P<documentId>.*)\\/updateUSD 1", "config: processors: span/set_status: status: code: Error description: \"<error_description>\"", "kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list']", "config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME", "config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes[\"container.name\"] == \"app_container_1\"' 2 - 'resource.attributes[\"host.name\"] == \"localhost\"' 3", "config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250", "config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - \"<regular_expression_for_metric_names>\"", "config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2>", "config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string>", "config: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\", \"process.command_line\"]) 2 - replace_pattern(attributes[\"process.command_line\"], \"password\\\\=[^\\\\s]*(\\\\s?)\", \"password=***\") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes[\"http.path\"] == \"/health\" - set(name, attributes[\"http.route\"]) - replace_match(attributes[\"http.target\"], \"/user/*/list/*\", \"/user/{userId}/list/{listId}\") - limit(attributes, 100, []) - truncate_all(attributes, 4096)", "config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false # 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: \"dev\" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp]", "config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: \"dev\" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp]", "config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug]", "config: exporters: loadbalancing: routing_key: \"service\" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317", "config: exporters: prometheus: endpoint: 0.0.0.0:8889 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 3 
const_labels: 4 label1: value1 enable_open_metrics: true 5 resource_to_telemetry_conversion: 6 enabled: true metric_expiration: 180m 7 add_metric_suffixes: false 8 service: pipelines: metrics: exporters: [prometheus]", "config: exporters: prometheusremotewrite: endpoint: \"https://my-prometheus:7900/api/v1/push\" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite]", "config: exporters: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka]", "config: exporters: awscloudwatchlogs: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5", "config: exporters: awsemf: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7", "config: exporters: awsxray: region: \"<region>\" 1 endpoint: <endpoint> 2 resource_arn: \"<aws_resource_arn>\" 3 role_arn: \"<iam_role>\" 4 indexed_attributes: [ \"<indexed_attr_0>\", \"<indexed_attr_1>\" ] 5 aws_log_groups: [\"<group1>\", \"<group2>\"] 6 request_timeout_seconds: 120 7", "config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9", "config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus]", "config: connectors: count: spans: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" conditions: - 'attributes[\"env\"] == \"dev\"' - 'name == \"devevent\"'", "config: connectors: count: logs: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" attributes: - key: env default_value: unknown 3", "config: connectors: routing: table: 1 - statement: route() where attributes[\"X-Tenant\"] == \"dev\" 2 pipelines: [traces/dev] 3 - statement: route() where attributes[\"X-Tenant\"] == \"prod\" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod]", "config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp]", "config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: 
exporters: [spanmetrics] metrics: receivers: [spanmetrics]", "config: extensions: bearertokenauth: scheme: \"Bearer\" 1 token: \"<token>\" 2 filename: \"<token_file>\" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp]", "config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: [\"api.metrics\"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp]", "config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp]", "config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug]", "config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug]", "{ \"service_strategies\": [ { \"service\": \"foo\", \"type\": \"probabilistic\", \"param\": 0.8, \"operation_strategies\": [ { \"operation\": \"op1\", \"type\": \"probabilistic\", \"param\": 0.2 }, { \"operation\": \"op2\", \"type\": \"probabilistic\", \"param\": 0.4 } ] }, { \"service\": \"bar\", \"type\": \"ratelimiting\", \"param\": 5 } ], \"default_strategy\": { \"type\": \"probabilistic\", \"param\": 0.5, \"operation_strategies\": [ { \"operation\": \"/health\", \"type\": \"probabilistic\", \"param\": 0.0 }, { \"operation\": \"/metrics\", \"type\": \"probabilistic\", \"param\": 0.0 } ] } }", "config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug]", "config: extensions: health_check: endpoint: \"0.0.0.0:13133\" 1 tls: 2 ca_file: \"/path/to/ca.crt\" cert_file: \"/path/to/cert.crt\" key_file: \"/path/to/key.key\" path: \"/health/status\" 3 check_collector_pipeline: 4 enabled: true 5 interval: \"5m\" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug]", "config: extensions: zpages: endpoint: \"localhost:55679\" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug]", 
"oc port-forward pod/USD(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [\"\"] resources: - services - pods - namespaces verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"monitoring.coreos.com\"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"discovery.k8s.io\"] resources: - endpointslices verbs: [\"get\", \"list\", \"watch\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2", "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: \"20\" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: \"0.25\" java: env: - name: OTEL_JAVAAGENT_DEBUG value: \"true\"", "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3", "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5", "apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: \"true\" --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt", "instrumentation.opentelemetry.io/inject-apache-httpd: \"true\"", "instrumentation.opentelemetry.io/inject-dotnet: \"true\"", "instrumentation.opentelemetry.io/inject-go: \"true\"", "apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - \"SYS_PTRACE\" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny", "oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>", "instrumentation.opentelemetry.io/inject-java: \"true\"", "instrumentation.opentelemetry.io/inject-nodejs: \"true\" instrumentation.opentelemetry.io/otel-go-auto-target-exe: \"/path/to/container/executable\"", "instrumentation.opentelemetry.io/inject-python: \"true\"", 
"instrumentation.opentelemetry.io/container-names: \"<container_1>,<container_2>\"", "instrumentation.opentelemetry.io/<application_language>-container-names: \"<container_1>,<container_2>\" 1", "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]", "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-<example>-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: \":8888\" pipelines: metrics: exporters: [prometheus]", "apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: 
<cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespce: observability annotations: service.beta.openshift.io/inject-cabundle: \"true\" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__=\"<metric_name>\"}' 4 metrics_path: '/federate' static_configs: - targets: - \"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: {} otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-simplest-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]", "apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest args: - traces - --otlp-endpoint=otel-collector:4317 - --otlp-insecure - --duration=30s - --workers=1 restartPolicy: Never backoffLimit: 4", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: openshift-logging", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-logs-writer rules: - apiGroups: [\"loki.grafana.com\"] 
resourceNames: [\"logs\"] resources: [\"application\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"pods\", \"namespaces\", \"nodes\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"extensions\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-logs-writer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-collector-logs-writer subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: openshift-logging", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: openshift-logging spec: serviceAccount: otel-collector-deployment config: extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" receivers: otlp: protocols: grpc: {} http: {} processors: k8sattributes: {} resource: attributes: 1 - key: kubernetes.namespace_name from_attribute: k8s.namespace.name action: upsert - key: kubernetes.pod_name from_attribute: k8s.pod.name action: upsert - key: kubernetes.container_name from_attribute: k8s.container.name action: upsert - key: log_type value: application action: upsert transform: log_statements: - context: log statements: - set(attributes[\"level\"], ConvertCase(severity_text, \"lower\")) exporters: otlphttp: endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp encoding: json tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth debug: verbosity: detailed service: extensions: [bearertokenauth] 2 pipelines: logs: receivers: [otlp] processors: [k8sattributes, transform, resource] exporters: [otlphttp] 3 logs/test: receivers: [otlp] processors: [] exporters: [debug]", "apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.106.1 args: - logs - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317 - --otlp-insecure - --duration=180s - --workers=1 - --logs=10 - --otlp-attributes=k8s.container.name=\"telemetrygen\" restartPolicy: Never backoffLimit: 4", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: <name> spec: observability: metrics: enableMetrics: true", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {}", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 2 issuerRef: name: ca-issuer", "apiVersion: 
v1 kind: ServiceAccount metadata: name: otel-collector-deployment", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: \"deployment\" ingress: type: route route: termination: \"passthrough\" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: \"tempo-<simplest>-distributor:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs", "oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1", "config: service: telemetry: logs: level: debug 1", "config: service: telemetry: metrics: address: \":8888\" 1", "oc port-forward <collector_pod>", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true", "config: exporters: debug: verbosity: detailed service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] logs: exporters: [debug]", "oc get instrumentation -n <workload_project> 1", "oc get events -n <workload_project> 1", "... Created container opentelemetry-auto-instrumentation ... 
Started container opentelemetry-auto-instrumentation", "oc logs -l app.kubernetes.io/name=opentelemetry-operator --container manager -n openshift-opentelemetry-operator --follow", "instrumentation.opentelemetry.io/inject-python=\"true\"", "oc get pods -n <workload_project> -o jsonpath='{range .items[?(@.metadata.annotations[\"instrumentation.opentelemetry.io/inject-python\"]==\"true\")]}{.metadata.name}{\"\\n\"}{end}'", "instrumentation.opentelemetry.io/inject-nodejs: \"<instrumentation_object>\"", "instrumentation.opentelemetry.io/inject-nodejs: \"<other_namespace>/<instrumentation_object>\"", "oc get instrumentation <instrumentation_name> -n <workload_project> -o jsonpath='{.spec.endpoint}'", "oc logs <application_pod> -n <workload_project>", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-example-gateway:8090\" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]", "exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1", "oc login --username=<your_username>", "oc get deployments -n 
<project_of_opentelemetry_instance>", "oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>", "oc get deployments -n <project_of_opentelemetry_instance>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/red_hat_build_of_opentelemetry/index
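The auto-instrumentation steps above (the inject-python and inject-nodejs annotations and the oc get instrumentation checks) assume that an Instrumentation custom resource already exists in the workload project; the excerpt does not show one. The following is only a hedged sketch of such a resource, assuming the Instrumentation API (opentelemetry.io/v1alpha1) provided by the OpenTelemetry Operator and a Collector Service reachable inside the cluster. The resource name, namespace, endpoint, and sampling ratio are placeholders, not values from the original:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation            # placeholder name
  namespace: <workload_project>       # placeholder namespace
spec:
  exporter:
    # assumed Collector Service; use port 4318 for agents that export OTLP over HTTP
    endpoint: http://otel-collector.<workload_project>.svc.cluster.local:4317
  propagators:
    - tracecontext
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "0.25"                  # placeholder sampling ratio

With such an object in place, the instrumentation.opentelemetry.io/inject-python="true" annotation shown earlier resolves to the Instrumentation object in the same namespace, while the <namespace>/<instrumentation_object> form shown for Node.js selects one explicitly.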
function::sock_state_str2num
function::sock_state_str2num Name function::sock_state_str2num - Given a socket state string, return the corresponding state number. Synopsis Arguments state The socket state name to convert.
[ "function sock_state_str2num:long(state:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-sock-state-str2num
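A quick way to try this tapset function is a one-off stap session. This is a sketch rather than part of the reference page: it assumes the sock tapset is installed with SystemTap and that "CONNECTED" is among the state names the tapset recognizes.

# print the numeric value for an assumed state name, then exit
stap -e 'probe begin { printf("CONNECTED -> %d\n", sock_state_str2num("CONNECTED")) exit() }'

The companion function sock_state_num2str performs the reverse mapping from number to name.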
Chapter 4. OADP Application backup and restore
Chapter 4. OADP Application backup and restore 4.1. Introduction to OpenShift API for Data Protection The OpenShift API for Data Protection (OADP) product safeguards customer applications on OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs). However, OADP does not serve as a disaster recovery solution for etcd or OpenShift Operators. 4.1.1. OpenShift API for Data Protection APIs OpenShift API for Data Protection (OADP) provides APIs that enable multiple approaches to customizing backups and preventing the inclusion of unnecessary or inappropriate resources. OADP provides the following APIs: Backup Restore Schedule BackupStorageLocation VolumeSnapshotLocation Additional resources Backing up etcd 4.2. OADP release notes The release notes for OpenShift API for Data Protection (OADP) describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues. 4.2.1. OADP 1.2.3 release notes 4.2.1.1. New features There are no new features in the release of OpenShift API for Data Protection (OADP) 1.2.3. 4.2.1.2. Resolved issues The following highlighted issues are resolved in OADP 1.2.3: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) In releases of OADP 1.2, the HTTP/2 protocol was susceptible to a denial of service attack because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection. This resulted in a denial of service due to server resource consumption. For a list of all OADP issues associated with this CVE, see the following Jira list . For more information, see CVE-2023-39325 (Rapid Reset Attack) . For a complete list of all issues resolved in the release of OADP 1.2.3, see the list of OADP 1.2.3 resolved issues in Jira. 4.2.1.3. Known issues There are no known issues in the release of OADP 1.2.3. 4.2.2. OADP 1.2.2 release notes 4.2.2.1. New features There are no new features in the release of OpenShift API for Data Protection (OADP) 1.2.2. 4.2.2.2. Resolved issues The following highlighted issues are resolved in OADP 1.2.2: Restic restore partially failed due to a Pod Security standard In releases of OADP 1.2, OpenShift Container Platform 4.14 enforced a pod security admission (PSA) policy that hindered the readiness of pods during a Restic restore process. This issue has been resolved in the release of OADP 1.2.2, and also OADP 1.1.6. Therefore, it is recommended that users upgrade to these releases. For more information, see Restic restore partially failing on OCP 4.14 due to changed PSA policy . (OADP-2094) Backup of an app with internal images partially failed with plugin panicked error In releases of OADP 1.2, the backup of an application with internal images partially failed with plugin panicked error returned. 
The backup partially fails with this error in the Velero logs: time="2022-11-23T15:40:46Z" level=info msg="1 errors encountered backup up item" backup=openshift-adp/django-persistent-67a5b83d-6b44-11ed-9cba-902e163f806c logSource="/remote-source/velero/app/pkg/backup/backup.go:413" name=django-psql-persistent time="2022-11-23T15:40:46Z" level=error msg="Error backing up item" backup=openshift-adp/django-persistent-67a5b83d-6b44-11ed-9cba-902e163f8 This issue has been resolved in OADP 1.2.2. (OADP-1057) . ACM cluster restore was not functioning as expected due to restore order In releases of OADP 1.2, ACM cluster restore was not functioning as expected due to restore order. ACM applications were removed and re-created on managed clusters after restore activation. (OADP-2505) VM's using filesystemOverhead failed when backing up and restoring due to volume size mismatch In releases of OADP 1.2, due to storage provider implementation choices, whenever there was a difference between the application persistent volume claims (PVCs) storage request and the snapshot size of the same PVC, VM's using filesystemOverhead failed when backing up and restoring. This issue has been resolved in the Data Mover of OADP 1.2.2. (OADP-2144) OADP did not contain an option to set VolSync replication source prune interval In releases of OADP 1.2, there was no option to set the VolSync replication source pruneInterval . (OADP-2052) Possible pod volume backup failure if Velero was installed in multiple namespaces In releases of OADP 1.2, there was a possibility of pod volume backup failure if Velero was installed in multiple namespaces. (OADP-2409) Backup Storage Locations moved to unavailable phase when VSL uses custom secret In releases of OADP 1.2, Backup Storage Locations moved to unavailable phase when Volume Snapshot Location used custom secret. (OADP-1737) For a complete list of all issues resolved in the release of OADP 1.2.2, see the list of OADP 1.2.2 resolved issues in Jira. 4.2.2.3. Known issues The following issues have been highlighted as known issues in the release of OADP 1.2.2: Must-gather command fails to remove ClusterRoleBinding resources The oc adm must-gather command fails to remove ClusterRoleBinding resources, which are left on cluster due to admission webhook. Therefore, requests for the removal of the ClusterRoleBinding resources are denied. (OADP-27730) admission webhook "clusterrolebindings-validation.managed.openshift.io" denied the request: Deleting ClusterRoleBinding must-gather-p7vwj is not allowed For a complete list of all known issues in this release, see the list of OADP 1.2.2 known issues in Jira. 4.2.3. OADP 1.2.1 release notes 4.2.3.1. New features There are no new features in the release of OpenShift API for Data Protection (OADP) 1.2.1. 4.2.3.2. Resolved issues For a complete list of all issues resolved in the release of OADP 1.2.1, see the list of OADP 1.2.1 resolved issues in Jira. 4.2.3.3. Known issues The following issues have been highlighted as known issues in the release of OADP 1.2.1: DataMover Restic retain and prune policies do not work as expected The retention and prune features provided by VolSync and Restic are not working as expected. Because there is no working option to set the prune interval on VolSync replication, you have to manage and prune remotely stored backups on S3 storage outside of OADP. For more details, see: OADP-2052 OADP-2048 OADP-2175 OADP-1690 Important OADP Data Mover is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . For a complete list of all known issues in this release, see the list of OADP 1.2.1 known issues in Jira. 4.2.4. OADP 1.2.0 release notes The OADP 1.2.0 release notes include information about new features, bug fixes, and known issues. 4.2.4.1. New features Resource timeouts The new resourceTimeout option specifies the timeout duration in minutes for waiting on various Velero resources. This option applies to resources such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default duration is 10 minutes. AWS S3 compatible backup storage providers You can back up objects and snapshots on AWS S3 compatible providers. For more details, see Configuring Amazon Web Services . 4.2.4.1.1. Technical preview features Data Mover The OADP Data Mover enables you to back up Container Storage Interface (CSI) volume snapshots to a remote object store. When you enable Data Mover, you can restore stateful applications using CSI volume snapshots pulled from the object store in case of accidental cluster deletion, cluster failure, or data corruption. For more information, see Using Data Mover for CSI snapshots . Important OADP Data Mover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 4.2.4.2. Resolved issues For a complete list of all issues resolved in this release, see the list of OADP 1.2.0 resolved issues in Jira. 4.2.4.3. Known issues The following issues have been highlighted as known issues in the release of OADP 1.2.0: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) The HTTP/2 protocol is susceptible to a denial of service attack because request cancellation can reset multiple streams quickly. The server has to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection. This results in a denial of service due to server resource consumption. For a list of all OADP issues associated with this CVE, see the following Jira list . It is advised to upgrade to OADP 1.2.3, which resolves this issue. For more information, see CVE-2023-39325 (Rapid Reset Attack) . 4.2.5. OADP 1.1.7 release notes The OADP 1.1.7 release notes lists any resolved issues and known issues. 4.2.5.1. Resolved issues The following highlighted issues are resolved in OADP 1.1.7: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) In releases of OADP 1.1, the HTTP/2 protocol was susceptible to a denial of service attack because request cancellation could reset multiple streams quickly. 
The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection. This resulted in a denial of service due to server resource consumption. For a list of all OADP issues associated with this CVE, see the following Jira list . For more information, see CVE-2023-39325 (Rapid Reset Attack) . For a complete list of all issues resolved in the release of OADP 1.1.7, see the list of OADP 1.1.7 resolved issues in Jira. 4.2.5.2. Known issues There are no known issues in the release of OADP 1.1.7. 4.2.6. OADP 1.1.6 release notes The OADP 1.1.6 release notes list any new features, resolved issues and bugs, and known issues. 4.2.6.1. Resolved issues Restic restore partially failing due to Pod Security standard OCP 4.14 introduced pod security standards under which the privileged profile is enforced. In releases of OADP, this profile caused the pod to receive permission denied errors. This issue was caused by the restore order. The pod was created before the security context constraints (SCC) resource. As this pod violated the pod security standard, the pod was denied and subsequently failed. OADP-2420 Restore partially failing for job resource In releases of OADP, the restore of the job resource was partially failing in OCP 4.14. This issue was not seen in older OCP versions. The issue was caused by an additional label being added to the job resource, which was not present in older OCP versions. OADP-2530 For a complete list of all issues resolved in this release, see the list of OADP 1.1.6 resolved issues in Jira. 4.2.6.2. Known issues For a complete list of all known issues in this release, see the list of OADP 1.1.6 known issues in Jira. 4.2.7. OADP 1.1.5 release notes The OADP 1.1.5 release notes list any new features, resolved issues and bugs, and known issues. 4.2.7.1. New features This version of OADP is a service release. No new features are added to this version. 4.2.7.2. Resolved issues For a complete list of all issues resolved in this release, see the list of OADP 1.1.5 resolved issues in Jira. 4.2.7.3. Known issues For a complete list of all known issues in this release, see the list of OADP 1.1.5 known issues in Jira. 4.2.8. OADP 1.1.4 release notes The OADP 1.1.4 release notes list any new features, resolved issues and bugs, and known issues. 4.2.8.1. New features This version of OADP is a service release. No new features are added to this version. 4.2.8.2. Resolved issues Add support for all the velero deployment server arguments In releases of OADP, OADP did not support all of the upstream Velero server arguments. This issue has been resolved in OADP 1.1.4 and all the upstream Velero server arguments are supported. OADP-1557 Data Mover can restore from an incorrect snapshot when there was more than one VSR for the restore name and pvc name In releases of OADP, OADP Data Mover could restore from an incorrect snapshot if there was more than one Volume Snapshot Restore (VSR) resource in the cluster for the same Velero restore name and PersistentVolumeClaim (pvc) name. OADP-1822 Cloud Storage API BSLs need OwnerReference In releases of OADP, ACM BackupSchedules failed validation because of a missing OwnerReference on Backup Storage Locations (BSLs) created with dpa.spec.backupLocations.bucket . OADP-1511 For a complete list of all issues resolved in this release, see the list of OADP 1.1.4 resolved issues in Jira. 4.2.8.3.
Known issues This release has the following known issues: OADP backups might fail because a UID/GID range might have changed on the cluster OADP backups might fail because a UID/GID range might have changed on the cluster where the application has been restored, with the result that OADP does not back up and restore OpenShift Container Platform UID/GID range metadata. To avoid the issue, if the backed-up application requires a specific UID/GID range, ensure that the range is available when restored. An additional workaround is to allow OADP to create the namespace in the restore operation. A restoration might fail if ArgoCD is used during the process due to a label used by ArgoCD A restoration might fail if ArgoCD is used during the process due to a label used by ArgoCD, app.kubernetes.io/instance . This label identifies which resources ArgoCD needs to manage, which can create a conflict with OADP's procedure for managing resources on restoration. To work around this issue, set .spec.resourceTrackingMethod on the ArgoCD YAML to annotation+label or annotation . If the issue persists, disable ArgoCD before beginning to restore, and enable it again when restoration is finished. OADP Velero plugins returning "received EOF, stopping recv loop" message Velero plugins are started as separate processes. When the Velero operation has completed, either successfully or not, they exit. Therefore, if you see received EOF, stopping recv loop messages in debug logs, it does not mean an error occurred. The message indicates that a plugin operation has completed. OADP-2176 For a complete list of all known issues in this release, see the list of OADP 1.1.4 known issues in Jira. 4.2.9. OADP 1.1.3 release notes The OADP 1.1.3 release notes list any new features, resolved issues and bugs, and known issues. 4.2.9.1. New features This version of OADP is a service release. No new features are added to this version. 4.2.9.2. Resolved issues For a complete list of all issues resolved in this release, see the list of OADP 1.1.3 resolved issues in Jira. 4.2.9.3. Known issues For a complete list of all known issues in this release, see the list of OADP 1.1.3 known issues in Jira. 4.2.10. OADP 1.1.2 release notes The OADP 1.1.2 release notes include product recommendations, a list of fixed bugs and descriptions of known issues. 4.2.10.1. Product recommendations VolSync To prepare for the upgrade from VolSync 0.5.1 to the latest version available from the VolSync stable channel, you must add this annotation to the openshift-adp namespace by running the following command: $ oc annotate --overwrite namespace/openshift-adp volsync.backube/privileged-movers='true' Velero In this release, Velero has been upgraded from version 1.9.2 to version 1.9.5 . Restic In this release, Restic has been upgraded from version 0.13.1 to version 0.14.0 . 4.2.10.2. Resolved issues The following issues have been resolved in this release: OADP-1150 OADP-290 OADP-1056 4.2.10.3. Known issues This release has the following known issues: OADP currently does not support backup and restore of AWS EFS volumes using restic in Velero ( OADP-778 ). CSI backups might fail due to a Ceph limitation of VolumeSnapshotContent snapshots per PVC. You can create many snapshots of the same persistent volume claim (PVC) but cannot schedule periodic creation of snapshots: For CephFS, you can create up to 100 snapshots per PVC. ( OADP-804 ) For RADOS Block Device (RBD), you can create up to 512 snapshots for each PVC.
( OADP-975 ) For more information, see Volume Snapshots . 4.2.11. OADP 1.1.1 release notes The OADP 1.1.1 release notes include product recommendations and descriptions of known issues. 4.2.11.1. Product recommendations Before you install OADP 1.1.1, it is recommended to either install VolSync 0.5.1 or to upgrade to it. 4.2.11.2. Known issues This release has the following known issues: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) The HTTP/2 protocol is susceptible to a denial of service attack because request cancellation can reset multiple streams quickly. The server has to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection. This results in a denial of service due to server resource consumption. For a list of all OADP issues associated with this CVE, see the following Jira list . It is advised to upgrade to OADP 1.1.7 or 1.2.3, which resolve this issue. For more information, see CVE-2023-39325 (Rapid Reset Attack) . OADP currently does not support backup and restore of AWS EFS volumes using restic in Velero ( OADP-778 ). CSI backups might fail due to a Ceph limitation of VolumeSnapshotContent snapshots per PVC. You can create many snapshots of the same persistent volume claim (PVC) but cannot schedule periodic creation of snapshots: For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots for each PVC. ( OADP-804 ) and ( OADP-975 ) For more information, see Volume Snapshots . 4.3. OADP features and plugins OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications. The default plugins enable Velero to integrate with certain cloud providers and to back up and restore OpenShift Container Platform resources. 4.3.1. OADP features OpenShift API for Data Protection (OADP) supports the following features: Backup You can use OADP to back up all applications on the OpenShift Platform, or you can filter the resources by type, namespace, or label. OADP backs up Kubernetes objects and internal images by saving them as an archive file on object storage. OADP backs up persistent volumes (PVs) by creating snapshots with the native cloud snapshot API or with the Container Storage Interface (CSI). For cloud providers that do not support snapshots, OADP backs up resources and PV data with Restic. Note You must exclude Operators from the backup of an application for backup and restore to succeed. Restore You can restore resources and PVs from a backup. You can restore all objects in a backup or filter the objects by namespace, PV, or label. Note You must exclude Operators from the backup of an application for backup and restore to succeed. Schedule You can schedule backups at specified intervals. Hooks You can use hooks to run commands in a container on a pod, for example, fsfreeze to freeze a file system. You can configure a hook to run before or after a backup or restore. Restore hooks can run in an init container or in the application container. 4.3.2. OADP plugins The OpenShift API for Data Protection (OADP) provides default Velero plugins that are integrated with storage providers to support backup and snapshot operations. You can create custom plugins based on the Velero plugins. OADP also provides plugins for OpenShift Container Platform resource backups, OpenShift Virtualization resource backups, and Container Storage Interface (CSI) snapshots. Table 4.1. 
OADP plugins OADP plugin Function Storage location aws Backs up and restores Kubernetes objects. AWS S3 Backs up and restores volumes with snapshots. AWS EBS azure Backs up and restores Kubernetes objects. Microsoft Azure Blob storage Backs up and restores volumes with snapshots. Microsoft Azure Managed Disks gcp Backs up and restores Kubernetes objects. Google Cloud Storage Backs up and restores volumes with snapshots. Google Compute Engine Disks openshift Backs up and restores OpenShift Container Platform resources. [1] Object store kubevirt Backs up and restores OpenShift Virtualization resources. [2] Object store csi Backs up and restores volumes with CSI snapshots. [3] Cloud storage that supports CSI snapshots Mandatory. Virtual machine disks are backed up with CSI snapshots or Restic. The csi plugin uses the Velero CSI beta snapshot API . 4.3.3. About OADP Velero plugins You can configure two types of plugins when you install Velero: Default cloud provider plugins Custom plugins Both types of plugin are optional, but most users configure at least one cloud provider plugin. 4.3.3.1. Default Velero cloud provider plugins You can install any of the following default Velero cloud provider plugins when you configure the oadp_v1alpha1_dpa.yaml file during deployment: aws (Amazon Web Services) gcp (Google Cloud Platform) azure (Microsoft Azure) openshift (OpenShift Velero plugin) csi (Container Storage Interface) kubevirt (KubeVirt) You specify the desired default plugins in the oadp_v1alpha1_dpa.yaml file during deployment. Example file The following .yaml file installs the openshift , aws , azure , and gcp plugins: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp 4.3.3.2. Custom Velero plugins You can install a custom Velero plugin by specifying the plugin image and name when you configure the oadp_v1alpha1_dpa.yaml file during deployment. You specify the desired custom plugins in the oadp_v1alpha1_dpa.yaml file during deployment. Example file The following .yaml file installs the default openshift , azure , and gcp plugins and a custom plugin that has the name custom-plugin-example and the image quay.io/example-repo/custom-velero-plugin : apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin 4.3.3.3. Velero plugins returning "received EOF, stopping recv loop" message Note Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred. 4.3.4. Supported architectures for OADP OpenShift API for Data Protection (OADP) supports the following architectures: AMD64 ARM64 PPC64le s390x Note OADP 1.2.0 and later versions support the ARM64 architecture. 4.3.5. OADP support for IBM Power and IBM Z OpenShift API for Data Protection (OADP) is platform neutral. The information that follows relates only to IBM Power and to IBM Z. OADP 1.1.0 was tested successfully against OpenShift Container Platform 4.11 for both IBM Power and IBM Z. 
The sections that follow give testing and support information for OADP 1.1.0 in terms of backup locations for these systems. 4.3.5.1. OADP support for target backup locations using IBM Power IBM Power running with OpenShift Container Platform 4.11 and 4.12, and OpenShift API for Data Protection (OADP) 1.1.2 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power with OpenShift Container Platform 4.11 and 4.12, and OADP 1.1.2 against all non-AWS S3 backup location targets as well. 4.3.5.2. OADP testing and support for target backup locations using IBM Z IBM Z running with OpenShift Container Platform 4.11 and 4.12, and OpenShift API for Data Protection (OADP) 1.1.2 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z with OpenShift Container Platform 4.11 and 4.12, and OADP 1.1.2 against all non-AWS S3 backup location targets as well. 4.4. Installing and configuring OADP 4.4.1. About installing OADP As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The OADP Operator installs Velero 1.11 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the MTC Operator and are not available as a standalone Operator. To back up Kubernetes resources and internal images, you must have object storage as a backup location, such as one of the following storage types: Amazon Web Services Microsoft Azure Google Cloud Platform Multicloud Object Gateway AWS S3 compatible object storage, such as Multicloud Object Gateway or MinIO Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can back up persistent volumes (PVs) by using snapshots or Restic. To back up PVs with snapshots, you must have a cloud provider that supports either a native snapshot API or Container Storage Interface (CSI) snapshots, such as one of the following cloud providers: Amazon Web Services Microsoft Azure Google Cloud Platform CSI snapshot-enabled cloud provider, such as OpenShift Data Foundation Note If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1. x . OADP 1.0. x does not support CSI backup on OCP 4.11 and later. OADP 1.0. x includes Velero 1.7. x and expects the API group snapshot.storage.k8s.io/v1beta1 , which is not present on OCP 4.11 and later. If your cloud provider does not support snapshots or if your storage is NFS, you can back up applications with Restic backups on object storage. 
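Before choosing between CSI snapshots and Restic as described above, it can help to confirm that the cluster actually exposes a CSI driver with a snapshot class. The following commands are a sketch and are not part of the original procedure; they only assume the standard CSIDriver and VolumeSnapshotClass APIs:

# list the registered CSI drivers and their snapshot classes
$ oc get csidrivers
$ oc get volumesnapshotclasses

If no VolumeSnapshotClass is listed for your storage provisioner, plan on Restic file system backups rather than CSI snapshots.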
You create a default Secret and then you install the Data Protection Application. 4.4.1.1. AWS S3 compatible backup storage providers OADP is compatible with many object storage providers for use with different backup and snapshot operations. Several object storage providers are fully supported, several are unsupported but known to work, and some have known limitations. 4.4.1.1.1. Supported backup storage providers The following AWS S3 compatible object storage providers are fully supported by OADP through the AWS plugin for use as backup storage locations: MinIO Multicloud Object Gateway (MCG) Amazon Web Services (AWS) S3 Note The following compatible object storage providers are supported and have their own Velero object store plugins: Google Cloud Platform (GCP) Microsoft Azure 4.4.1.1.2. Unsupported backup storage providers The following AWS S3 compatible object storage providers, are known to work with Velero through the AWS plugin, for use as backup storage locations, however, they are unsupported and have not been tested by Red Hat: IBM Cloud Oracle Cloud DigitalOcean NooBaa, unless installed using Multicloud Object Gateway (MCG) Tencent Cloud Ceph RADOS v12.2.7 Quobyte Cloudian HyperStore Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . 4.4.1.1.3. Backup storage providers with known limitations The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin with a limited feature set: Swift - It works for use as a backup storage location for backup storage, but is not compatible with Restic for filesystem-based volume backup and restore. 4.4.1.2. Configuring Multicloud Object Gateway (MCG) for disaster recovery on OpenShift Data Foundation If you use cluster storage for your MCG bucket backupStorageLocation on OpenShift Data Foundation, configure MCG as an external object store. Warning Failure to configure MCG as an external object store might lead to backups not being available. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Procedure Configure MCG as an external object store as described in Adding storage resources for hybrid or Multicloud . Additional resources Overview of backup and snapshot locations in the Velero documentation 4.4.1.3. About OADP update channels When you install an OADP Operator, you choose an update channel . This channel determines which upgrades to the OADP Operator and to Velero you receive. You can switch channels at any time. The following update channels are available: The stable channel is now deprecated. The stable channel contains the patches (z-stream updates) of OADP ClusterServiceVersion for oadp.v1.1.z and older versions from oadp.v1.0.z . The stable-1.0 channel contains oadp.v1.0. z , the most recent OADP 1.0 ClusterServiceVersion . The stable-1.1 channel contains oadp.v1.1. z , the most recent OADP 1.1 ClusterServiceVersion . The stable-1.2 channel contains oadp.v1.2. z , the most recent OADP 1.2 ClusterServiceVersion . The stable-1.3 channel contains oadp.v1.3. 
z , the most recent OADP 1.3 ClusterServiceVersion . Which update channel is right for you? The stable channel is now deprecated. If you are already using the stable channel, you will continue to get updates from oadp.v1.1. z . Choose the stable-1. y update channel to install OADP 1. y and to continue receiving patches for it. If you choose this channel, you will receive all z-stream patches for version 1. y . z . When must you switch update channels? If you have OADP 1. y installed, and you want to receive patches only for that y-stream, you must switch from the stable update channel to the stable-1. y update channel. You will then receive all z-stream patches for version 1. y . z . If you have OADP 1.0 installed, want to upgrade to OADP 1.1, and then receive patches only for OADP 1.1, you must switch from the stable-1.0 update channel to the stable-1.1 update channel. You will then receive all z-stream patches for version 1.1. z . If you have OADP 1. y installed, with y greater than 0, and want to switch to OADP 1.0, you must uninstall your OADP Operator and then reinstall it using the stable-1.0 update channel. You will then receive all z-stream patches for version 1.0. z . Note You cannot switch from OADP 1. y to OADP 1.0 by switching update channels. You must uninstall the Operator and then reinstall it. 4.4.1.4. Installation of OADP on multiple namespaces You can install OADP into multiple namespaces on the same cluster so that multiple project owners can manage their own OADP instance. This use case has been validated with Restic and CSI. You install each instance of OADP as specified by the per-platform procedures contained in this document with the following additional requirements: All deployments of OADP on the same cluster must be the same version, for example, 1.1.4. Installing different versions of OADP on the same cluster is not supported. Each individual deployment of OADP must have a unique set of credentials and a unique BackupStorageLocation configuration. By default, each OADP deployment has cluster-level access across namespaces. OpenShift Container Platform administrators need to review security and RBAC settings carefully and make any necessary changes to them to ensure that each OADP instance has the correct permissions. Additional resources Cluster service version 4.4.1.5. Velero CPU and memory requirements based on collected data The following recommendations are based on observations of performance made in the scale and performance lab. The backup and restore resources can be impacted by the type of plugin, the amount of resources required by that backup or restore, and the respective data contained in the persistent volumes (PVs) related to those resources. 4.4.1.5.1. CPU and memory requirement for configurations Configuration types [1] Average usage [2] Large usage resourceTimeouts CSI Velero: CPU- Request 200m, Limits 1000m Memory - Request 256Mi, Limits 1024Mi Velero: CPU- Request 200m, Limits 2000m Memory- Request 256Mi, Limits 2048Mi N/A Restic [3] Restic: CPU- Request 1000m, Limits 2000m Memory - Request 16Gi, Limits 32Gi [4] Restic: CPU - Request 2000m, Limits 8000m Memory - Request 16Gi, Limits 40Gi 900m [5] DataMover N/A N/A 10m - average usage 60m - large usage Average usage - use these settings for most usage situations. 
Large usage - use these settings for large usage situations, such as a large PV (500GB Usage), multiple namespaces (100+), or many pods within a single namespace (2000 pods+), and for optimal performance for backup and restore involving large datasets. Restic resource usage corresponds to the amount of data, and type of data. For example, many small files or large amounts of data can cause Restic to utilize large amounts of resources. The Velero documentation references 500m as a supplied default, for most of our testing we found 200m request suitable with 1000m limit. As cited in the Velero documentation, exact CPU and memory usage is dependent on the scale of files and directories, in addition to environmental limitations. Increasing the CPU has a significant impact on improving backup and restore times. DataMover - DataMover default resourceTimeout is 10m. Our tests show that for restoring a large PV (500GB usage), it is required to increase the resourceTimeout to 60m. Note The resource requirements listed throughout the guide are for average usage only. For large usage, adjust the settings as described in the table above. 4.4.1.5.2. NodeAgent CPU for large usage Testing shows that increasing NodeAgent CPU can significantly improve backup and restore times when using OpenShift API for Data Protection (OADP). Important It is not recommended to use Kopia without limits in production environments on nodes running production workloads due to Kopia's aggressive consumption of resources. However, running Kopia with limits that are too low results in CPU limiting and slow backups and restore situations. Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU limiting or memory saturation with these resource specifications. You can set these limits in Ceph MDS pods by following the procedure in Changing the CPU and memory resources on the rook-ceph pods . You need to add the following lines to the storage cluster Custom Resource (CR) to set the limits: resources: mds: limits: cpu: "3" memory: 128Gi requests: cpu: "3" memory: 8Gi 4.4.2. Installing the OADP Operator You can install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.11 by using Operator Lifecycle Manager (OLM). The OADP Operator installs Velero 1.11 . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the OADP Operator . Select the OADP Operator and click Install . Click Install to install the Operator in the openshift-adp project. Click Operators Installed Operators to verify the installation. 4.4.2.1. OADP-Velero-OpenShift Container Platform version relationship OADP version Velero version OpenShift Container Platform version 1.1.0 1.9 4.9 and later 1.1.1 1.9 4.9 and later 1.1.2 1.9 4.9 and later 1.1.3 1.9 4.9 and later 1.1.4 1.9 4.9 and later 1.1.5 1.9 4.9 and later 1.1.6 1.9 4.11 and later 1.1.7 1.9 4.11 and later 1.2.0 1.11 4.11 and later 1.2.1 1.11 4.11 and later 1.2.2 1.11 4.11 and later 1.2.3 1.11 4.11 and later 4.4.3. Configuring the OpenShift API for Data Protection with Amazon Web Services You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) by installing the OADP Operator. The Operator installs Velero 1.11 . 
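The Operator installation in section 4.4.2 uses the web console; the same result can be expressed declaratively with Operator Lifecycle Manager manifests. The following is a hedged sketch, not taken from the original: it assumes the package name redhat-oadp-operator in the redhat-operators catalog source and the stable-1.2 update channel, all of which you should verify in OperatorHub for your cluster before applying:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-adp
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: oadp-operator-group           # placeholder name
  namespace: openshift-adp
spec:
  targetNamespaces:
    - openshift-adp
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: redhat-oadp-operator
  namespace: openshift-adp
spec:
  channel: stable-1.2                 # assumed channel; see "About OADP update channels"
  name: redhat-oadp-operator          # assumed package name; confirm with: oc get packagemanifests -n openshift-marketplace | grep oadp
  source: redhat-operators
  sourceNamespace: openshift-marketplace

Apply the file with oc apply -f <file>.yaml and confirm the installation with oc get csv -n openshift-adp, which should report the OADP ClusterServiceVersion in the Succeeded phase.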
Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the MTC Operator and are not available as a standalone Operator. You configure AWS for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.4.3.1. Configuring Amazon Web Services You configure Amazon Web Services (AWS) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the AWS CLI installed. Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object for AWS before you install the Data Protection Application. 4.4.3.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify S3-compatible object storage, such as Multicloud Object Gateway or MinIO, as a backup location. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. 
If you use Restic, you do not need to specify a snapshot location because Restic backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.4.3.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.4.3.2.2. Creating profiles for different credentials If your backup and snapshot locations use different credentials, you create separate profiles in the credentials-velero file. Then, you create a Secret object and specify the profiles in the DataProtectionApplication custom resource (CR). Procedure Create a credentials-velero file with separate profiles for the backup and snapshot locations, as in the following example: [backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create a Secret object with the credentials-velero file: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1 Add the profiles to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: "backupStorage" credential: key: cloud name: cloud-credentials snapshotLocations: - name: default velero: provider: aws config: region: us-west-2 profile: "volumeSnapshot" 4.4.3.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.4.3.3.1. 
Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 1 Specify the node selector to be supplied to the Velero podSpec. 2 The resourceAllocations listed are for average usage. 4.4.3.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.4.3.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create a Secret with the default name, cloud-credentials , which contains separate profiles for the backup and snapshot location credentials. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Note Velero creates a secret named velero-repo-credentials in the OADP namespace, which contains a default backup repository password. You can update the secret with your own password encoded as base64 before you run your first backup targeted to the backup repository. The value of the key to update is Data[repository-password] . After you create your DPA, the first time that you run a backup targeted to the backup repository, Velero creates a backup repository whose secret is velero-repo-credentials , which contains either the default password or the one you replaced it with.
If you update the secret password after the first backup, the new password will not match the password in velero-repo-credentials , and therefore, Velero will not be able to connect with the older backups. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift 1 - aws resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 5 prefix: <prefix> 6 config: region: <region> profile: "default" credential: key: cloud name: cloud-credentials 7 snapshotLocations: 8 - name: default velero: provider: aws config: region: <region> 9 profile: "default" 1 The openshift plugin is mandatory. 2 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 3 Set this value to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that Restic pods run on each working node. In OADP version 1.2 and later, you can configure Restic for backups by adding spec.defaultVolumesToFsBackup: true to the Backup CR. In OADP version 1.1, add spec.defaultVolumesToRestic: true to the Backup CR. 4 Specify on which nodes Restic is available. By default, Restic runs on all nodes. 5 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 6 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 7 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. 8 Specify a snapshot location, unless you use CSI snapshots or Restic to back up PVs. 9 The snapshot location must be in the same region as the PVs. Click Create . Verify the installation by viewing the OADP resources: USD oc get all -n openshift-adp Example output 4.4.3.4.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.4.4. Configuring the OpenShift API for Data Protection with Microsoft Azure You install the OpenShift API for Data Protection (OADP) with Microsoft Azure by installing the OADP Operator. The Operator installs Velero 1.11 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the MTC Operator and are not available as a standalone Operator. You configure Azure for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . 
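Returning to the Data Protection Application created in section 4.4.3.4: after the DPA has been created and reconciled, a first backup can be requested with a Backup CR. This is a minimal sketch, not taken from the original; the names and namespaces are placeholders, and defaultVolumesToFsBackup applies to OADP 1.2 and later as noted in the DPA callouts (OADP 1.1 uses defaultVolumesToRestic instead):

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup_name>                  # placeholder
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <application_namespace>          # placeholder
  defaultVolumesToFsBackup: true       # OADP 1.2+; use defaultVolumesToRestic on OADP 1.1
  ttl: 720h0m0s                        # Velero default retention

Check progress with oc get backup -n openshift-adp <backup_name> -o jsonpath='{.status.phase}'; the phase moves from InProgress to Completed when the backup finishes.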
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.4.4.1. Configuring Microsoft Azure You configure a Microsoft Azure for the OpenShift API for Data Protection (OADP). Prerequisites You must have the Azure CLI installed. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location. Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Obtain the storage account access key: USD AZURE_STORAGE_ACCOUNT_ACCESS_KEY=`az storage account keys list \ --account-name USDAZURE_STORAGE_ACCOUNT_ID \ --query "[?keyName == 'key1'].value" -o tsv` Create a custom role that has the minimum required permissions: AZURE_ROLE=Velero az role definition create --role-definition '{ "Name": "'USDAZURE_ROLE'", "Description": "Velero related permissions to perform backups, restores and deletions", "Actions": [ "Microsoft.Compute/disks/read", "Microsoft.Compute/disks/write", "Microsoft.Compute/disks/endGetAccess/action", "Microsoft.Compute/disks/beginGetAccess/action", "Microsoft.Compute/snapshots/read", "Microsoft.Compute/snapshots/write", "Microsoft.Compute/snapshots/delete", "Microsoft.Storage/storageAccounts/listkeys/action", "Microsoft.Storage/storageAccounts/regeneratekey/action" ], "AssignableScopes": ["/subscriptions/'USDAZURE_SUBSCRIPTION_ID'"] }' Create a credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_STORAGE_ACCOUNT_ACCESS_KEY=USD{AZURE_STORAGE_ACCOUNT_ACCESS_KEY} 1 AZURE_CLOUD_NAME=AzurePublicCloud EOF 1 Mandatory. You cannot back up internal images if the credentials-velero file contains only the service principal credentials. You use the credentials-velero file to create a Secret object for Azure before you install the Data Protection Application. 4.4.4.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify S3-compatible object storage, such as Multicloud Object Gateway or MinIO, as a backup location. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. 
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use Restic, you do not need to specify a snapshot location because Restic backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.4.4.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials-azure . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.4.4.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-azure . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... 
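# Only the backup location below references the custom-name Secret.
# The snapshot location credentials come from the default Secret,
# cloud-credentials-azure, which is intentionally not listed in this CR.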
backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" name: default provider: azure 1 Backup location Secret with custom name. 4.4.4.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.4.4.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. 4.4.4.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base46-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.4.4.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-azure . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with the default name, cloud-credentials-azure , for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR. 
Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Note Velero creates a secret named velero-repo-credentials in the OADP namespace, which contains a default backup repository password. You can update the secret with your own password encoded as base64 before you run your first backup targeted to the backup repository. The value of the key to update is Data[repository-password] . After you create your DPA, the first time that you run a backup targeted to the backup repository, Velero creates a backup repository whose secret is velero-repo-credentials , which contains either the default password or the one you replaced it with. If you update the secret password after the first backup, the new password will not match the password in velero-repo-credentials , and therefore, Velero will not be able to connect with the older backups. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - azure - openshift 1 resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 5 storageAccount: <azure_storage_account_id> 6 subscriptionId: <azure_subscription_id> 7 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 8 provider: azure default: true objectStorage: bucket: <bucket_name> 9 prefix: <prefix> 10 snapshotLocations: 11 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" name: default provider: azure 1 The openshift plugin is mandatory. 2 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 3 Set this value to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that Restic pods run on each working node. In OADP version 1.2 and later, you can configure Restic for backups by adding spec.defaultVolumesToFsBackup: true to the Backup CR. In OADP version 1.1, add spec.defaultVolumesToRestic: true to the Backup CR. 4 Specify on which nodes Restic is available. By default, Restic runs on all nodes. 5 Specify the Azure resource group. 6 Specify the Azure storage account ID. 7 Specify the Azure subscription ID. 8 If you do not specify this value, the default name, cloud-credentials-azure , is used. If you specify a custom name, the custom name is used for the backup location. 9 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 10 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 11 You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs. Click Create . Verify the installation by viewing the OADP resources: USD oc get all -n openshift-adp Example output 4.4.4.4.1. 
Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.4.5. Configuring the OpenShift API for Data Protection with Google Cloud Platform You install the OpenShift API for Data Protection (OADP) with Google Cloud Platform (GCP) by installing the OADP Operator. The Operator installs Velero 1.11 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the MTC Operator and are not available as a standalone Operator. You configure GCP for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.4.5.1. Configuring Google Cloud Platform You configure Google Cloud Platform (GCP) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to create a Secret object for GCP before you install the Data Protection Application. 4.4.5.2. 
About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify S3-compatible object storage, such as Multicloud Object Gateway or MinIO, as a backup location. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use Restic, you do not need to specify a snapshot location because Restic backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.4.5.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials-gcp . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.4.5.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-gcp . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. 
Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 1 Backup location Secret with custom name. 4.4.5.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.4.5.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. 4.4.5.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base46-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.4.5.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. 
If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-gcp . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with the default name, cloud-credentials-gcp , for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Note Velero creates a secret named velero-repo-credentials in the OADP namespace, which contains a default backup repository password. You can update the secret with your own password encoded as base64 before you run your first backup targeted to the backup repository. The value of the key to update is Data[repository-password] . After you create your DPA, the first time that you run a backup targeted to the backup repository, Velero creates a backup repository whose secret is velero-repo-credentials , which contains either the default password or the one you replaced it with. If you update the secret password after the first backup, the new password will not match the password in velero-repo-credentials , and therefore, Velero will not be able to connect with the older backups. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - gcp - openshift 1 resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - velero: provider: gcp default: true credential: key: cloud name: cloud-credentials-gcp 5 objectStorage: bucket: <bucket_name> 6 prefix: <prefix> 7 snapshotLocations: 8 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 9 1 The openshift plugin is mandatory. 2 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 3 Set this value to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that Restic pods run on each working node. In OADP version 1.2 and later, you can configure Restic for backups by adding spec.defaultVolumesToFsBackup: true to the Backup CR. In OADP version 1.1, add spec.defaultVolumesToRestic: true to the Backup CR. 4 Specify on which nodes Restic is available. By default, Restic runs on all nodes. 5 If you do not specify this value, the default name, cloud-credentials-gcp , is used. If you specify a custom name, the custom name is used for the backup location. 6 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 7 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 8 Specify a snapshot location, unless you use CSI snapshots or Restic to back up PVs. 9 The snapshot location must be in the same region as the PVs. 
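One way to confirm the region to use for the snapshot location in callout 9 is to read the standard topology label from the cluster nodes. A minimal sketch, assuming the nodes carry the well-known topology.kubernetes.io/region label:
oc get nodes -L topology.kubernetes.io/region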
Click Create . Verify the installation by viewing the OADP resources: USD oc get all -n openshift-adp Example output 4.4.5.4.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.4.6. Configuring the OpenShift API for Data Protection with Multicloud Object Gateway You install the OpenShift API for Data Protection (OADP) with Multicloud Object Gateway (MCG) by installing the OADP Operator. The Operator installs Velero 1.11 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the MTC Operator and are not available as a standalone Operator. You configure Multicloud Object Gateway as a backup location. MCG is a component of OpenShift Data Foundation. You configure MCG as a backup location in the DataProtectionApplication custom resource (CR). Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You create a Secret for the backup location and then you install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks . 4.4.6.1. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Data Foundation. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object when you install the Data Protection Application. 4.4.6.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify S3-compatible object storage, such as Multicloud Object Gateway or MinIO, as a backup location. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. 
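For MCG specifically, the describe command referred to in the retrieval procedure above, together with the NooBaa admin Secret, exposes the S3 endpoint and keys. The following is a minimal sketch; it assumes OpenShift Data Foundation is deployed in the default openshift-storage namespace and that the noobaa-admin Secret holds the account keys:
oc describe noobaa -n openshift-storage
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d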
Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use Restic, you do not need to specify a snapshot location because Restic backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.4.6.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.4.6.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. 
Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: config: profile: "default" region: minio s3Url: <url> insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Backup location Secret with custom name. 4.4.6.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.4.6.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. 4.4.6.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base46-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.4.6.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. 
Secret with the default name, cloud-credentials , for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Note Velero creates a secret named velero-repo-credentials in the OADP namespace, which contains a default backup repository password. You can update the secret with your own password encoded as base64 before you run your first backup targeted to the backup repository. The value of the key to update is Data[repository-password] . After you create your DPA, the first time that you run a backup targeted to the backup repository, Velero creates a backup repository whose secret is velero-repo-credentials , which contains either the default password or the one you replaced it with. If you update the secret password after the first backup, the new password will not match the password in velero-repo-credentials , and therefore, Velero will not be able to connect with the older backups. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - aws - openshift 1 resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - velero: config: profile: "default" region: minio s3Url: <url> 5 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: cloud-credentials 6 objectStorage: bucket: <bucket_name> 7 prefix: <prefix> 8 1 The openshift plugin is mandatory. 2 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 3 Set this value to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that Restic pods run on each working node. In OADP version 1.2 and later, you can configure Restic for backups by adding spec.defaultVolumesToFsBackup: true to the Backup CR. In OADP version 1.1, add spec.defaultVolumesToRestic: true to the Backup CR. 4 Specify on which nodes Restic is available. By default, Restic runs on all nodes. 5 Specify the URL of the S3 endpoint. 6 If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. 7 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 8 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verify the installation by viewing the OADP resources: USD oc get all -n openshift-adp Example output 4.4.6.4.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. 
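A quick way to check this prerequisite before editing the CR is to confirm that the cluster exposes CSI drivers and at least one VolumeSnapshotClass. A minimal sketch, assuming the CSI snapshot CRDs are installed:
oc get csidrivers
oc get volumesnapshotclasses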
Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.4.7. Configuring the OpenShift API for Data Protection with OpenShift Data Foundation You install the OpenShift API for Data Protection (OADP) with OpenShift Data Foundation by installing the OADP Operator and configuring a backup location and a snapshot location. Then, you install the Data Protection Application. Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the MTC Operator and are not available as a standalone Operator. You can configure Multicloud Object Gateway or any S3-compatible object storage as a backup location. Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You create a Secret for the backup location and then you install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks . 4.4.7.1. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify S3-compatible object storage, such as Multicloud Object Gateway or MinIO, as a backup location. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use Restic, you do not need to specify a snapshot location because Restic backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. Additional resources Creating an Object Bucket Claim using the OpenShift Web Console . 4.4.7.1.1. 
Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.4.7.2. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.4.7.2.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. 4.4.7.2.1.1. Adjusting Ceph CPU and memory requirements based on collected data The following recommendations are based on performance observations made in the scale and performance lab. The changes are specifically related to Red Hat OpenShift Data Foundation (ODF). If working with ODF, consult the appropriate tuning guides for official recommendations. 4.4.7.2.1.1.1. CPU and memory requirements for configurations Backup and restore operations require a large number of CephFS PersistentVolumes (PVs). To avoid Ceph MDS pods restarting with an out-of-memory (OOM) error, the following configuration is suggested for the MDS pods: CPU: request changed to 3, max limit 3. Memory: request changed to 8 Gi, max limit 128 Gi. 4.4.7.2.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ...
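# The caCert value below must be the CA bundle as a single base64 string.
# One way to produce it on Linux, assuming the bundle is in ca-bundle.pem:
#   base64 -w0 ca-bundle.pem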
backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base46-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.4.7.3. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Note Velero creates a secret named velero-repo-credentials in the OADP namespace, which contains a default backup repository password. You can update the secret with your own password encoded as base64 before you run your first backup targeted to the backup repository. The value of the key to update is Data[repository-password] . After you create your DPA, the first time that you run a backup targeted to the backup repository, Velero creates a backup repository whose secret is velero-repo-credentials , which contains either the default password or the one you replaced it with. If you update the secret password after the first backup, the new password will not match the password in velero-repo-credentials , and therefore, Velero will not be able to connect with the older backups. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: Click Create . Verify the installation by viewing the OADP resources: USD oc get all -n openshift-adp Example output 4.4.7.3.1. Creating an Object Bucket Claim for disaster recovery on OpenShift Data Foundation If you use cluster storage for your Multicloud Object Gateway (MCG) bucket backupStorageLocation on OpenShift Data Foundation, create an Object Bucket Claim (OBC) using the OpenShift Web Console. Warning Failure to configure an Object Bucket Claim (OBC) might lead to backups not being available. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Procedure Create an Object Bucket Claim (OBC) using the OpenShift web console as described in Creating an Object Bucket Claim using the OpenShift Web Console . 4.4.7.3.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. 
Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.5. Uninstalling OADP 4.5.1. Uninstalling the OpenShift API for Data Protection You uninstall the OpenShift API for Data Protection (OADP) by deleting the OADP Operator. See Deleting Operators from a cluster for details. 4.6. OADP backing up 4.6.1. Backing up applications You back up applications by creating a Backup custom resource (CR). See Creating a Backup CR . The Backup CR creates backup files for Kubernetes resources and internal images, on S3 object storage, and snapshots for persistent volumes (PVs), if the cloud provider uses a native snapshot API or the Container Storage Interface (CSI) to create snapshots, such as OpenShift Data Foundation 4. For more information about CSI volume snapshots, see CSI volume snapshots . Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots . If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using Restic. See Backing up applications with Restic . Important The OpenShift API for Data Protection (OADP) does not support backing up volume snapshots that were created by other software. You can create backup hooks to run commands before or after the backup operation. See Creating backup hooks . You can schedule backups by creating a Schedule CR instead of a Backup CR. See Scheduling backups . 4.6.1.1. Known issues OpenShift Container Platform 4.14 enforces a pod security admission (PSA) policy that can hinder the readiness of pods during a Restic restore process. This issue has been resolved in the OADP 1.1.6 and OADP 1.2.2 releases, therefore it is recommended that users upgrade to these releases. Additional resources Installing Operators on clusters for administrators Installing Operators in namespaces for non-administrators 4.6.2. Creating a Backup CR You back up Kubernetes images, internal images, and persistent volumes (PVs) by creating a Backup custom resource (CR). Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. Backup location prerequisites: You must have S3 object storage configured for Velero. You must have a backup location configured in the DataProtectionApplication CR. Snapshot location prerequisites: Your cloud provider must have a native snapshot API or support Container Storage Interface (CSI) snapshots. For CSI snapshots, you must create a VolumeSnapshotClass CR to register the CSI driver. 
You must have a volume location configured in the DataProtectionApplication CR. Procedure Retrieve the backupStorageLocations CRs by entering the following command: USD oc get backupStorageLocations -n openshift-adp Example output NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m Create a Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s labelSelector: 5 matchLabels: app=<label_1> app=<label_2> app=<label_3> orLabelSelectors: 6 - matchLabels: app=<label_1> app=<label_2> app=<label_3> 1 Specify an array of namespaces to back up. 2 Optional: Specify an array of resources to include in the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. If unspecified, all resources are included. 3 Optional: Specify an array of resources to exclude from the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. 4 Specify the name of the backupStorageLocations CR. 5 Map of {key,value} pairs of backup resources that have all of the specified labels. 6 Map of {key,value} pairs of backup resources that have one or more of the specified labels. Verify that the status of the Backup CR is Completed : USD oc get backup -n openshift-adp <backup> -o jsonpath='{.status.phase}' 4.6.3. Backing up persistent volumes with CSI snapshots You back up persistent volumes with Container Storage Interface (CSI) snapshots by editing the VolumeSnapshotClass custom resource (CR) of the cloud storage before you create the Backup CR, see CSI volume snapshots . For more information see Creating a Backup CR . Prerequisites The cloud provider must support CSI snapshots. You must enable CSI in the DataProtectionApplication CR. Procedure Add the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR: apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: "true" driver: <csi_driver> deletionPolicy: Retain You can now create a Backup CR. 4.6.4. Backing up applications with Restic If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using Restic. Note Restic is installed by the OADP Operator by default. Restic integration with OADP provides a solution for backing up and restoring almost any type of Kubernetes volume. This integration is an addition to OADP's capabilities, not a replacement for existing functionality. You back up Kubernetes resources, internal images, and persistent volumes with Restic by editing the Backup custom resource (CR). You do not need to specify a snapshot location in the DataProtectionApplication CR. Important Restic does not support backing up hostPath volumes. For more information, see additional Restic limitations . Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. You must not disable the default Restic installation by setting spec.configuration.restic.enable to false in the DataProtectionApplication CR. The DataProtectionApplication CR must be in a Ready state. 
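Because the default Restic installation must not be disabled, it can be useful to confirm that the Restic daemon set and its pods are running before creating the Backup CR. A minimal sketch, assuming the default openshift-adp namespace (in OADP 1.3 the daemon set is named node-agent rather than restic):
oc get daemonset -n openshift-adp
oc get pods -n openshift-adp | grep -E 'restic|node-agent'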
Procedure Create the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToRestic: true 1 ... 1 Add defaultVolumesToRestic: true to the spec block. 4.6.5. Creating backup hooks When performing a backup, it is possible to specify one or more commands to execute in a container within a pod, based on the pod being backed up. The commands can be configured to run before any custom action processing ( Pre hooks) or after all custom actions have been completed and any additional items specified by the custom action have been backed up. Post hooks run after the backup. You create backup hooks to run commands in a container in a pod by editing the Backup custom resource (CR). Procedure Add a hook to the spec.hooks block of the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11 ... 1 Optional: You can specify namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. 2 Optional: You can specify namespaces to which the hook does not apply. 3 Currently, pods are the only supported resource that hooks can apply to. 4 Optional: You can specify resources to which the hook does not apply. 5 Optional: This hook only applies to objects matching the label selector. If this value is not specified, the hook applies to all objects. 6 Array of hooks to run before the backup. 7 Optional: If the container is not specified, the command runs in the first container in the pod. 8 This is the command that the hook runs in the container. 9 Allowed values for error handling are Fail and Continue . The default is Fail . 10 Optional: How long to wait for the commands to run. The default is 30s . 11 This block defines an array of hooks to run after the backup, with the same parameters as the pre-backup hooks. 4.6.6. Scheduling backups using Schedule CR The schedule operation allows you to create a backup of your data at a specified time, defined by a Cron expression. You schedule backups by creating a Schedule custom resource (CR) instead of a Backup CR. Warning Leave enough time in your backup schedule for a backup to finish before another backup is created. For example, if a backup of a namespace typically takes 10 minutes, do not schedule backups more frequently than every 15 minutes. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state.
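The Cron expression in the schedule field uses the standard five-field format: minute, hour, day of month, month, and day of week. A few illustrative values:

0 7 * * *     every day at 7:00
*/30 * * * *  every 30 minutes
0 3 * * 1     every Monday at 3:00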
Procedure Retrieve the backupStorageLocations CRs: USD oc get backupStorageLocations -n openshift-adp Example output NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m Create a Schedule CR, as in the following example: USD cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToRestic: true 4 ttl: 720h0m0s EOF 1 cron expression to schedule the backup, for example, 0 7 * * * to perform a backup every day at 7:00. 2 Array of namespaces to back up. 3 Name of the backupStorageLocations CR. 4 Optional: Add the defaultVolumesToRestic: true key-value pair if you are backing up volumes with Restic. Verify that the status of the Schedule CR is Completed after the scheduled backup runs: USD oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}' 4.6.7. Deleting backups You can remove backup files by deleting the Backup custom resource (CR). Warning After you delete the Backup CR and the associated object storage data, you cannot recover the deleted data. Prerequisites You created a Backup CR. You know the name of the Backup CR and the namespace that contains it. You downloaded the Velero CLI tool. You can access the Velero binary in your cluster. Procedure Choose one of the following actions to delete the Backup CR: To delete the Backup CR and keep the associated object storage data, issue the following command: USD oc delete backup <backup_CR_name> -n <velero_namespace> To delete the Backup CR and delete the associated object storage data, issue the following command: USD velero backup delete <backup_CR_name> -n <velero_namespace> Where: <backup_CR_name> Specifies the name of the Backup custom resource. <velero_namespace> Specifies the namespace that contains the Backup custom resource. 4.6.8. About Kopia Kopia is a fast and secure open-source backup and restore tool that allows you to create encrypted snapshots of your data and save the snapshots to remote or cloud storage of your choice. Kopia supports network and local storage locations, and many cloud or remote storage locations, including: Amazon S3 and any cloud storage that is compatible with S3 Azure Blob Storage Google Cloud Storage Platform Kopia uses content-addressable storage for snapshots: Each snapshot is always incremental. This means that all data is uploaded once to the repository, based on file content. A file is only uploaded to the repository again if it is modified. Multiple copies of the same file are stored once, meaning deduplication. After moving or renaming large files, Kopia can recognize that they have the same content and does not upload them again. 4.6.8.1. OADP integration with Kopia OADP 1.3 supports Kopia as the backup mechanism for pod volume backup in addition to Restic. You must choose one or the other at installation by setting the uploaderType field in the DataProtectionApplication custom resource (CR). The possible values are restic or kopia . If you do not specify an uploaderType , OADP 1.3 defaults to using Kopia as the backup mechanism. The data is written to and read from a unified repository. DataProtectionApplication configuration for Kopia apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia # ... 4.7. OADP restoring 4.7.1. 
Restoring applications You restore application backups by creating a Restore custom resource (CR). See Creating a Restore CR . You can create restore hooks to run commands in a container in a pod while restoring your application by editing the Restore CR. See Creating restore hooks . 4.7.1.1. Creating a Restore CR You restore a Backup custom resource (CR) by creating a Restore CR. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. You must have a Velero Backup CR. Adjust the requested size so the persistent volume (PV) capacity matches the requested size at backup time. Procedure Create a Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3 1 Name of the Backup CR. 2 Optional: Specify an array of resources to include in the restore process. Resources might be shortcuts (for example, po for pods ) or fully-qualified. If unspecified, all resources are included. 3 Optional: You can set the restorePVs parameter to false to turn off the restore of PersistentVolumes from VolumeSnapshot objects of Container Storage Interface (CSI) snapshots, or from native snapshots when a VolumeSnapshotLocation is configured. Verify that the status of the Restore CR is Completed by entering the following command: USD oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}' Verify that the backup resources have been restored by entering the following command: USD oc get all -n <namespace> 1 1 Namespace that you backed up. If you use Restic to restore DeploymentConfig objects or if you use post-restore hooks, run the dc-restic-post-restore.sh cleanup script by entering the following command: USD bash dc-restic-post-restore.sh <restore-name> Note During the restore process, the OADP Velero plug-ins scale down the DeploymentConfig objects and restore the pods as standalone pods to prevent the cluster from deleting the restored DeploymentConfig pods immediately on restore and to allow Restic and post-restore hooks to complete their actions on the restored pods. The cleanup script removes these disconnected pods and scales any DeploymentConfig objects back up to the appropriate number of replicas. Example 4.1.
dc-restic-post-restore.sh cleanup script #!/bin/bash set -e # if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD="sha256sum" else CHECKSUM_CMD="shasum -a 256" fi label_name () { if [ "USD{#1}" -le "63" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo "USD{1:0:57}USD{sha:0:6}" } OADP_NAMESPACE=USD{OADP_NAMESPACE:=openshift-adp} if [[ USD# -ne 1 ]]; then echo "usage: USD{BASH_SOURCE} restore-name" exit 1 fi echo using OADP Namespace USDOADP_NAMESPACE echo restore: USD1 label=USD(label_name USD1) echo label: USDlabel echo Deleting disconnected restore pods oc delete pods -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{","}{.metadata.name}{","}{.metadata.annotations.oadp\.openshift\.io/original-replicas}{","}{.metadata.annotations.oadp\.openshift\.io/original-paused}{"\n"}') do IFS=',' read -ra dc_arr <<< "USDdc" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done 4.7.1.2. Creating restore hooks You create restore hooks to run commands in a container in a pod while restoring your application by editing the Restore custom resource (CR). You can create two types of restore hooks: An init hook adds an init container to a pod to perform setup tasks before the application container starts. If you restore a Restic backup, the restic-wait init container is added before the restore hook init container. An exec hook runs commands or scripts in a container of a restored pod. Procedure Add a hook to the spec.hooks block of the Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - "psql < /backup/backup.sql" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9 1 Optional: Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. 2 Currently, pods are the only supported resource that hooks can apply to. 3 Optional: This hook only applies to objects matching the label selector. 4 Optional: Timeout specifies the maximum amount of time Velero waits for initContainers to complete. 5 Optional: If the container is not specified, the command runs in the first container in the pod. 6 This is the entrypoint for the init container being added. 7 Optional: How long to wait for a container to become ready. This should be long enough for the container to start and for any preceding hooks in the same container to complete. If not set, the restore process waits indefinitely. 8 Optional: How long to wait for the commands to run. The default is 30s . 9 Allowed values for error handling are Fail and Continue : Continue : Only command failures are logged. 
Fail : No more restore hooks run in any container in any pod. The status of the Restore CR will be PartiallyFailed . 4.8. OADP Data Mover 4.8.1. OADP Data Mover Introduction OADP Data Mover allows you to restore stateful applications from the object store if a failure, accidental deletion, or corruption of the cluster occurs. Note The OADP 1.1 Data Mover is a Technology Preview feature. The OADP 1.2 Data Mover has significantly improved features and performance, but is still a Technology Preview feature. Important The OADP Data Mover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use OADP Data Mover to back up Container Storage Interface (CSI) volume snapshots to a remote object store. See Using Data Mover for CSI snapshots . You can use OADP 1.2 Data Mover to back up and restore application data for clusters that use CephFS, CephRBD, or both. See Using OADP 1.2 Data Mover with Ceph storage . If you are using OADP 1.1 Data Mover, you must perform a data cleanup after you perform a backup. See Cleaning up after a backup using OADP 1.1 Data Mover . Note Post-migration hooks are not likely to work well with the OADP 1.3 Data Mover. The OADP 1.1 and OADP 1.2 Data Movers use synchronous processes to back up and restore application data. Because the processes are synchronous, users can be sure that any post-restore hooks start only after the persistent volumes (PVs) of the related pods are released by the persistent volume claim (PVC) of the Data Mover. However, the OADP 1.3 Data Mover uses an asynchronous process. As a result of this difference in sequencing, a post-restore hook might be called before the related PVs are released by the PVC of the Data Mover. If this happens, the pod remains in Pending status and cannot run the hook. The hook attempt might time out before the pod is released, leading to a PartiallyFailed restore operation. 4.8.1.1. OADP Data Mover prerequisites You have a stateful application running in a separate namespace. You have installed the OADP Operator by using Operator Lifecycle Manager (OLM). You have created an appropriate VolumeSnapshotClass and StorageClass . You have installed the VolSync Operator by using OLM. 4.8.2. Using Data Mover for CSI snapshots The OADP Data Mover enables customers to back up Container Storage Interface (CSI) volume snapshots to a remote object store. When Data Mover is enabled, you can restore stateful applications by using CSI volume snapshots pulled from the object store if a failure, accidental deletion, or corruption of the cluster occurs. The Data Mover solution uses the Restic option of VolSync. Data Mover supports backup and restore of CSI volume snapshots only. In OADP 1.2 Data Mover, VolumeSnapshotBackups (VSBs) and VolumeSnapshotRestores (VSRs) are queued by the VolumeSnapshotMover (VSM). You can improve the performance of the VSM by specifying the number of VSBs and VSRs that can be InProgress concurrently. After all asynchronous plugin operations are complete, the backup is marked as complete. Note The OADP 1.1 Data Mover is a Technology Preview feature.
The OADP 1.2 Data Mover has significantly improved features and performances, but is still a Technology Preview feature. Important The OADP Data Mover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Red Hat recommends that customers who use OADP 1.2 Data Mover in order to back up and restore ODF CephFS volumes, upgrade or install OpenShift Container Platform version 4.12 or later for improved performance. OADP Data Mover can leverage CephFS shallow volumes in OpenShift Container Platform version 4.12 or later, which based on our testing, can improve the performance of backup times. CephFS ROX details Prerequisites You have verified that the StorageClass and VolumeSnapshotClass custom resources (CRs) support CSI. You have verified that only one VolumeSnapshotClass CR has the annotation snapshot.storage.kubernetes.io/is-default-class: "true" . Note In OpenShift Container Platform version 4.12 or later, verify that this is the only default VolumeSnapshotClass . You have verified that deletionPolicy of the VolumeSnapshotClass CR is set to Retain . You have verified that only one StorageClass CR has the annotation storageclass.kubernetes.io/is-default-class: "true" . You have included the label velero.io/csi-volumesnapshot-class: "true" in your VolumeSnapshotClass CR. You have verified that the OADP namespace has the annotation oc annotate --overwrite namespace/openshift-adp volsync.backube/privileged-movers="true" . Note In OADP 1.1 the above setting is mandatory. In OADP 1.2 the privileged-movers setting is not required in most scenarios. The restoring container permissions should be adequate for the Volsync copy. In some user scenarios, there may be permission errors that the privileged-mover = true setting should resolve. You have installed the VolSync Operator by using the Operator Lifecycle Manager (OLM). Note The VolSync Operator is required for using OADP Data Mover. You have installed the OADP operator by using OLM. Procedure Configure a Restic secret by creating a .yaml file as following: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-adp type: Opaque stringData: RESTIC_PASSWORD: <secure_restic_password> Note By default, the Operator looks for a secret named dm-credential . If you are using a different name, you need to specify the name through a Data Protection Application (DPA) CR using dpa.spec.features.dataMover.credentialName . Create a DPA CR similar to the following example. The default plugins include CSI. 
Example Data Protection Application (DPA) CR apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample namespace: openshift-adp spec: backupLocations: - velero: config: profile: default region: us-east-1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: <bucket-prefix> provider: aws configuration: restic: enable: <true_or_false> velero: itemOperationSyncFrequency: "10s" defaultPlugins: - openshift - aws - csi - vsm 1 features: dataMover: credentialName: restic-secret enable: true maxConcurrentBackupVolumes: "3" 2 maxConcurrentRestoreVolumes: "3" 3 pruneInterval: "14" 4 volumeOptions: 5 sourceVolumeOptions: accessMode: ReadOnlyMany cacheAccessMode: ReadWriteOnce cacheCapacity: 2Gi destinationVolumeOptions: storageClass: other-storageclass-name cacheAccessMode: ReadWriteMany snapshotLocations: - velero: config: profile: default region: us-west-2 provider: aws 1 OADP 1.2 only. 2 OADP 1.2 only. Optional: Specify the upper limit of the number of snapshots allowed to be queued for backup. The default value is 10. 3 OADP 1.2 only. Optional: Specify the upper limit of the number of snapshots allowed to be queued for restore. The default value is 10. 4 OADP 1.2 only. Optional: Specify the number of days, between running Restic pruning on the repository. The prune operation repacks the data to free space, but it can also generate significant I/O traffic as a part of the process. Setting this option allows a trade-off between storage consumption, from no longer referenced data, and access costs. 5 OADP 1.2 only. Optional: Specify VolumeSync volume options for backup and restore. The OADP Operator installs two custom resource definitions (CRDs), VolumeSnapshotBackup and VolumeSnapshotRestore . Example VolumeSnapshotBackup CRD apiVersion: datamover.oadp.openshift.io/v1alpha1 kind: VolumeSnapshotBackup metadata: name: <vsb_name> namespace: <namespace_name> 1 spec: volumeSnapshotContent: name: <snapcontent_name> protectedNamespace: <adp_namespace> 2 resticSecretRef: name: <restic_secret_name> 1 Specify the namespace where the volume snapshot exists. 2 Specify the namespace where the OADP Operator is installed. The default is openshift-adp . Example VolumeSnapshotRestore CRD apiVersion: datamover.oadp.openshift.io/v1alpha1 kind: VolumeSnapshotRestore metadata: name: <vsr_name> namespace: <namespace_name> 1 spec: protectedNamespace: <protected_ns> 2 resticSecretRef: name: <restic_secret_name> volumeSnapshotMoverBackupRef: sourcePVCData: name: <source_pvc_name> size: <source_pvc_size> resticrepository: <your_restic_repo> volumeSnapshotClassName: <vsclass_name> 1 Specify the namespace where the volume snapshot exists. 2 Specify the namespace where the OADP Operator is installed. The default is openshift-adp . You can back up a volume snapshot by performing the following steps: Create a backup CR: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> namespace: <protected_ns> 1 spec: includedNamespaces: - <app_ns> 2 storageLocation: velero-sample-1 1 Specify the namespace where the Operator is installed. The default namespace is openshift-adp . 2 Specify the application namespace or namespaces to be backed up. Wait up to 10 minutes and check whether the VolumeSnapshotBackup CR status is Completed by entering the following commands: USD oc get vsb -n <app_ns> USD oc get vsb <vsb_name> -n <app_ns> -o jsonpath="{.status.phase}" A snapshot is created in the object store was configured in the DPA. 
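If the backup location is an AWS S3 bucket, as in the DPA example above, you can also list the moved snapshot data directly in the bucket. This is a sketch that assumes the AWS CLI is configured for that bucket and uses the /<protected_namespace> prefix described in the cleanup section later in this document:

$ aws s3 ls s3://<bucket_name>/<protected_namespace>/ --recursive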
Note If the status of the VolumeSnapshotBackup CR becomes Failed , refer to the Velero logs for troubleshooting. You can restore a volume snapshot by performing the following steps: Delete the application namespace and the VolumeSnapshotContent that was created by the Velero CSI plugin. Create a Restore CR and set restorePVs to true . Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> namespace: <protected_ns> spec: backupName: <previous_backup_name> restorePVs: true Wait up to 10 minutes and check whether the VolumeSnapshotRestore CR status is Completed by entering the following command: USD oc get vsr -n <app_ns> USD oc get vsr <vsr_name> -n <app_ns> -o jsonpath="{.status.phase}" Check whether your application data and resources have been restored. Note If the status of the VolumeSnapshotRestore CR becomes 'Failed', refer to the Velero logs for troubleshooting. 4.8.3. Using OADP 1.2 Data Mover with Ceph storage You can use OADP 1.2 Data Mover to backup and restore application data for clusters that use CephFS, CephRBD, or both. OADP 1.2 Data Mover leverages Ceph features that support large-scale environments. One of these is the shallow copy method, which is available for OpenShift Container Platform 4.12 and later. This feature supports backing up and restoring StorageClass and AccessMode resources other than what is found on the source persistent volume claim (PVC). Important The CephFS shallow copy feature is a back up feature. It is not part of restore operations. 4.8.3.1. Prerequisites for using OADP 1.2 Data Mover with Ceph storage The following prerequisites apply to all back up and restore operations of data using OpenShift API for Data Protection (OADP) 1.2 Data Mover in a cluster that uses Ceph storage: You have installed OpenShift Container Platform 4.12 or later. You have installed the OADP Operator. You have created a secret cloud-credentials in the namespace openshift-adp. You have installed Red Hat OpenShift Data Foundation. You have installed the latest VolSync Operator by using Operator Lifecycle Manager. 4.8.3.2. Defining custom resources for use with OADP 1.2 Data Mover When you install Red Hat OpenShift Data Foundation, it automatically creates default CephFS and a CephRBD StorageClass and VolumeSnapshotClass custom resources (CRs). You must define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover. After you define the CRs, you must make several other changes to your environment before you can perform your back up and restore operations. 4.8.3.2.1. Defining CephFS custom resources for use with OADP 1.2 Data Mover When you install Red Hat OpenShift Data Foundation, it automatically creates a default CephFS StorageClass custom resource (CR) and a default CephFS VolumeSnapshotClass CR. You can define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover. Procedure Define the VolumeSnapshotClass CR as in the following example: Example VolumeSnapshotClass CR apiVersion: snapshot.storage.k8s.io/v1 deletionPolicy: Retain 1 driver: openshift-storage.cephfs.csi.ceph.com kind: VolumeSnapshotClass metadata: annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 labels: velero.io/csi-volumesnapshot-class: true 3 name: ocs-storagecluster-cephfsplugin-snapclass parameters: clusterID: openshift-storage csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage 1 Must be set to Retain . 
2 Must be set to true . 3 Must be set to true . Define the StorageClass CR as in the following example: Example StorageClass CR kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: ocs-storagecluster-cephfs annotations: description: Provides RWO and RWX Filesystem volumes storageclass.kubernetes.io/is-default-class: true 1 provisioner: openshift-storage.cephfs.csi.ceph.com parameters: clusterID: openshift-storage csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage fsName: ocs-storagecluster-cephfilesystem reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate 1 Must be set to true . 4.8.3.2.2. Defining CephRBD custom resources for use with OADP 1.2 Data Mover When you install Red Hat OpenShift Data Foundation, it automatically creates a default CephRBD StorageClass custom resource (CR) and a default CephRBD VolumeSnapshotClass CR. You can define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover. Procedure Define the VolumeSnapshotClass CR as in the following example: Example VolumeSnapshotClass CR apiVersion: snapshot.storage.k8s.io/v1 deletionPolicy: Retain 1 driver: openshift-storage.rbd.csi.ceph.com kind: VolumeSnapshotClass metadata: labels: velero.io/csi-volumesnapshot-class: true 2 name: ocs-storagecluster-rbdplugin-snapclass parameters: clusterID: openshift-storage csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage 1 Must be set to Retain . 2 Must be set to true . Define the StorageClass CR as in the following example: Example StorageClass CR kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: ocs-storagecluster-ceph-rbd annotations: description: 'Provides RWO Filesystem volumes, and RWO and RWX Block volumes' provisioner: openshift-storage.rbd.csi.ceph.com parameters: csi.storage.k8s.io/fstype: ext4 csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner imageFormat: '2' clusterID: openshift-storage imageFeatures: layering csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage pool: ocs-storagecluster-cephblockpool csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate 4.8.3.2.3. 
Defining additional custom resources for use with OADP 1.2 Data Mover After you redefine the default StorageClass and CephRBD VolumeSnapshotClass custom resources (CRs), you must create the following CRs: A CephFS StorageClass CR defined to use the shallow copy feature A Restic Secret CR Procedure Create a CephFS StorageClass CR and set the backingSnapshot parameter set to true as in the following example: Example CephFS StorageClass CR with backingSnapshot set to true kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: ocs-storagecluster-cephfs-shallow annotations: description: Provides RWO and RWX Filesystem volumes storageclass.kubernetes.io/is-default-class: false provisioner: openshift-storage.cephfs.csi.ceph.com parameters: csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage backingSnapshot: true 1 csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate 1 Must be set to true . Important Ensure that the CephFS VolumeSnapshotClass and StorageClass CRs have the same value for provisioner . Configure a Restic Secret CR as in the following example: Example Restic Secret CR apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: <namespace> type: Opaque stringData: RESTIC_PASSWORD: <restic_password> 4.8.3.3. Backing up and restoring data using OADP 1.2 Data Mover and CephFS storage You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using CephFS storage by enabling the shallow copy feature of CephFS. Prerequisites A stateful application is running in a separate namespace with persistent volume claims (PVCs) using CephFS as the provisioner. The StorageClass and VolumeSnapshotClass custom resources (CRs) are defined for CephFS and OADP 1.2 Data Mover. There is a secret cloud-credentials in the openshift-adp namespace. 4.8.3.3.1. Creating a DPA for use with CephFS storage You must create a Data Protection Application (DPA) CR before you use the OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using CephFS storage. 
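Before you create the DPA, you can optionally confirm that the resources defined in the previous sections exist. A short sketch, assuming the CephFS StorageClass names and the Restic Secret shown above:

$ oc get storageclass ocs-storagecluster-cephfs ocs-storagecluster-cephfs-shallow
$ oc get secret <secret_name> -n openshift-adp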
Procedure Verify that the deletionPolicy field of the VolumeSnapshotClass CR is set to Retain by running the following command: USD oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"Retention Policy: "}{.deletionPolicy}{"\n"}{end}' Verify that the labels of the VolumeSnapshotClass CR are set to true by running the following command: USD oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"labels: "}{.metadata.labels}{"\n"}{end}' Verify that the storageclass.kubernetes.io/is-default-class annotation of the StorageClass CR is set to true by running the following command: USD oc get storageClass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"annotations: "}{.metadata.annotations}{"\n"}{end}' Create a Data Protection Application (DPA) CR similar to the following example: Example DPA CR apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample namespace: openshift-adp spec: backupLocations: - velero: config: profile: default region: us-east-1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <my_bucket> prefix: velero provider: aws configuration: restic: enable: false 1 velero: defaultPlugins: - openshift - aws - csi - vsm features: dataMover: credentialName: <restic_secret_name> 2 enable: true 3 volumeOptionsForStorageClasses: ocs-storagecluster-cephfs: sourceVolumeOptions: accessMode: ReadOnlyMany cacheAccessMode: ReadWriteMany cacheStorageClassName: ocs-storagecluster-cephfs storageClassName: ocs-storagecluster-cephfs-shallow 1 There is no default value for the enable field. Valid values are true or false . 2 Use the Restic Secret that you created when you prepared your environment for working with OADP 1.2 Data Mover and Ceph. If you do not use your Restic Secret , the CR uses the default value dm-credential for this parameter. 3 There is no default value for the enable field. Valid values are true or false . 4.8.3.3.2. Backing up data using OADP 1.2 Data Mover and CephFS storage You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up data using CephFS storage by enabling the shallow copy feature of CephFS storage. Procedure Create a Backup CR as in the following example: Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> namespace: <protected_ns> spec: includedNamespaces: - <app_ns> storageLocation: velero-sample-1 Monitor the progress of the VolumeSnapshotBackup CRs by completing the following steps: To check the progress of all the VolumeSnapshotBackup CRs, run the following command: USD oc get vsb -n <app_ns> To check the progress of a specific VolumeSnapshotBackup CR, run the following command: USD oc get vsb <vsb_name> -n <app_ns> -ojsonpath="{.status.phase}` Wait several minutes until the VolumeSnapshotBackup CR has the status Completed . Verify that there is at least one snapshot in the object store that is given in the Restic Secret . You can check for this snapshot in your targeted BackupStorageLocation storage provider that has a prefix of /<OADP_namespace> . 4.8.3.3.3. Restoring data using OADP 1.2 Data Mover and CephFS storage You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to restore data using CephFS storage if the shallow copy feature of CephFS storage was enabled for the back up procedure. The shallow copy feature is not used in the restore procedure. 
Procedure Delete the application namespace by running the following command: USD oc delete vsb -n <app_namespace> --all Delete any VolumeSnapshotContent CRs that were created during backup by running the following command: USD oc delete volumesnapshotcontent --all Create a Restore CR as in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> namespace: <protected_ns> spec: backupName: <previous_backup_name> Monitor the progress of the VolumeSnapshotRestore CRs by doing the following: To check the progress of all the VolumeSnapshotRestore CRs, run the following command: USD oc get vsr -n <app_ns> To check the progress of a specific VolumeSnapshotRestore CR, run the following command: USD oc get vsr <vsr_name> -n <app_ns> -ojsonpath="{.status.phase} Verify that your application data has been restored by running the following command: USD oc get route <route_name> -n <app_ns> -ojsonpath="{.spec.host}" 4.8.3.4. Backing up and restoring data using OADP 1.2 Data Mover and split volumes (CephFS and Ceph RBD) You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data in an environment that has split volumes , that is, an environment that uses both CephFS and CephRBD. Prerequisites A stateful application is running in a separate namespace with persistent volume claims (PVCs) using CephFS as the provisioner. The StorageClass and VolumeSnapshotClass custom resources (CRs) are defined for CephFS and OADP 1.2 Data Mover. There is a secret cloud-credentials in the openshift-adp namespace. 4.8.3.4.1. Creating a DPA for use with split volumes You must create a Data Protection Application (DPA) CR before you use the OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using split volumes. Procedure Create a Data Protection Application (DPA) CR as in the following example: Example DPA CR for environment with split volumes apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample namespace: openshift-adp spec: backupLocations: - velero: config: profile: default region: us-east-1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <my-bucket> prefix: velero provider: aws configuration: restic: enable: false velero: defaultPlugins: - openshift - aws - csi - vsm features: dataMover: credentialName: <restic_secret_name> 1 enable: true volumeOptionsForStorageClasses: 2 ocs-storagecluster-cephfs: sourceVolumeOptions: accessMode: ReadOnlyMany cacheAccessMode: ReadWriteMany cacheStorageClassName: ocs-storagecluster-cephfs storageClassName: ocs-storagecluster-cephfs-shallow ocs-storagecluster-ceph-rbd: sourceVolumeOptions: storageClassName: ocs-storagecluster-ceph-rbd cacheStorageClassName: ocs-storagecluster-ceph-rbd destinationVolumeOptions: storageClassName: ocs-storagecluster-ceph-rbd cacheStorageClassName: ocs-storagecluster-ceph-rbd 1 Use the Restic Secret that you created when you prepared your environment for working with OADP 1.2 Data Mover and Ceph. If you do not, then the CR will use the default value dm-credential for this parameter. 2 A different set of VolumeOptionsForStorageClass labels can be defined for each storageClass volume, thus allowing a backup to volumes with different providers. 4.8.3.4.2. Backing up data using OADP 1.2 Data Mover and split volumes You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up data in an environment that has split volumes. 
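Before you run the backup, you can check that the DPA from the previous section reconciled successfully. This sketch assumes the DPA name velero-sample and the usual status.conditions layout, which can vary between OADP versions:

$ oc get dataprotectionapplication velero-sample -n openshift-adp -o jsonpath='{.status.conditions[0].type}={.status.conditions[0].status}'

The expected output is Reconciled=True.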
Procedure Create a Backup CR as in the following example: Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> namespace: <protected_ns> spec: includedNamespaces: - <app_ns> storageLocation: velero-sample-1 Monitor the progress of the VolumeSnapshotBackup CRs by completing the following steps: To check the progress of all the VolumeSnapshotBackup CRs, run the following command: USD oc get vsb -n <app_ns> To check the progress of a specific VolumeSnapshotBackup CR, run the following command: USD oc get vsb <vsb_name> -n <app_ns> -ojsonpath="{.status.phase}` Wait several minutes until the VolumeSnapshotBackup CR has the status Completed . Verify that there is at least one snapshot in the object store that is given in the Restic Secret . You can check for this snapshot in your targeted BackupStorageLocation storage provider that has a prefix of /<OADP_namespace> . 4.8.3.4.3. Restoring data using OADP 1.2 Data Mover and split volumes You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to restore data in an environment that has split volumes, if the shallow copy feature of CephFS storage was enabled for the back up procedure. The shallow copy feature is not used in the restore procedure. Procedure Delete the application namespace by running the following command: USD oc delete vsb -n <app_namespace> --all Delete any VolumeSnapshotContent CRs that were created during backup by running the following command: USD oc delete volumesnapshotcontent --all Create a Restore CR as in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> namespace: <protected_ns> spec: backupName: <previous_backup_name> Monitor the progress of the VolumeSnapshotRestore CRs by doing the following: To check the progress of all the VolumeSnapshotRestore CRs, run the following command: USD oc get vsr -n <app_ns> To check the progress of a specific VolumeSnapshotRestore CR, run the following command: USD oc get vsr <vsr_name> -n <app_ns> -ojsonpath="{.status.phase} Verify that your application data has been restored by running the following command: USD oc get route <route_name> -n <app_ns> -ojsonpath="{.spec.host}" 4.8.4. Cleaning up after a backup using OADP 1.1 Data Mover For OADP 1.1 Data Mover, you must perform a data cleanup after you perform a backup. The cleanup consists of deleting the following resources: Snapshots in a bucket Cluster resources Volume snapshot backups (VSBs) after a backup procedure that is either run by a schedule or is run repetitively 4.8.4.1. Deleting snapshots in a bucket Data Mover might leave one or more snapshots in a bucket after a backup. You can either delete all the snapshots or delete individual snapshots. Procedure To delete all snapshots in your bucket, delete the /<protected_namespace> folder that is specified in the Data Protection Application (DPA) .spec.backupLocation.objectStorage.bucket resource. To delete an individual snapshot: Browse to the /<protected_namespace> folder that is specified in the DPA .spec.backupLocation.objectStorage.bucket resource. Delete the appropriate folders that are prefixed with /<volumeSnapshotContent name>-pvc where <VolumeSnapshotContent_name> is the VolumeSnapshotContent created by Data Mover per PVC. 4.8.4.2. Deleting cluster resources OADP 1.1 Data Mover might leave cluster resources whether or not it successfully backs up your container storage interface (CSI) volume snapshots to a remote object store. 4.8.4.2.1. 
Deleting cluster resources following a successful backup and restore that used Data Mover You can delete any VolumeSnapshotBackup or VolumeSnapshotRestore CRs that remain in your application namespace after a successful backup and restore where you used Data Mover. Procedure Delete cluster resources that remain on the application namespace, the namespace with the application PVCs to backup and restore, after a backup where you use Data Mover: USD oc delete vsb -n <app_namespace> --all Delete cluster resources that remain after a restore where you use Data Mover: USD oc delete vsr -n <app_namespace> --all If needed, delete any VolumeSnapshotContent resources that remain after a backup and restore where you use Data Mover: USD oc delete volumesnapshotcontent --all 4.8.4.2.2. Deleting cluster resources following a partially successful or a failed backup and restore that used Data Mover If your backup and restore operation that uses Data Mover either fails or only partially succeeds, you must clean up any VolumeSnapshotBackup (VSB) or VolumeSnapshotRestore custom resource definitions (CRDs) that exist in the application namespace, and clean up any extra resources created by these controllers. Procedure Clean up cluster resources that remain after a backup operation where you used Data Mover by entering the following commands: Delete VSB CRDs on the application namespace, the namespace with the application PVCs to backup and restore: USD oc delete vsb -n <app_namespace> --all Delete VolumeSnapshot CRs: USD oc delete volumesnapshot -A --all Delete VolumeSnapshotContent CRs: USD oc delete volumesnapshotcontent --all Delete any PVCs on the protected namespace, the namespace the Operator is installed on. USD oc delete pvc -n <protected_namespace> --all Delete any ReplicationSource resources on the namespace. USD oc delete replicationsource -n <protected_namespace> --all Clean up cluster resources that remain after a restore operation using Data Mover by entering the following commands: Delete VSR CRDs: USD oc delete vsr -n <app-ns> --all Delete VolumeSnapshot CRs: USD oc delete volumesnapshot -A --all Delete VolumeSnapshotContent CRs: USD oc delete volumesnapshotcontent --all Delete any ReplicationDestination resources on the namespace. USD oc delete replicationdestination -n <protected_namespace> --all 4.9. OADP 1.3 Data Mover 4.9.1. About the OADP 1.3 Data Mover OADP 1.3 includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and write to the unified repository. OADP supports CSI snapshots on the following: Red Hat OpenShift Data Foundation Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API Important The OADP built-in Data Mover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 4.9.1.1. Enabling the built-in Data Mover To enable the built-in Data Mover, you must include the CSI plugin and enable the node agent in the DataProtectionApplication custom resource (CR). The node agent is a Kubernetes daemonset that hosts data movement modules. These include the Data Mover controller, uploader, and the repository. Example DataProtectionApplication manifest apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 # ... 1 The flag to enable the node agent. 2 The type of uploader. The possible values are restic or kopia . The built-in Data Mover uses Kopia as the default uploader mechanism regardless of the value of the uploaderType field. 3 The CSI plugin included in the list of default plugins. 4.9.1.2. Built-in Data Mover controller and custom resource definitions (CRDs) The built-in Data Mover feature introduces three new API objects defined as CRDs for managing backup and restore: DataDownload : Represents a data download of a volume snapshot. The CSI plugin creates one DataDownload object per volume to be restored. The DataDownload CR includes information about the target volume, the specified Data Mover, the progress of the current data download, the specified backup repository, and the result of the current data download after the process is complete. DataUpload : Represents a data upload of a volume snapshot. The CSI plugin creates one DataUpload object per CSI snapshot. The DataUpload CR includes information about the specified snapshot, the specified Data Mover, the specified backup repository, the progress of the current data upload, and the result of the current data upload after the process is complete. BackupRepository : Represents and manages the lifecycle of the backup repositories. OADP creates a backup repository per namespace when the first CSI snapshot backup or restore for a namespace is requested. 4.9.2. Backing up and restoring CSI snapshots You can back up and restore persistent volumes by using the OADP 1.3 Data Mover. 4.9.2.1. Backing up persistent volumes with CSI snapshots You can use the OADP Data Mover to back up Container Storage Interface (CSI) volume snapshots to a remote object store. Prerequisites You have access to the cluster with the cluster-admin role. You have installed the OADP Operator. You have included the CSI plugin and enabled the node agent in the DataProtectionApplication custom resource (CR). You have an application with persistent volumes running in a separate namespace. You have added the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR. Procedure Create a YAML file for the Backup object, as in the following example: Example Backup CR kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 1 storageLocation: default ttl: 720h0m0s volumeSnapshotLocations: - dpa-sample-1 # ... 1 Set to true to enable movement of CSI snapshots to remote object storage. Apply the manifest: USD oc create -f backup.yaml A DataUpload CR is created after the snapshot creation is complete. 
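Before you check the DataUpload objects in the verification that follows, you can confirm the overall phase of the Backup CR itself by using the same command pattern shown earlier in this document. The backup name matches the example manifest above:

$ oc get backup -n openshift-adp backup -o jsonpath='{.status.phase}'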
Verification Verify that the snapshot data is successfully transferred to the remote object store by monitoring the status.phase field of the DataUpload CR. Possible values are In Progress , Completed , Failed , or Canceled . The object store is configured in the backupLocations stanza of the DataProtectionApplication CR. Run the following command to get a list of all DataUpload objects: USD oc get datauploads -A Example output NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal Check the value of the status.phase field of the specific DataUpload object by running the following command: USD oc get datauploads <dataupload_name> -o yaml Example output apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: "" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: "2023-11-02T16:57:02Z" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: "2023-11-02T16:56:22Z" 1 Indicates that snapshot data is successfully transferred to the remote object store. 4.9.2.2. Restoring CSI volume snapshots You can restore a volume snapshot by creating a Restore CR. Note You cannot restore Volsync backups from OADP 1.2 with the OAPD 1.3 built-in Data Mover. It is recommended to do a file system backup of all of your workloads with Restic prior to upgrading to OADP 1.3. Prerequisites You have access to the cluster with the cluster-admin role. You have an OADP Backup CR from which to restore the data. Procedure Create a YAML file for the Restore CR, as in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup> # ... Apply the manifest: USD oc create -f restore.yaml A DataDownload CR is created when the restore starts. Verification You can monitor the status of the restore process by checking the status.phase field of the DataDownload CR. Possible values are In Progress , Completed , Failed , or Canceled . 
To get a list of all DataDownload objects, run the following command: USD oc get datadownloads -A Example output NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal Enter the following command to check the value of the status.phase field of the specific DataDownload object: USD oc get datadownloads <datadownload_name> -o yaml Example output apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: "" pvc: mysql status: completionTimestamp: "2023-11-02T17:01:24Z" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: "2023-11-02T17:00:52Z" 1 Indicates that the CSI snapshot data is successfully restored. 4.10. Troubleshooting You can debug Velero custom resources (CRs) by using the OpenShift CLI tool or the Velero CLI tool . The Velero CLI tool provides more detailed logs and information. You can check installation issues , backup and restore CR issues , and Restic issues . You can collect logs and CR information by using the must-gather tool . You can obtain the Velero CLI tool by: Downloading the Velero CLI tool Accessing the Velero binary in the Velero deployment in the cluster 4.10.1. Downloading the Velero CLI tool You can download and install the Velero CLI tool by following the instructions on the Velero documentation page . The page includes instructions for: macOS by using Homebrew GitHub Windows by using Chocolatey Prerequisites You have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. You have installed kubectl locally. Procedure Open a browser and navigate to "Install the CLI" on the Velero website . Follow the appropriate procedure for macOS, GitHub, or Windows. Download the Velero version appropriate for your version of OADP and OpenShift Container Platform. 4.10.1.1. OADP-Velero-OpenShift Container Platform version relationship OADP version Velero version OpenShift Container Platform version 1.1.0 1.9 4.9 and later 1.1.1 1.9 4.9 and later 1.1.2 1.9 4.9 and later 1.1.3 1.9 4.9 and later 1.1.4 1.9 4.9 and later 1.1.5 1.9 4.9 and later 1.1.6 1.9 4.11 and later 1.1.7 1.9 4.11 and later 1.2.0 1.11 4.11 and later 1.2.1 1.11 4.11 and later 1.2.2 1.11 4.11 and later 1.2.3 1.11 4.11 and later 4.10.2. Accessing the Velero binary in the Velero deployment in the cluster You can use a shell command to access the Velero binary in the Velero deployment in the cluster. Prerequisites Your DataProtectionApplication custom resource has a status of Reconcile complete . Procedure Enter the following command to set the needed alias: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' 4.10.3. Debugging Velero resources with the OpenShift CLI tool You can debug a failed backup or restore by checking Velero custom resources (CRs) and the Velero pod log with the OpenShift CLI tool. 
Velero CRs Use the oc describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc describe <velero_cr> <cr_name> Velero pod logs Use the oc logs command to retrieve the Velero pod logs: USD oc logs pod/<velero> Velero pod debug logs You can specify the Velero log level in the DataProtectionApplication resource as shown in the following example. Note This option is available starting from OADP 1.0.3. apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning The following logLevel values are available: trace debug info warning error fatal panic It is recommended to use debug for most logs. 4.10.4. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 4.10.5. Pods crash or restart due to lack of memory or CPU If a Velero or Restic pod crashes due to a lack of memory or CPU, you can set specific resource requests for either of those resources. Additional resources CPU and memory requirements 4.10.5.1. Setting resource requests for a Velero pod You can use the configuration.velero.podConfig.resourceAllocations specification field in the oadp_v1alpha1_dpa.yaml file to set specific resource requests for a Velero pod. Procedure Set the cpu and memory resource requests in the YAML file: Example Velero file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi 1 The resourceAllocations listed are for average usage. 4.10.5.2. Setting resource requests for a Restic pod You can use the configuration.restic.podConfig.resourceAllocations specification field to set specific resource requests for a Restic pod. Procedure Set the cpu and memory resource requests in the YAML file: Example Restic file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi 1 The resourceAllocations listed are for average usage. Important The values for the resource request fields must follow the same format as Kubernetes resource requirements. 
Also, if you do not specify configuration.velero.podConfig.resourceAllocations or configuration.restic.podConfig.resourceAllocations, the default resources specification for a Velero pod or a Restic pod is as follows: requests: cpu: 500m memory: 128Mi 4.10.6. Issues with Velero and admission webhooks Velero has limited abilities to resolve admission webhook issues during a restore. If you have workloads with admission webhooks, you might need to use an additional Velero plugin or make changes to how you restore the workload. Typically, workloads with admission webhooks require you to create a resource of a specific kind first. This is especially true if your workload has child resources because admission webhooks typically block child resources. For example, creating or restoring a top-level object such as service.serving.knative.dev typically creates child resources automatically. If you do this first, you will not need to use Velero to create and restore these resources. This avoids the problem of child resources being blocked by an admission webhook that Velero might use. 4.10.6.1. Restoring workarounds for Velero backups that use admission webhooks This section describes the additional steps required to restore resources for several types of Velero backups that use admission webhooks. 4.10.6.1.1. Restoring Knative resources You might encounter problems using Velero to back up Knative resources that use admission webhooks. You can avoid such problems by restoring the top level Service resource first whenever you back up and restore Knative resources that use admission webhooks. Procedure Restore the top level service.serving.knative.dev Service resource: USD velero restore <restore_name> \ --from-backup=<backup_name> --include-resources \ service.serving.knative.dev 4.10.6.1.2. Restoring IBM AppConnect resources If you experience issues when you use Velero to restore an IBM AppConnect resource that has an admission webhook, you can run the checks in this procedure. Procedure Check if you have any mutating admission plugins of kind: MutatingWebhookConfiguration in the cluster: USD oc get mutatingwebhookconfigurations Examine the YAML file of each kind: MutatingWebhookConfiguration to ensure that none of its rules block creation of the objects that are experiencing issues. For more information, see the official Kubernetes documentation. Check that any spec.version in type: Configuration.appconnect.ibm.com/v1beta1 used at backup time is supported by the installed Operator. 4.10.6.2. Velero plugins returning "received EOF, stopping recv loop" message Note Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred. Additional resources Admission plugins Webhook admission plugins Types of webhook admission plugins 4.10.7. Installation issues You might encounter issues caused by using invalid directories or incorrect credentials when you install the Data Protection Application. 4.10.7.1. Backup storage contains invalid directories The Velero pod log displays the error message, Backup storage contains invalid top-level directories. Cause The object storage contains top-level directories that are not Velero directories.
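For illustration only, with hypothetical bucket and directory names, a bucket that is shared with other tooling might look like the following; only the contents of the velero/ prefix are created by Velero (typically backups/ and restores/, plus the Restic data when file system backup is used):

my-bucket/
    app-exports/   (not created by Velero)
    logs/          (not created by Velero)
    velero/        (prefix dedicated to Velero)
        backups/
        restores/

Dedicating a prefix to Velero in this way, as described in the following solution, keeps unrelated top-level directories out of the location that Velero validates.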
Solution If the object storage is not dedicated to Velero, you must specify a prefix for the bucket by setting the spec.backupLocations.velero.objectStorage.prefix parameter in the DataProtectionApplication manifest. 4.10.7.2. Incorrect AWS credentials The oadp-aws-registry pod log displays the error message, InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records. The Velero pod log displays the error message, NoCredentialProviders: no valid providers in chain . Cause The credentials-velero file used to create the Secret object is incorrectly formatted. Solution Ensure that the credentials-velero file is correctly formatted, as in the following example: Example credentials-velero file 1 AWS default profile. 2 Do not enclose the values with quotation marks ( " , ' ). 4.10.8. OADP Operator issues The OpenShift API for Data Protection (OADP) Operator might encounter issues caused by problems it is not able to resolve. 4.10.8.1. OADP Operator fails silently The S3 buckets of an OADP Operator might be empty, but when you run the command oc get po -n <OADP_Operator_namespace> , you see that the Operator has a status of Running . In such a case, the Operator is said to have failed silently because it incorrectly reports that it is running. Cause The problem is caused when cloud credentials provide insufficient permissions. Solution Retrieve a list of backup storage locations (BSLs) and check the manifest of each BSL for credential issues. Procedure Run one of the following commands to retrieve a list of BSLs: Using the OpenShift CLI: USD oc get backupstoragelocation -A Using the Velero CLI: USD velero backup-location get -n <OADP_Operator_namespace> Using the list of BSLs, run the following command to display the manifest of each BSL, and examine each manifest for an error. USD oc get backupstoragelocation -n <namespace> -o yaml Example result apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: "2023-11-03T19:49:04Z" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: "24273698" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: "true" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: "2023-11-10T22:06:46Z" message: "BackupStorageLocation \"example-dpa-1\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54" phase: Unavailable kind: List metadata: resourceVersion: "" 4.10.9. OADP timeouts Extending a timeout allows complex or resource-intensive processes to complete successfully without premature termination. This configuration can reduce the likelihood of errors, retries, or failures. Ensure that you balance timeout extensions in a logical manner so that you do not configure excessively long timeouts that might hide underlying issues in the process. Carefully consider and monitor an appropriate timeout value that meets the needs of the process and the overall system performance. 
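For orientation before reading the subsections that follow: the Restic, resource, Data Mover, and default item operation timeouts are all set in the DataProtectionApplication (DPA) CR, while the CSI snapshot and item operation timeouts are set on the individual Backup and Restore CRs. The following is a minimal sketch, not a complete DPA, that shows where the DPA-level fields live; the values are the defaults described later and <dpa_name> is a placeholder:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    restic:
      timeout: 1h
    velero:
      resourceTimeout: 10m
      defaultItemOperationTimeout: 1h
  features:
    dataMover:
      timeout: 10m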
The following are various OADP timeouts, with instructions of how and when to implement these parameters: 4.10.9.1. Restic timeout timeout defines the Restic timeout. The default value is 1h . Use the Restic timeout for the following scenarios: For Restic backups with total PV data usage that is greater than 500GB. If backups are timing out with the following error: level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" Procedure Edit the values in the spec.configuration.restic.timeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: restic: timeout: 1h # ... 4.10.9.2. Velero resource timeout resourceTimeout defines how long to wait for several Velero resources before timeout occurs, such as Velero custom resource definition (CRD) availability, volumeSnapshot deletion, and repository availability. The default is 10m . Use the resourceTimeout for the following scenarios: For backups with total PV data usage that is greater than 1TB. This parameter is used as a timeout value when Velero tries to clean up or delete the Container Storage Interface (CSI) snapshots, before marking the backup as complete. A sub-task of this cleanup tries to patch VSC and this timeout can be used for that task. To create or ensure a backup repository is ready for filesystem based backups for Restic or Kopia. To check if the Velero CRD is available in the cluster before restoring the custom resource (CR) or resource from the backup. Procedure Edit the values in the spec.configuration.velero.resourceTimeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m # ... 4.10.9.3. Data Mover timeout timeout is a user-supplied timeout to complete VolumeSnapshotBackup and VolumeSnapshotRestore . The default value is 10m . Use the Data Mover timeout for the following scenarios: If creation of VolumeSnapshotBackups (VSBs) and VolumeSnapshotRestores (VSRs), times out after 10 minutes. For large scale environments with total PV data usage that is greater than 500GB. Set the timeout for 1h . With the VolumeSnapshotMover (VSM) plugin. Only with OADP 1.1.x. Procedure Edit the values in the spec.features.dataMover.timeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m # ... 4.10.9.4. CSI snapshot timeout CSISnapshotTimeout specifies the time during creation to wait until the CSI VolumeSnapshot status becomes ReadyToUse , before returning error as timeout. The default value is 10m . Use the CSISnapshotTimeout for the following scenarios: With the CSI plugin. For very large storage volumes that may take longer than 10 minutes to snapshot. Adjust this timeout if timeouts are found in the logs. Note Typically, the default value for CSISnapshotTimeout does not require adjustment, because the default setting can accommodate large storage volumes. Procedure Edit the values in the spec.csiSnapshotTimeout block of the Backup CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m # ... 4.10.9.5. 
Velero default item operation timeout defaultItemOperationTimeout defines how long to wait on asynchronous BackupItemActions and RestoreItemActions to complete before timing out. The default value is 1h . Use the defaultItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. To specify the amount of time a particular backup or restore should wait for the Asynchronous actions to complete. In the context of OADP features, this value is used for the Asynchronous actions involved in the Container Storage Interface (CSI) Data Mover feature. When defaultItemOperationTimeout is defined in the Data Protection Application (DPA) using the defaultItemOperationTimeout , it applies to both backup and restore operations. You can use itemOperationTimeout to define only the backup or only the restore of those CRs, as described in the following "Item operation timeout - restore", and "Item operation timeout - backup" sections. Procedure Edit the values in the spec.configuration.velero.defaultItemOperationTimeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h # ... 4.10.9.6. Item operation timeout - restore ItemOperationTimeout specifies the time that is used to wait for RestoreItemAction operations. The default value is 1h . Use the restore ItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. For Data Mover uploads and downloads to or from the BackupStorageLocation . If the restore action is not completed when the timeout is reached, it will be marked as failed. If Data Mover operations are failing due to timeout issues, because of large storage volume sizes, then this timeout setting may need to be increased. Procedure Edit the values in the Restore.spec.itemOperationTimeout block of the Restore CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h # ... 4.10.9.7. Item operation timeout - backup ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations. The default value is 1h . Use the backup ItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. For Data Mover uploads and downloads to or from the BackupStorageLocation . If the backup action is not completed when the timeout is reached, it will be marked as failed. If Data Mover operations are failing due to timeout issues, because of large storage volume sizes, then this timeout setting may need to be increased. Procedure Edit the values in the Backup.spec.itemOperationTimeout block of the Backup CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h # ... 4.10.10. Backup and Restore CR issues You might encounter these common issues with Backup and Restore custom resources (CRs). 4.10.10.1. Backup CR cannot retrieve volume The Backup CR displays the error message, InvalidVolume.NotFound: The volume 'vol-xxxx' does not exist . Cause The persistent volume (PV) and the snapshot locations are in different regions. Solution Edit the value of the spec.snapshotLocations.velero.config.region key in the DataProtectionApplication manifest so that the snapshot location is in the same region as the PV. Create a new Backup CR. 4.10.10.2. 
Backup CR status remains in progress The status of a Backup CR remains in the InProgress phase and does not complete. Cause If a backup is interrupted, it cannot be resumed. Solution Retrieve the details of the Backup CR: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ backup describe <backup> Delete the Backup CR: USD oc delete backup <backup> -n openshift-adp You do not need to clean up the backup location because a Backup CR in progress has not uploaded files to object storage. Create a new Backup CR. 4.10.10.3. Backup CR status remains in PartiallyFailed The status of a Backup CR without Restic in use remains in the PartiallyFailed phase and does not complete. A snapshot of the affiliated PVC is not created. Cause If the backup is created based on the CSI snapshot class, but the label is missing, CSI snapshot plugin fails to create a snapshot. As a result, the Velero pod logs an error similar to the following: + time="2023-02-17T16:33:13Z" level=error msg="Error backing up item" backup=openshift-adp/user1-backup-check5 error="error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label" logSource="/remote-source/velero/app/pkg/backup/backup.go:417" name=busybox-79799557b5-vprq Solution Delete the Backup CR: USD oc delete backup <backup> -n openshift-adp If required, clean up the stored data on the BackupStorageLocation to free up space. Apply label velero.io/csi-volumesnapshot-class=true to the VolumeSnapshotClass object: USD oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true Create a new Backup CR. 4.10.11. Restic issues You might encounter these issues when you back up applications with Restic. 4.10.11.1. Restic permission error for NFS data volumes with root_squash enabled The Restic pod log displays the error message: controller=pod-volume-backup error="fork/exec/usr/bin/restic: permission denied" . Cause If your NFS data volumes have root_squash enabled, Restic maps to nfsnobody and does not have permission to create backups. Solution You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the DataProtectionApplication manifest: Create a supplemental group for Restic on the NFS data volume. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the spec.configuration.restic.supplementalGroups parameter and the group ID to the DataProtectionApplication manifest, as in the following example: spec: configuration: restic: enable: true supplementalGroups: - <group_id> 1 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 4.10.11.2. Restic Backup CR cannot be recreated after bucket is emptied If you create a Restic Backup CR for a namespace, empty the object storage bucket, and then recreate the Backup CR for the same namespace, the recreated Backup CR fails. The velero pod log displays the following error message: stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location? . 
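To confirm that a stale repository record is the source of this error, you can list the ResticRepository resources that Velero still tracks. The command below is a suggested check and assumes the default openshift-adp install namespace:

$ oc -n openshift-adp get resticrepositories.velero.io

The repository that points at the deleted object storage path is the one that the following solution removes.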
Cause Velero does not recreate or update the Restic repository from the ResticRepository manifest if the Restic directories are deleted from object storage. See Velero issue 4421 for more information. Solution Remove the related Restic repository from the namespace by running the following command: USD oc delete resticrepository openshift-adp <name_of_the_restic_repository> In the following error log, mysql-persistent is the problematic Restic repository. The name of the repository appears in italics for clarity. time="2021-12-29T18:29:14Z" level=info msg="1 errors encountered backup up item" backup=velero/backup65 logSource="pkg/backup/backup.go:431" name=mysql-7d99fc949-qbkds time="2021-12-29T18:29:14Z" level=error msg="Error backing up item" backup=velero/backup65 error="pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location?\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \n: exit status 1" error.file="/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184" error.function="github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes" logSource="pkg/backup/backup.go:435" name=mysql-7d99fc949-qbkds 4.10.12. Using the must-gather tool You can collect logs, metrics, and information about OADP custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.1 The data is saved as must-gather/must-gather.tar.gz . You can upload this file to a support case on the Red Hat Customer Portal . USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.1 \ -- /usr/bin/gather_metrics_dump This operation can take a long time. The data is saved as must-gather/metrics/prom_data.tar.gz . 4.10.12.1. Combining options when using the must-gather tool Currently, it is not possible to combine must-gather scripts, for example specifying a timeout threshold while permitting insecure TLS connections. In some situations, you can get around this limitation by setting up internal variables on the must-gather command line, such as the following example: USD oc adm must-gather --image=brew.registry.redhat.io/rh-osbs/oadp-oadp-mustgather-rhel8:1.1.1-8 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds> In this example, set the skip_tls variable before running the gather_with_timeout script. The result is a combination of gather_with_timeout and gather_without_tls . The only other variables that you can specify this way are the following: logs_since , with a default value of 72h request_timeout , with a default value of 0s 4.10.13. OADP Monitoring The OpenShift Container Platform provides a monitoring stack that allows users and administrators to effectively monitor and manage their clusters, as well as monitor and analyze the workload performance of user applications and services running on the clusters, including receiving alerts if an event occurs. 
Additional resources Monitoring stack 4.10.13.1. OADP monitoring setup The OADP Operator leverages an OpenShift User Workload Monitoring provided by the OpenShift Monitoring Stack for retrieving metrics from the Velero service endpoint. The monitoring stack allows creating user-defined Alerting Rules or querying metrics by using the OpenShift Metrics query front end. With enabled User Workload Monitoring, it is possible to configure and use any Prometheus-compatible third-party UI, such as Grafana, to visualize Velero metrics. Monitoring metrics requires enabling monitoring for the user-defined projects and creating a ServiceMonitor resource to scrape those metrics from the already enabled OADP service endpoint that resides in the openshift-adp namespace. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have created a cluster monitoring config map. Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring Add or enable the enableUserWorkload option in the data section's config.yaml field: apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata: # ... 1 Add this option or set to true Wait a short period of time to verify the User Workload Monitoring Setup by checking if the following components are up and running in the openshift-user-workload-monitoring namespace: USD oc get pods -n openshift-user-workload-monitoring Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s Verify the existence of the user-workload-monitoring-config ConfigMap in the openshift-user-workload-monitoring . If it exists, skip the remaining steps in this procedure. USD oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring Example output Error from server (NotFound): configmaps "user-workload-monitoring-config" not found Create a user-workload-monitoring-config ConfigMap object for the User Workload Monitoring, and save it under the 2_configure_user_workload_monitoring.yaml file name: Example output apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | Apply the 2_configure_user_workload_monitoring.yaml file: USD oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created 4.10.13.2. Creating OADP service monitor OADP provides an openshift-adp-velero-metrics-svc service which is created when the DPA is configured. The service monitor used by the user workload monitoring must point to the defined service. Get details about the service by running the following commands: Procedure Ensure the openshift-adp-velero-metrics-svc service exists. It should contain app.kubernetes.io/name=velero label, which will be used as selector for the ServiceMonitor object. USD oc get svc -n openshift-adp -l app.kubernetes.io/name=velero Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h Create a ServiceMonitor YAML file that matches the existing service label, and save the file as 3_create_oadp_service_monitor.yaml . 
The service monitor is created in the openshift-adp namespace where the openshift-adp-velero-metrics-svc service resides. Example ServiceMonitor object apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: "velero" Apply the 3_create_oadp_service_monitor.yaml file: USD oc apply -f 3_create_oadp_service_monitor.yaml Example output servicemonitor.monitoring.coreos.com/oadp-service-monitor created Verification Confirm that the new service monitor is in an Up state by using the Administrator perspective of the OpenShift Container Platform web console: Navigate to the Observe Targets page. Ensure that the Filter is unselected, or that the User source is selected, and type openshift-adp in the Text search field. Verify that the Status for the service monitor is Up. Figure 4.1. OADP metrics targets 4.10.13.3. Creating an alerting rule The OpenShift Container Platform monitoring stack allows you to receive Alerts configured by using Alerting Rules. To create an Alerting rule for the OADP project, use one of the metrics that are scraped by the user workload monitoring. Procedure Create a PrometheusRule YAML file with the sample OADPBackupFailing alert and save it as 4_create_oadp_alert_rule.yaml. Sample OADPBackupFailing alert apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job="openshift-adp-velero-metrics-svc"}[2h]) > 0 for: 5m labels: severity: warning In this sample, the Alert displays under the following conditions: There is an increase of new failing backups during the last 2 hours that is greater than 0 and the state persists for at least 5 minutes. If the time of the first increase is less than 5 minutes, the Alert will be in a Pending state, after which it will turn into a Firing state. Apply the 4_create_oadp_alert_rule.yaml file, which creates the PrometheusRule object in the openshift-adp namespace: USD oc apply -f 4_create_oadp_alert_rule.yaml Example output prometheusrule.monitoring.coreos.com/sample-oadp-alert created Verification After the Alert is triggered, you can view it in the following ways: In the Developer perspective, select the Observe menu. In the Administrator perspective under the Observe Alerting menu, select User in the Filter box. Otherwise, by default only the Platform Alerts are displayed. Figure 4.2. OADP backup failing alert Additional resources Managing alerts 4.10.13.4. List of available metrics The following is the list of metrics provided by OADP, together with their Types.
Metric name Description Type kopia_content_cache_hit_bytes Number of bytes retrieved from the cache Counter kopia_content_cache_hit_count Number of times content was retrieved from the cache Counter kopia_content_cache_malformed Number of times malformed content was read from the cache Counter kopia_content_cache_miss_count Number of times content was not found in the cache and fetched Counter kopia_content_cache_missed_bytes Number of bytes retrieved from the underlying storage Counter kopia_content_cache_miss_error_count Number of times content could not be found in the underlying storage Counter kopia_content_cache_store_error_count Number of times content could not be saved in the cache Counter kopia_content_get_bytes Number of bytes retrieved using GetContent() Counter kopia_content_get_count Number of times GetContent() was called Counter kopia_content_get_error_count Number of times GetContent() was called and the result was an error Counter kopia_content_get_not_found_count Number of times GetContent() was called and the result was not found Counter kopia_content_write_bytes Number of bytes passed to WriteContent() Counter kopia_content_write_count Number of times WriteContent() was called Counter velero_backup_attempt_total Total number of attempted backups Counter velero_backup_deletion_attempt_total Total number of attempted backup deletions Counter velero_backup_deletion_failure_total Total number of failed backup deletions Counter velero_backup_deletion_success_total Total number of successful backup deletions Counter velero_backup_duration_seconds Time taken to complete backup, in seconds Histogram velero_backup_failure_total Total number of failed backups Counter velero_backup_items_errors Total number of errors encountered during backup Gauge velero_backup_items_total Total number of items backed up Gauge velero_backup_last_status Last status of the backup. A value of 1 is success, 0. Gauge velero_backup_last_successful_timestamp Last time a backup ran successfully, Unix timestamp in seconds Gauge velero_backup_partial_failure_total Total number of partially failed backups Counter velero_backup_success_total Total number of successful backups Counter velero_backup_tarball_size_bytes Size, in bytes, of a backup Gauge velero_backup_total Current number of existent backups Gauge velero_backup_validation_failure_total Total number of validation failed backups Counter velero_backup_warning_total Total number of warned backups Counter velero_csi_snapshot_attempt_total Total number of CSI attempted volume snapshots Counter velero_csi_snapshot_failure_total Total number of CSI failed volume snapshots Counter velero_csi_snapshot_success_total Total number of CSI successful volume snapshots Counter velero_restore_attempt_total Total number of attempted restores Counter velero_restore_failed_total Total number of failed restores Counter velero_restore_partial_failure_total Total number of partially failed restores Counter velero_restore_success_total Total number of successful restores Counter velero_restore_total Current number of existent restores Gauge velero_restore_validation_failed_total Total number of failed restores failing validations Counter velero_volume_snapshot_attempt_total Total number of attempted volume snapshots Counter velero_volume_snapshot_failure_total Total number of failed volume snapshots Counter velero_volume_snapshot_success_total Total number of successful volume snapshots Counter 4.10.13.5. 
Viewing metrics using the Observe UI You can view metrics in the OpenShift Container Platform web console from the Administrator or Developer perspective, which must have access to the openshift-adp project. Procedure Navigate to the Observe Metrics page: If you are using the Developer perspective, follow these steps: Select Custom query , or click on the Show PromQL link. Type the query and click Enter . If you are using the Administrator perspective, type the expression in the text field and select Run Queries . Figure 4.3. OADP metrics query 4.11. APIs used with OADP The document provides information about the following APIs that you can use with OADP: Velero API OADP API 4.11.1. Velero API Velero API documentation is maintained by Velero, not by Red Hat. It can be found at Velero API types . 4.11.2. OADP API The following tables provide the structure of the OADP API: Table 4.2. DataProtectionApplicationSpec Property Type Description backupLocations [] BackupLocation Defines the list of configurations to use for BackupStorageLocations . snapshotLocations [] SnapshotLocation Defines the list of configurations to use for VolumeSnapshotLocations . unsupportedOverrides map [ UnsupportedImageKey ] string Can be used to override the deployed dependent images for development. Options are veleroImageFqin , awsPluginImageFqin , openshiftPluginImageFqin , azurePluginImageFqin , gcpPluginImageFqin , csiPluginImageFqin , dataMoverImageFqin , resticRestoreImageFqin , kubevirtPluginImageFqin , and operator-type . podAnnotations map [ string ] string Used to add annotations to pods deployed by Operators. podDnsPolicy DNSPolicy Defines the configuration of the DNS of a pod. podDnsConfig PodDNSConfig Defines the DNS parameters of a pod in addition to those generated from DNSPolicy . backupImages * bool Used to specify whether or not you want to deploy a registry for enabling backup and restore of images. configuration * ApplicationConfig Used to define the data protection application's server configuration. features * Features Defines the configuration for the DPA to enable the Technology Preview features. Complete schema definitions for the OADP API . Table 4.3. BackupLocation Property Type Description velero * velero.BackupStorageLocationSpec Location to store volume snapshots, as described in Backup Storage Location . bucket * CloudStorageLocation [Technology Preview] Automates creation of a bucket at some cloud storage providers for use as a backup storage location. Important The bucket parameter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Complete schema definitions for the type BackupLocation . Table 4.4. SnapshotLocation Property Type Description velero * VolumeSnapshotLocationSpec Location to store volume snapshots, as described in Volume Snapshot Location . Complete schema definitions for the type SnapshotLocation . Table 4.5. ApplicationConfig Property Type Description velero * VeleroConfig Defines the configuration for the Velero server. restic * ResticConfig Defines the configuration for the Restic server. 
Complete schema definitions for the type ApplicationConfig . Table 4.6. VeleroConfig Property Type Description featureFlags [] string Defines the list of features to enable for the Velero instance. defaultPlugins [] string The following types of default Velero plugins can be installed: aws , azure , csi , gcp , kubevirt , and openshift . customPlugins [] CustomPlugin Used for installation of custom Velero plugins. Default and custom plugins are described in OADP plugins restoreResourcesVersionPriority string Represents a config map that is created if defined for use in conjunction with the EnableAPIGroupVersions feature flag. Defining this field automatically adds EnableAPIGroupVersions to the Velero server feature flag. noDefaultBackupLocation bool To install Velero without a default backup storage location, you must set the noDefaultBackupLocation flag in order to confirm installation. podConfig * PodConfig Defines the configuration of the Velero pod. logLevel string Velero server's log level (use debug for the most granular logging, leave unset for Velero default). Valid options are trace , debug , info , warning , error , fatal , and panic . Complete schema definitions for the type VeleroConfig . Table 4.7. CustomPlugin Property Type Description name string Name of custom plugin. image string Image of custom plugin. Complete schema definitions for the type CustomPlugin . Table 4.8. ResticConfig Property Type Description enable * bool If set to true , enables backup and restore using Restic. If set to false , snapshots are needed. supplementalGroups [] int64 Defines the Linux groups to be applied to the Restic pod. timeout string A user-supplied duration string that defines the Restic timeout. Default value is 1hr (1 hour). A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as 300ms , -1.5h` or 2h45m . Valid time units are ns , us (or ms ), ms , s , m , and h . podConfig * PodConfig Defines the configuration of the Restic pod. Complete schema definitions for the type ResticConfig . Table 4.9. PodConfig Property Type Description nodeSelector map [ string ] string Defines the nodeSelector to be supplied to a Velero podSpec or a Restic podSpec . tolerations [] Toleration Defines the list of tolerations to be applied to a Velero deployment or a Restic daemonset . resourceAllocations ResourceRequirements Set specific resource limits and requests for a Velero pod or a Restic pod as described in Setting Velero CPU and memory resource allocations . labels map [ string ] string Labels to add to pods. Complete schema definitions for the type PodConfig . Table 4.10. Features Property Type Description dataMover * DataMover Defines the configuration of the Data Mover. Complete schema definitions for the type Features . Table 4.11. DataMover Property Type Description enable bool If set to true , deploys the volume snapshot mover controller and a modified CSI Data Mover plugin. If set to false , these are not deployed. credentialName string User-supplied Restic Secret name for Data Mover. timeout string A user-supplied duration string for VolumeSnapshotBackup and VolumeSnapshotRestore to complete. Default is 10m (10 minutes). A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as 300ms , -1.5h` or 2h45m . Valid time units are ns , us (or ms ), ms , s , m , and h . The OADP API is more fully detailed in OADP Operator . 4.12. 
Advanced OADP features and functionalities This document provides information about advanced features and functionalities of OpenShift API for Data Protection (OADP). 4.12.1. Working with different Kubernetes API versions on the same cluster 4.12.1.1. Listing the Kubernetes API group versions on a cluster A source cluster might offer multiple versions of an API, where one of these versions is the preferred API version. For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups. If you use Velero to back up and restore such a source cluster, Velero backs up only the version of that resource that uses the preferred version of its Kubernetes API. To return to the above example, if example.com/v1 is the preferred API, then Velero only backs up the version of a resource that uses example.com/v1. Moreover, the target cluster needs to have example.com/v1 registered in its set of available API resources in order for Velero to restore the resource on the target cluster. Therefore, you need to generate a list of the Kubernetes API group versions on your target cluster to be sure the preferred API version is registered in its set of available API resources. Procedure Enter the following command: USD oc api-resources 4.12.1.2. About Enable API Group Versions By default, Velero only backs up resources that use the preferred version of the Kubernetes API. However, Velero also includes a feature, Enable API Group Versions, that overcomes this limitation. When enabled on the source cluster, this feature causes Velero to back up all Kubernetes API group versions that are supported on the cluster, not only the preferred one. After the versions are stored in the backup .tar file, they are available to be restored on the destination cluster. For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups, with example.com/v1 being the preferred API. Without the Enable API Group Versions feature enabled, Velero backs up only the preferred API group version for Example, which is example.com/v1. With the feature enabled, Velero also backs up example.com/v1beta2. When the Enable API Group Versions feature is enabled on the destination cluster, Velero selects the version to restore on the basis of the order of priority of API group versions. Note Enable API Group Versions is still in beta. Velero uses the following algorithm to assign priorities to API versions, with 1 as the top priority: Preferred version of the destination cluster Preferred version of the source cluster Common non-preferred supported version with the highest Kubernetes version priority Additional resources Enable API Group Versions Feature 4.12.1.3. Using Enable API Group Versions You can use Velero's Enable API Group Versions feature to back up all Kubernetes API group versions that are supported on a cluster, not only the preferred one. Note Enable API Group Versions is still in beta. Procedure Configure the EnableAPIGroupVersions feature flag: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: featureFlags: - EnableAPIGroupVersions Additional resources Enable API Group Versions Feature 4.12.2. Backing up data from one cluster and restoring it to another cluster 4.12.2.1.
About backing up data from one cluster and restoring it on another cluster OpenShift API for Data Protection (OADP) is designed to back up and restore application data in the same OpenShift Container Platform cluster. Migration Toolkit for Containers (MTC) is designed to migrate containers, including application data, from one OpenShift Container Platform cluster to another cluster. You can use OADP to back up application data from one OpenShift Container Platform cluster and restore it on another cluster. However, doing so is more complicated than using MTC or using OADP to back up and restore on the same cluster. To successfully use OADP to back up data from one cluster and restore it to another cluster, you must take into account the following factors, in addition to the prerequisites and procedures that apply to using OADP to back up and restore data on the same cluster: Operators Use of Velero UID and GID ranges 4.12.2.1.1. Operators You must exclude Operators from the backup of an application for backup and restore to succeed. 4.12.2.1.2. Use of Velero Velero, which OADP is built upon, does not natively support migrating persistent volume snapshots across cloud providers. To migrate volume snapshot data between cloud platforms, you must either enable the Velero Restic file system backup option, which backs up volume contents at the file system level, or use the OADP Data Mover for CSI snapshots. Note In OADP 1.1 and earlier, the Velero Restic file system backup option is called restic . In OADP 1.2 and later, the Velero Restic file system backup option is called file-system-backup . You must also use Velero's File System Backup to migrate data between AWS regions or between Microsoft Azure regions. Velero does not support restoring data to a cluster with an earlier Kubernetes version than the source cluster. It is theoretically possible to migrate workloads to a destination with a later Kubernetes version than the source, but you must consider the compatibility of API groups between clusters for each custom resource. If a Kubernetes version upgrade breaks the compatibility of core or native API groups, you must first update the impacted custom resources. 4.12.2.2. About determining which pod volumes to back up Before you start a backup operation by using File System Backup (FSB), you must specify which pods contain a volume that you want to back up. Velero refers to this process as "discovering" the appropriate pod volumes. Velero supports two approaches for determining pod volumes: Opt-in approach : The opt-in approach requires that you actively indicate that you want to include - opt-in - a volume in a backup. You do this by labelling each pod that contains a volume to be backed up with the name of the volume. When Velero finds a persistent volume (PV), it checks the pod that mounted the volume. If the pod is labelled with the name of the volume, Velero backs up the pod. Opt-out approach : With the opt-out approach, you must actively specify that you want to exclude a volume from a backup. You do this by labelling each pod that contains a volume you do not want to back up with the name of the volume. When Velero finds a PV, it checks the pod that mounted the volume. If the pod is labelled with the volume's name, Velero does not back up the pod. 4.12.2.2.1. Limitations FSB does not support backing up and restoring hostpath volumes. However, FSB does support backing up and restoring local volumes. Velero uses a static, common encryption key for all backup repositories it creates. 
This static key means that anyone who can access your backup storage can also decrypt your backup data. It is essential that you limit access to backup storage. For PVCs, every incremental backup chain is maintained across pod reschedules. For pod volumes that are not PVCs, such as emptyDir volumes, if a pod is deleted or recreated, for example, by a ReplicaSet or a deployment, the backup of those volumes will be a full backup and not an incremental backup. It is assumed that the lifecycle of a pod volume is defined by its pod. Even though backup data can be kept incrementally, backing up large files, such as a database, can take a long time. This is because FSB uses deduplication to find the difference that needs to be backed up. FSB reads and writes data from volumes by accessing the file system of the node on which the pod is running. For this reason, FSB can only back up volumes that are mounted from a pod and not directly from a PVC. Some Velero users have overcome this limitation by running a staging pod, such as a BusyBox or Alpine container with an infinite sleep, to mount these PVC and PV pairs before performing a Velero backup. FSB expects volumes to be mounted under <hostPath>/<pod UID>, with <hostPath> being configurable. Some Kubernetes systems, for example, vCluster, do not mount volumes under the <pod UID> subdirectory, and FSB does not work with them as expected. 4.12.2.2.2. Backing up pod volumes by using the opt-in method You can use the opt-in method to specify which volumes need to be backed up by File System Backup (FSB). You can do this by using the backup.velero.io/backup-volumes annotation. Procedure On each pod that contains one or more volumes that you want to back up, enter the following command: USD oc -n <your_pod_namespace> annotate pod/<your_pod_name> \ backup.velero.io/backup-volumes=<your_volume_name_1>, \ <your_volume_name_2>,...,<your_volume_name_n> where: <your_volume_name_x> specifies the name of the xth volume in the pod specification. 4.12.2.2.3. Backing up pod volumes by using the opt-out method When using the opt-out approach, all pod volumes are backed up by using File System Backup (FSB), although there are some exceptions: Volumes that mount the default service account token, secrets, and configuration maps. hostPath volumes You can use the opt-out method to specify which volumes not to back up. You can do this by using the backup.velero.io/backup-volumes-excludes annotation. Procedure On each pod that contains one or more volumes that you do not want to back up, run the following command: USD oc -n <your_pod_namespace> annotate pod/<your_pod_name> \ backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \ <your_volume_name_2>,...,<your_volume_name_n> where: <your_volume_name_x> specifies the name of the xth volume in the pod specification. Note You can enable this behavior for all Velero backups by running the velero install command with the --default-volumes-to-fs-backup flag. 4.12.2.3. UID and GID ranges If you back up data from one cluster and restore it to another cluster, problems might occur with UID (User ID) and GID (Group ID) ranges. The following section explains these potential issues and mitigations: Summary of the issues The namespace UID and GID ranges might change depending on the destination cluster. OADP does not back up and restore OpenShift UID range metadata. If the backed up application requires a specific UID, ensure the range is available upon restore.
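If you need to compare the ranges that were assigned on the source and destination clusters, you can inspect the namespace annotations directly; <namespace> is a placeholder:

$ oc get namespace <namespace> -o yaml

Review the openshift.io/sa.scc.uid-range, openshift.io/sa.scc.supplemental-groups, and openshift.io/sa.scc.mcs values under metadata.annotations; these are the annotations covered in the detailed description that follows.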
For more information about OpenShift's UID and GID ranges, see A Guide to OpenShift and UIDs . Detailed description of the issues When you create a namespace in OpenShift Container Platform by using the shell command oc create namespace , OpenShift Container Platform assigns the namespace a unique User ID (UID) range from its available pool of UIDs, a Supplemental Group (GID) range, and unique SELinux MCS labels. This information is stored in the metadata.annotations field of the cluster. This information is part of the Security Context Constraints (SCC) annotations, which comprise of the following components: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range When you use OADP to restore the namespace, it automatically uses the information in metadata.annotations without resetting it for the destination cluster. As a result, the workload might not have access to the backed up data if any of the following is true: There is an existing namespace with other SCC annotations, for example, on another cluster. In this case, OADP uses the existing namespace during the backup instead of the namespace you want to restore. A label selector was used during the backup, but the namespace in which the workloads are executed does not have the label. In this case, OADP does not back up the namespace, but creates a new namespace during the restore that does not contain the annotations of the backed up namespace. This results in a new UID range being assigned to the namespace. This can be an issue for customer workloads if OpenShift Container Platform assigns a pod a securityContext UID to a pod based on namespace annotations that have changed since the persistent volume data was backed up. The UID of the container no longer matches the UID of the file owner. An error occurs because OpenShift Container Platform has not changed the UID range of the destination cluster to match the backup cluster data. As a result, the backup cluster has a different UID than the destination cluster, which means that the application cannot read or write data on the destination cluster. Mitigations You can use one or more of the following mitigations to resolve the UID and GID range issues: Simple mitigations: If you use a label selector in the Backup CR to filter the objects to include in the backup, be sure to add this label selector to the namespace that contains the workspace. Remove any pre-existing version of a namespace on the destination cluster before attempting to restore a namespace with the same name. Advanced mitigations: Fix UID ranges after migration by Resolving overlapping UID ranges in OpenShift namespaces after migration . Step 1 is optional. For an in-depth discussion of UID and GID ranges in OpenShift Container Platform with an emphasis on overcoming issues in backing up data on one cluster and restoring it on another, see A Guide to OpenShift and UIDs . 4.12.2.4. Backing up data from one cluster and restoring it to another cluster In general, you back up data from one OpenShift Container Platform cluster and restore it on another OpenShift Container Platform cluster in the same way that you back up and restore data to the same cluster. However, there are some additional prerequisites and differences in the procedure when backing up data from one OpenShift Container Platform cluster and restoring it on another. 
Prerequisites All relevant prerequisites for backing up and restoring on your platform (for example, AWS, Microsoft Azure, GCP, and so on), especially the prerequisites for the Data Protection Application (DPA), are described in the relevant sections of this guide. Procedure Make the following additions to the procedures given for your platform: Ensure that the backup store location (BSL) and volume snapshot location have the same names and paths to restore resources to another cluster. Share the same object storage location credentials across the clusters. For best results, use OADP to create the namespace on the destination cluster. If you use the Velero file-system-backup option, enable the --default-volumes-to-fs-backup flag for use during backup by running the following command: USD velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options> Note In OADP 1.2 and later, the Velero Restic option is called file-system-backup . 4.12.3. Additional resources For more information about API group versions, see Working with different Kubernetes API versions on the same cluster . For more information about OADP Data Mover, see Using Data Mover for CSI snapshots . For more information about using Restic with OADP, see Backing up applications with Restic .
[ "time=\"2022-11-23T15:40:46Z\" level=info msg=\"1 errors encountered backup up item\" backup=openshift-adp/django-persistent-67a5b83d-6b44-11ed-9cba-902e163f806c logSource=\"/remote-source/velero/app/pkg/backup/backup.go:413\" name=django-psql-persistent time=\"2022-11-23T15:40:46Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/django-persistent-67a5b83d-6b44-11ed-9cba-902e163f8", "admission webhook \"clusterrolebindings-validation.managed.openshift.io\" denied the request: Deleting ClusterRoleBinding must-gather-p7vwj is not allowed", "oc annotate --overwrite namespace/openshift-adp volsync.backube/privileged-movers='true'", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin", "resources: mds: limits: cpu: \"3\" memory: 128Gi requests: cpu: \"3\" memory: 8Gi", "BUCKET=<your_bucket>", "REGION=<your_region>", "aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1", "aws iam create-user --user-name velero 1", "cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF", "aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json", "aws iam create-access-key --user-name velero", "{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }", "cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "[backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: \"backupStorage\" credential: key: cloud name: cloud-credentials snapshotLocations: - name: default velero: provider: aws config: region: us-west-2 profile: \"volumeSnapshot\"", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> 
spec: configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift 1 - aws resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 5 prefix: <prefix> 6 config: region: <region> profile: \"default\" credential: key: cloud name: cloud-credentials 7 snapshotLocations: 8 - name: default velero: provider: aws config: region: <region> 9 profile: \"default\"", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1", "az login", "AZURE_RESOURCE_GROUP=Velero_Backups", "az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1", "AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"", "az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot", "BLOB_CONTAINER=velero", "az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID", "AZURE_STORAGE_ACCOUNT_ACCESS_KEY=`az storage account keys list --account-name USDAZURE_STORAGE_ACCOUNT_ID --query \"[?keyName == 'key1'].value\" -o tsv`", "AZURE_ROLE=Velero az role definition create --role-definition '{ \"Name\": \"'USDAZURE_ROLE'\", \"Description\": \"Velero related permissions to perform backups, restores and deletions\", \"Actions\": [ \"Microsoft.Compute/disks/read\", \"Microsoft.Compute/disks/write\", \"Microsoft.Compute/disks/endGetAccess/action\", \"Microsoft.Compute/disks/beginGetAccess/action\", \"Microsoft.Compute/snapshots/read\", \"Microsoft.Compute/snapshots/write\", \"Microsoft.Compute/snapshots/delete\", \"Microsoft.Storage/storageAccounts/listkeys/action\", \"Microsoft.Storage/storageAccounts/regeneratekey/action\" ], \"AssignableScopes\": [\"/subscriptions/'USDAZURE_SUBSCRIPTION_ID'\"] }'", "cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} 
AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_STORAGE_ACCOUNT_ACCESS_KEY=USD{AZURE_STORAGE_ACCOUNT_ACCESS_KEY} 1 AZURE_CLOUD_NAME=AzurePublicCloud EOF", "oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - azure - openshift 1 resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 5 storageAccount: <azure_storage_account_id> 6 subscriptionId: <azure_subscription_id> 7 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 8 provider: azure default: true objectStorage: bucket: <bucket_name> 9 prefix: <prefix> 10 snapshotLocations: 11 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: 
defaultPlugins: - openshift - csi 1", "gcloud auth login", "BUCKET=<bucket> 1", "gsutil mb gs://USDBUCKET/", "PROJECT_ID=USD(gcloud config get-value project)", "gcloud iam service-accounts create velero --display-name \"Velero service account\"", "gcloud iam service-accounts list", "SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')", "ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )", "gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"", "gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server", "gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}", "gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL", "oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - gcp - openshift 1 resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - velero: provider: gcp default: true credential: key: cloud name: cloud-credentials-gcp 5 objectStorage: bucket: <bucket_name> 6 prefix: <prefix> 7 snapshotLocations: 8 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 9", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY 
UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1", "cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: profile: \"default\" region: minio s3Url: <url> insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - aws - openshift 1 resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - velero: config: profile: \"default\" region: minio s3Url: <url> 5 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials 6 objectStorage: bucket: <bucket_name> 7 prefix: <prefix> 8", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: 
oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1", "oc get backupStorageLocations -n openshift-adp", "NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s labelSelector: 5 matchLabels: app=<label_1> app=<label_2> app=<label_3> orLabelSelectors: 6 - matchLabels: app=<label_1> app=<label_2> app=<label_3>", "oc get backup -n openshift-adp <backup> -o jsonpath='{.status.phase}'", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" driver: <csi_driver> deletionPolicy: Retain", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToRestic: true 1", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11", "oc get backupStorageLocations -n openshift-adp", "NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m", "cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToRestic: true 4 ttl: 720h0m0s EOF", "oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'", "oc delete backup <backup_CR_name> -n 
<velero_namespace>", "velero backup delete <backup_CR_name> -n <velero_namespace>", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3", "oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'", "oc get all -n <namespace> 1", "bash dc-restic-post-restore.sh <restore-name>", "#!/bin/bash set -e if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD=\"sha256sum\" else CHECKSUM_CMD=\"shasum -a 256\" fi label_name () { if [ \"USD{#1}\" -le \"63\" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo \"USD{1:0:57}USD{sha:0:6}\" } OADP_NAMESPACE=USD{OADP_NAMESPACE:=openshift-adp} if [[ USD# -ne 1 ]]; then echo \"usage: USD{BASH_SOURCE} restore-name\" exit 1 fi echo using OADP Namespace USDOADP_NAMESPACE echo restore: USD1 label=USD(label_name USD1) echo label: USDlabel echo Deleting disconnected restore pods delete pods -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{\",\"}{.metadata.name}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-replicas}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-paused}{\"\\n\"}') do IFS=',' read -ra dc_arr <<< \"USDdc\" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-adp type: Opaque stringData: RESTIC_PASSWORD: <secure_restic_password>", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample namespace: openshift-adp spec: backupLocations: - velero: config: profile: default region: us-east-1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: <bucket-prefix> provider: aws configuration: restic: enable: <true_or_false> velero: itemOperationSyncFrequency: \"10s\" defaultPlugins: - openshift - aws - csi - vsm 1 features: dataMover: credentialName: restic-secret enable: true maxConcurrentBackupVolumes: \"3\" 2 maxConcurrentRestoreVolumes: \"3\" 3 pruneInterval: \"14\" 4 volumeOptions: 5 sourceVolumeOptions: accessMode: ReadOnlyMany cacheAccessMode: 
ReadWriteOnce cacheCapacity: 2Gi destinationVolumeOptions: storageClass: other-storageclass-name cacheAccessMode: ReadWriteMany snapshotLocations: - velero: config: profile: default region: us-west-2 provider: aws", "apiVersion: datamover.oadp.openshift.io/v1alpha1 kind: VolumeSnapshotBackup metadata: name: <vsb_name> namespace: <namespace_name> 1 spec: volumeSnapshotContent: name: <snapcontent_name> protectedNamespace: <adp_namespace> 2 resticSecretRef: name: <restic_secret_name>", "apiVersion: datamover.oadp.openshift.io/v1alpha1 kind: VolumeSnapshotRestore metadata: name: <vsr_name> namespace: <namespace_name> 1 spec: protectedNamespace: <protected_ns> 2 resticSecretRef: name: <restic_secret_name> volumeSnapshotMoverBackupRef: sourcePVCData: name: <source_pvc_name> size: <source_pvc_size> resticrepository: <your_restic_repo> volumeSnapshotClassName: <vsclass_name>", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> namespace: <protected_ns> 1 spec: includedNamespaces: - <app_ns> 2 storageLocation: velero-sample-1", "oc get vsb -n <app_ns>", "oc get vsb <vsb_name> -n <app_ns> -o jsonpath=\"{.status.phase}\"", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> namespace: <protected_ns> spec: backupName: <previous_backup_name> restorePVs: true", "oc get vsr -n <app_ns>", "oc get vsr <vsr_name> -n <app_ns> -o jsonpath=\"{.status.phase}\"", "apiVersion: snapshot.storage.k8s.io/v1 deletionPolicy: Retain 1 driver: openshift-storage.cephfs.csi.ceph.com kind: VolumeSnapshotClass metadata: annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 labels: velero.io/csi-volumesnapshot-class: true 3 name: ocs-storagecluster-cephfsplugin-snapclass parameters: clusterID: openshift-storage csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: ocs-storagecluster-cephfs annotations: description: Provides RWO and RWX Filesystem volumes storageclass.kubernetes.io/is-default-class: true 1 provisioner: openshift-storage.cephfs.csi.ceph.com parameters: clusterID: openshift-storage csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage fsName: ocs-storagecluster-cephfilesystem reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate", "apiVersion: snapshot.storage.k8s.io/v1 deletionPolicy: Retain 1 driver: openshift-storage.rbd.csi.ceph.com kind: VolumeSnapshotClass metadata: labels: velero.io/csi-volumesnapshot-class: true 2 name: ocs-storagecluster-rbdplugin-snapclass parameters: clusterID: openshift-storage csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: ocs-storagecluster-ceph-rbd annotations: description: 'Provides RWO Filesystem volumes, and RWO and RWX Block volumes' provisioner: openshift-storage.rbd.csi.ceph.com parameters: csi.storage.k8s.io/fstype: ext4 csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage csi.storage.k8s.io/provisioner-secret-name: 
rook-csi-rbd-provisioner csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner imageFormat: '2' clusterID: openshift-storage imageFeatures: layering csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage pool: ocs-storagecluster-cephblockpool csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: ocs-storagecluster-cephfs-shallow annotations: description: Provides RWO and RWX Filesystem volumes storageclass.kubernetes.io/is-default-class: false provisioner: openshift-storage.cephfs.csi.ceph.com parameters: csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage backingSnapshot: true 1 csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: <namespace> type: Opaque stringData: RESTIC_PASSWORD: <restic_password>", "oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{\"Name: \"}{.metadata.name}{\" \"}{\"Retention Policy: \"}{.deletionPolicy}{\"\\n\"}{end}'", "oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{\"Name: \"}{.metadata.name}{\" \"}{\"labels: \"}{.metadata.labels}{\"\\n\"}{end}'", "oc get storageClass -A -o jsonpath='{range .items[*]}{\"Name: \"}{.metadata.name}{\" \"}{\"annotations: \"}{.metadata.annotations}{\"\\n\"}{end}'", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample namespace: openshift-adp spec: backupLocations: - velero: config: profile: default region: us-east-1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <my_bucket> prefix: velero provider: aws configuration: restic: enable: false 1 velero: defaultPlugins: - openshift - aws - csi - vsm features: dataMover: credentialName: <restic_secret_name> 2 enable: true 3 volumeOptionsForStorageClasses: ocs-storagecluster-cephfs: sourceVolumeOptions: accessMode: ReadOnlyMany cacheAccessMode: ReadWriteMany cacheStorageClassName: ocs-storagecluster-cephfs storageClassName: ocs-storagecluster-cephfs-shallow", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> namespace: <protected_ns> spec: includedNamespaces: - <app_ns> storageLocation: velero-sample-1", "oc get vsb -n <app_ns>", "oc get vsb <vsb_name> -n <app_ns> -ojsonpath=\"{.status.phase}`", "oc delete vsb -n <app_namespace> --all", "oc delete volumesnapshotcontent --all", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> namespace: <protected_ns> spec: backupName: <previous_backup_name>", "oc get vsr -n <app_ns>", "oc get vsr <vsr_name> -n <app_ns> -ojsonpath=\"{.status.phase}", "oc get route <route_name> -n <app_ns> -ojsonpath=\"{.spec.host}\"", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample namespace: openshift-adp spec: backupLocations: - velero: config: profile: default region: us-east-1 credential: 
key: cloud name: cloud-credentials default: true objectStorage: bucket: <my-bucket> prefix: velero provider: aws configuration: restic: enable: false velero: defaultPlugins: - openshift - aws - csi - vsm features: dataMover: credentialName: <restic_secret_name> 1 enable: true volumeOptionsForStorageClasses: 2 ocs-storagecluster-cephfs: sourceVolumeOptions: accessMode: ReadOnlyMany cacheAccessMode: ReadWriteMany cacheStorageClassName: ocs-storagecluster-cephfs storageClassName: ocs-storagecluster-cephfs-shallow ocs-storagecluster-ceph-rbd: sourceVolumeOptions: storageClassName: ocs-storagecluster-ceph-rbd cacheStorageClassName: ocs-storagecluster-ceph-rbd destinationVolumeOptions: storageClassName: ocs-storagecluster-ceph-rbd cacheStorageClassName: ocs-storagecluster-ceph-rbd", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> namespace: <protected_ns> spec: includedNamespaces: - <app_ns> storageLocation: velero-sample-1", "oc get vsb -n <app_ns>", "oc get vsb <vsb_name> -n <app_ns> -ojsonpath=\"{.status.phase}`", "oc delete vsb -n <app_namespace> --all", "oc delete volumesnapshotcontent --all", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> namespace: <protected_ns> spec: backupName: <previous_backup_name>", "oc get vsr -n <app_ns>", "oc get vsr <vsr_name> -n <app_ns> -ojsonpath=\"{.status.phase}", "oc get route <route_name> -n <app_ns> -ojsonpath=\"{.spec.host}\"", "oc delete vsb -n <app_namespace> --all", "oc delete vsr -n <app_namespace> --all", "oc delete volumesnapshotcontent --all", "oc delete vsb -n <app_namespace> --all", "oc delete volumesnapshot -A --all", "oc delete volumesnapshotcontent --all", "oc delete pvc -n <protected_namespace> --all", "oc delete replicationsource -n <protected_namespace> --all", "oc delete vsr -n <app-ns> --all", "oc delete volumesnapshot -A --all", "oc delete volumesnapshotcontent --all", "oc delete replicationdestination -n <protected_namespace> --all", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3", "kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 1 storageLocation: default ttl: 720h0m0s volumeSnapshotLocations: - dpa-sample-1", "oc create -f backup.yaml", "oc get datauploads -A", "NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal", "oc get datauploads <dataupload_name> -o yaml", "apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: \"\" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: \"2023-11-02T16:57:02Z\" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: 
bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: \"2023-11-02T16:56:22Z\"", "apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup>", "oc create -f restore.yaml", "oc get datadownloads -A", "NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal", "oc get datadownloads <datadownload_name> -o yaml", "apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: \"\" pvc: mysql status: completionTimestamp: \"2023-11-02T17:01:24Z\" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: \"2023-11-02T17:00:52Z\"", "alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'", "oc describe <velero_cr> <cr_name>", "oc logs pod/<velero>", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi", "requests: cpu: 500m memory: 128Mi", "velero restore <restore_name> --from-backup=<backup_name> --include-resources service.serving.knavtive.dev", "oc get mutatingwebhookconfigurations", "[default] 1 aws_access_key_id=AKIAIOSFODNN7EXAMPLE 2 aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "oc get backupstoragelocation -A", "velero backup-location get -n <OADP_Operator_namespace>", "oc get backupstoragelocation -n <namespace> -o yaml", "apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: \"2023-11-03T19:49:04Z\" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: \"24273698\" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: \"true\" region: us-west-2 credential: key: credentials name: cloud-credentials default: 
true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: \"2023-11-10T22:06:46Z\" message: \"BackupStorageLocation \\\"example-dpa-1\\\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\\n\\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54\" phase: Unavailable kind: List metadata: resourceVersion: \"\"", "level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\"", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: restic: timeout: 1h", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h", "oc -n {namespace} exec deployment/velero -c velero -- ./velero backup describe <backup>", "oc delete backup <backup> -n openshift-adp", "time=\"2023-02-17T16:33:13Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/user1-backup-check5 error=\"error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label\" logSource=\"/remote-source/velero/app/pkg/backup/backup.go:417\" name=busybox-79799557b5-vprq", "oc delete backup <backup> -n openshift-adp", "oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true", "spec: configuration: restic: enable: true supplementalGroups: - <group_id> 1", "oc delete resticrepository openshift-adp <name_of_the_restic_repository>", "time=\"2021-12-29T18:29:14Z\" level=info msg=\"1 errors encountered backup up item\" backup=velero/backup65 logSource=\"pkg/backup/backup.go:431\" name=mysql-7d99fc949-qbkds time=\"2021-12-29T18:29:14Z\" level=error msg=\"Error backing up item\" backup=velero/backup65 error=\"pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\\nIs there a repository at the following location?\\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \\n: exit status 1\" error.file=\"/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184\" error.function=\"github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes\" logSource=\"pkg/backup/backup.go:435\" name=mysql-7d99fc949-qbkds", "oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.1", "oc adm 
must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.1 -- /usr/bin/gather_metrics_dump", "oc adm must-gather --image=brew.registry.redhat.io/rh-osbs/oadp-oadp-mustgather-rhel8:1.1.1-8 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata:", "oc get pods -n openshift-user-workload-monitoring", "NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s", "oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring", "Error from server (NotFound): configmaps \"user-workload-monitoring-config\" not found", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |", "oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created", "oc get svc -n openshift-adp -l app.kubernetes.io/name=velero", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: \"velero\"", "oc apply -f 3_create_oadp_service_monitor.yaml", "servicemonitor.monitoring.coreos.com/oadp-service-monitor created", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job=\"openshift-adp-velero-metrics-svc\"}[2h]) > 0 for: 5m labels: severity: warning", "oc apply -f 4_create_oadp_alert_rule.yaml", "prometheusrule.monitoring.coreos.com/sample-oadp-alert created", "oc api-resources", "apiVersion: oadp.openshift.io/vialpha1 kind: DataProtectionApplication spec: configuration: velero: featureFlags: - EnableAPIGroupVersions", "oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>", "oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>", "velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/backup_and_restore/oadp-application-backup-and-restore
Chapter 4. File Systems
Chapter 4. File Systems
Support of XFS File System
The default file system for an Anaconda-based installation of Red Hat Enterprise Linux 7 is now XFS, which replaces the Fourth Extended Filesystem (ext4) used by default in Red Hat Enterprise Linux 6. The ext4, ext3, and ext2 file systems can be used as alternatives to XFS. XFS is a highly scalable, high-performance file system which was originally designed at Silicon Graphics, Inc. It was created to support file systems up to 16 exabytes (approximately 16 million terabytes), files up to 8 exabytes (approximately 8 million terabytes), and directory structures containing tens of millions of entries. XFS supports metadata journaling, which facilitates quicker crash recovery. An XFS file system can also be defragmented and expanded while mounted and active. Note that it is not possible to shrink an XFS file system. For information about changes between commands used for common tasks in ext4 and XFS, see the Reference Table in the Installation Guide.
Support of Btrfs File System
The Btrfs (B-Tree) file system is supported as a Technology Preview in Red Hat Enterprise Linux 7. This file system offers advanced management, reliability, and scalability features. It enables users to create snapshots, and it allows for compression and integrated device management. For more information about the Btrfs Technology Preview, see the Storage Administration Guide.
Fast Block Devices Caching Slower Block Devices
LVM provides the ability to have fast block devices act as a cache for slower block devices. This feature is introduced as a Technology Preview in Red Hat Enterprise Linux 7 and allows a PCIe SSD device to act as a cache for direct-attached storage (DAS) or storage area network (SAN) storage, which improves file system performance. For more information, refer to the LVM Cache entry in Chapter 3, Storage, and the lvm(8) manual page.
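The commands below are a minimal sketch of the online-growth and LVM cache capabilities described above; they are not taken from this document, and the volume group, logical volume, device, and mount point names (vg_data, lv_data, /dev/nvme0n1, /data) are hypothetical placeholders.

# Hypothetical device and volume names; substitute your own.
# Create an XFS file system on an existing logical volume and mount it.
mkfs.xfs /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /data

# XFS can be defragmented and grown while mounted; after extending the
# underlying logical volume, grow the file system online.
xfs_fsr /data
lvextend -L +10G /dev/vg_data/lv_data
xfs_growfs /data

# LVM cache (Technology Preview): use a fast SSD physical volume as a
# cache for the slower logical volume.
pvcreate /dev/nvme0n1
vgextend vg_data /dev/nvme0n1
lvcreate --type cache-pool -L 20G -n fast_cache vg_data /dev/nvme0n1
lvconvert --type cache --cachepool vg_data/fast_cache vg_data/lv_data

Note that xfs_growfs operates on the mount point of a mounted file system, so no unmount step is needed for the online expansion shown here.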
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-file_systems
Chapter 80. zone
Chapter 80. zone This chapter describes the commands under the zone command. 80.1. zone abandon Abandon a zone Usage: Table 80.1. Positional arguments Value Summary id Zone id Table 80.2. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 80.2. zone axfr AXFR a zone Usage: Table 80.3. Positional arguments Value Summary id Zone id Table 80.4. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 80.3. zone blacklist create Create new blacklist Usage: Table 80.5. Command arguments Value Summary -h, --help Show this help message and exit --pattern PATTERN Blacklist pattern --description DESCRIPTION Description --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.6. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.7. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.8. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.9. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.4. zone blacklist delete Delete blacklist Usage: Table 80.10. Positional arguments Value Summary id Blacklist id Table 80.11. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 80.5. zone blacklist list List blacklists Usage: Table 80.12. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.13. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 80.14. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 80.15. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.16. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.6. zone blacklist set Set blacklist properties Usage: Table 80.17. Positional arguments Value Summary id Blacklist id Table 80.18. Command arguments Value Summary -h, --help Show this help message and exit --pattern PATTERN Blacklist pattern --description DESCRIPTION Description --no-description- all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.19. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.21. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.22. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.7. zone blacklist show Show blacklist details Usage: Table 80.23. Positional arguments Value Summary id Blacklist id Table 80.24. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.25. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.26. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.27. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.28. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.8. zone create Create new zone Usage: Table 80.29. Positional arguments Value Summary name Zone name Table 80.30. Command arguments Value Summary -h, --help Show this help message and exit --email EMAIL Zone email --type {PRIMARY,SECONDARY} Zone type --ttl TTL Time to live (seconds) --description DESCRIPTION Description --masters MASTERS [MASTERS ... ] Zone masters --attributes ATTRIBUTES [ATTRIBUTES ... ] Zone attributes --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.31. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.33. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.9. zone delete Delete zone Usage: Table 80.35. Positional arguments Value Summary id Zone id Table 80.36. Command arguments Value Summary -h, --help Show this help message and exit --delete-shares Delete existing zone shares. default: false --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --hard-delete Delete zone along-with backend zone resources (i.e. files). Default: False Table 80.37. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.38. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.39. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.40. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.10. zone export create Export a Zone Usage: Table 80.41. Positional arguments Value Summary zone_id Zone id Table 80.42. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.43. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.44. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.45. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.46. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.11. 
zone export delete Delete a Zone Export Usage: Table 80.47. Positional arguments Value Summary zone_export_id Zone export id Table 80.48. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 80.12. zone export list List Zone Exports Usage: Table 80.49. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.50. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 80.51. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 80.52. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.53. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.13. zone export show Show a Zone Export Usage: Table 80.54. Positional arguments Value Summary zone_export_id Zone export id Table 80.55. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.56. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.57. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.58. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.59. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.14. zone export showfile Show the zone file for the Zone Export Usage: Table 80.60. Positional arguments Value Summary zone_export_id Zone export id Table 80.61. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.62. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.63. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.64. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.65. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.15. zone import create Import a Zone from a file on the filesystem Usage: Table 80.66. Positional arguments Value Summary zone_file_path Path to a zone file Table 80.67. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.68. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.69. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.70. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.71. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.16. zone import delete Delete a Zone Import Usage: Table 80.72. Positional arguments Value Summary zone_import_id Zone import id Table 80.73. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 80.17. zone import list List Zone Imports Usage: Table 80.74. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.75. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 80.76. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 80.77. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.78. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.18. zone import show Show a Zone Import Usage: Table 80.79. Positional arguments Value Summary zone_import_id Zone import id Table 80.80. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.81. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.82. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.83. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.84. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.19. zone list List zones Usage: Table 80.85. Command arguments Value Summary -h, --help Show this help message and exit --name NAME Zone name --email EMAIL Zone email --type {PRIMARY,SECONDARY} Zone type --ttl TTL Time to live (seconds) --description DESCRIPTION Description --status STATUS Zone status --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.86. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 80.87. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 80.88. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.89. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.20. zone set Set zone properties Usage: Table 80.90. Positional arguments Value Summary id Zone id Table 80.91. 
Command arguments Value Summary -h, --help Show this help message and exit --email EMAIL Zone email --ttl TTL Time to live (seconds) --description DESCRIPTION Description --no-description- masters MASTERS [MASTERS ... ] Zone masters --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.92. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.93. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.94. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.95. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.21. zone share create Share a Zone Usage: Table 80.96. Positional arguments Value Summary zone The zone name or id to share. target_project_id Target project id to share the zone with. Table 80.97. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.98. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.99. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.100. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.101. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.22. zone share delete Delete a Zone Share Usage: Table 80.102. Positional arguments Value Summary zone The zone name or id to share. shared_zone_id The zone share id to delete. Table 80.103. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 80.23. zone share list List Zone Shares Usage: Table 80.104. Positional arguments Value Summary zone The zone name or id to share. Table 80.105. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --target-project-id TARGET_PROJECT_ID The target project id to filter on. Table 80.106. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 80.107. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 80.108. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.109. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.24. zone share show Show Zone Share Details Usage: Table 80.110. Positional arguments Value Summary zone The zone name or id to share. shared_zone_id The zone share id to show. Table 80.111. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.112. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.113. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.114. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.115. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.25. zone show Show zone details Usage: Table 80.116. Positional arguments Value Summary id Zone id Table 80.117. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.118. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.119. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.120. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.121. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.26. zone transfer accept list List Zone Transfer Accepts Usage: Table 80.122. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.123. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 80.124. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 80.125. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.126. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.27. zone transfer accept request Accept a Zone Transfer Request Usage: Table 80.127. Command arguments Value Summary -h, --help Show this help message and exit --transfer-id TRANSFER_ID Transfer id --key KEY Transfer key --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.128. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.129. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.130. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.131. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.28. zone transfer accept show Show Zone Transfer Accept Usage: Table 80.132. Positional arguments Value Summary id Zone tranfer accept id Table 80.133. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.134. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.135. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.136. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.137. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.29. zone transfer request create Create new zone transfer request Usage: Table 80.138. Positional arguments Value Summary zone_id Zone id to transfer. Table 80.139. Command arguments Value Summary -h, --help Show this help message and exit --target-project-id TARGET_PROJECT_ID Target project id to transfer to. --description DESCRIPTION Description --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.140. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.141. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.142. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.143. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.30. zone transfer request delete Delete a Zone Transfer Request Usage: Table 80.144. Positional arguments Value Summary id Zone transfer request id Table 80.145. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 80.31. zone transfer request list List Zone Transfer Requests Usage: Table 80.146. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.147. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 80.148. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 80.149. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.150. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.32. zone transfer request set Set a Zone Transfer Request Usage: Table 80.151. Positional arguments Value Summary id Zone transfer request id Table 80.152. Command arguments Value Summary -h, --help Show this help message and exit --description DESCRIPTION Description --no-description- target-project-id TARGET_PROJECT_ID Target project id to transfer to. --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.153. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.154. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.155. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.156. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.33. zone transfer request show Show Zone Transfer Request Details Usage: Table 80.157. Positional arguments Value Summary id Zone tranfer request id Table 80.158. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 80.159. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 80.160. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.161. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.162. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
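For illustration, a minimal zone-sharing workflow built from the commands above might look like the following sketch; the zone name, target project ID, and share ID are placeholders rather than values taken from this reference:
# Share the zone with another project, then inspect and remove the share.
openstack zone share create example.com. <target_project_id>
openstack zone share list example.com.
openstack zone share show example.com. <shared_zone_id>
openstack zone share delete example.com. <shared_zone_id>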
[ "openstack zone abandon [-h] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id", "openstack zone axfr [-h] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id", "openstack zone blacklist create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --pattern PATTERN [--description DESCRIPTION] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]", "openstack zone blacklist delete [-h] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id", "openstack zone blacklist list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]", "openstack zone blacklist set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--pattern PATTERN] [--description DESCRIPTION | --no-description] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id", "openstack zone blacklist show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id", "openstack zone create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--email EMAIL] [--type {PRIMARY,SECONDARY}] [--ttl TTL] [--description DESCRIPTION] [--masters MASTERS [MASTERS ...]] [--attributes ATTRIBUTES [ATTRIBUTES ...]] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] name", "openstack zone delete [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--delete-shares] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--hard-delete] id", "openstack zone export create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone_id", "openstack zone export delete [-h] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone_export_id", "openstack zone export list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]", "openstack zone export show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone_export_id", "openstack zone export showfile [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone_export_id", "openstack zone import create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone_file_path", "openstack zone import delete [-h] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone_import_id", "openstack zone import list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote 
{all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]", "openstack zone import show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone_import_id", "openstack zone list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name NAME] [--email EMAIL] [--type {PRIMARY,SECONDARY}] [--ttl TTL] [--description DESCRIPTION] [--status STATUS] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]", "openstack zone set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--email EMAIL] [--ttl TTL] [--description DESCRIPTION | --no-description] [--masters MASTERS [MASTERS ...]] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id", "openstack zone share create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone target_project_id", "openstack zone share delete [-h] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone shared_zone_id", "openstack zone share list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--target-project-id TARGET_PROJECT_ID] zone", "openstack zone share show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone shared_zone_id", "openstack zone show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id", "openstack zone transfer accept list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]", "openstack zone transfer accept request [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --transfer-id TRANSFER_ID --key KEY [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]", "openstack zone transfer accept show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id", "openstack zone transfer request create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--target-project-id TARGET_PROJECT_ID] [--description DESCRIPTION] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone_id", "openstack zone transfer request delete [-h] [--all-projects] 
[--sudo-project-id SUDO_PROJECT_ID] id", "openstack zone transfer request list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]", "openstack zone transfer request set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description DESCRIPTION | --no-description] [--target-project-id TARGET_PROJECT_ID] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id", "openstack zone transfer request show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id" ]
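As a hedged sketch of how the zone transfer commands fit together, the sequence below offers a zone to another project and accepts it there; the zone ID, transfer ID, key, and accept ID are placeholders, and the real transfer ID and key are normally printed in the output of the create command:
# In the source project: offer the zone for transfer.
openstack zone transfer request create --target-project-id <target_project_id> <zone_id>
# In the target project: accept the transfer using the ID and key from the request.
openstack zone transfer accept request --transfer-id <transfer_id> --key <key>
openstack zone transfer accept show <accept_id>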
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/zone
Chapter 9. Federal Standards and Regulations
Chapter 9. Federal Standards and Regulations To maintain security levels, your organization can work to comply with federal and industry security specifications, standards, and regulations. This chapter describes some of these standards and regulations. 9.1. Federal Information Processing Standard (FIPS) The Federal Information Processing Standard (FIPS) Publication 140-2 is a computer security standard developed by a U.S. government and industry working group to validate the quality of cryptographic modules. See the official FIPS publications at the NIST Computer Security Resource Center. The FIPS 140-2 standard ensures that cryptographic tools implement their algorithms properly. See the full FIPS 140-2 standard at http://dx.doi.org/10.6028/NIST.FIPS.140-2 for further details on the security levels and the other specifications of the FIPS standard. To learn about compliance requirements, see the Red Hat Government Standards page. 9.1.1. Enabling FIPS Mode To make Red Hat Enterprise Linux compliant with the Federal Information Processing Standard (FIPS) Publication 140-2, you need to make several changes to ensure that accredited cryptographic modules are used. You can enable FIPS mode either during system installation or after it. During the System Installation To fulfill strict FIPS 140-2 compliance, add the fips=1 kernel option to the kernel command line during system installation. With this option, all key generation is done with FIPS-approved algorithms and continuous monitoring tests in place. After the installation, the system is configured to boot into FIPS mode automatically. Important Ensure that the system has plenty of entropy during the installation process by moving the mouse around or by pressing many keystrokes. The recommended number of keystrokes is 256 or more. Fewer than 256 keystrokes might generate a non-unique key. After the System Installation To turn the kernel space and user space of your system into FIPS mode after installation, follow these steps: Install the dracut-fips package: For CPUs with AES New Instructions (AES-NI) support, also install the dracut-fips-aesni package: Regenerate the initramfs file: To enable the in-module integrity verification and to have all required modules present during the kernel boot, the initramfs file has to be regenerated. Warning This operation overwrites the existing initramfs file. Modify the boot loader configuration. To boot into FIPS mode, add the fips=1 option to the kernel command line of the boot loader. If the /boot directory resides on a separate partition, also add the boot=<partition> parameter (where <partition> stands for /boot) to the kernel command line. To identify the boot partition, enter the following command: To ensure that the boot= configuration option works even if the device naming changes between boots, identify the universally unique identifier (UUID) of the partition by running the following command: Append the UUID to the kernel command line: Depending on your boot loader, make the following changes: GRUB 2 Add the fips=1 and boot=<partition of /boot> options to the GRUB_CMDLINE_LINUX key in the /etc/default/grub file.
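As a hedged illustration only (the UUID is the sample value used elsewhere in this chapter and the existing options are a placeholder), the resulting line in /etc/default/grub might look like this:
GRUB_CMDLINE_LINUX="<existing options> boot=UUID=05c000f1-f899-467b-a4d9-d5ca4424c797 fips=1"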
To apply the changes to /etc/default/grub, rebuild the grub.cfg file as follows: On BIOS-based machines, enter the following command as root: On UEFI-based machines, enter the following command as root: zipl (on the IBM Z Systems architecture only) Add the fips=1 and boot=<partition of /boot> options to the kernel command line in /etc/zipl.conf and apply the changes by entering: Make sure prelinking is disabled. For proper operation of the in-module integrity verification, prelinking of libraries and binaries has to be disabled. Prelinking is done by the prelink package, which is not installed by default. Unless prelink has been installed, this step is not needed. To disable prelinking, set the PRELINKING=no option in the /etc/sysconfig/prelink configuration file. To disable existing prelinking on all system files, use the prelink -u -a command. Reboot your system. Enabling FIPS Mode in a Container A container can be switched to FIPS 140-2 mode if the host is also running in FIPS 140-2 mode and one of the following requirements is met: The dracut-fips package is installed in the container. The /etc/system-fips file is mounted into the container from the host.
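After the reboot, one common way to confirm that the kernel is running in FIPS mode is to read the procfs flag; this check is a convention rather than part of the procedure above, and a value of 1 indicates that FIPS mode is active:
~]$ cat /proc/sys/crypto/fips_enabled
1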
[ "~]# yum install dracut-fips", "~]# yum install dracut-fips-aesni", "~]# dracut -v -f", "~]USD df /boot Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda1 495844 53780 416464 12% /boot", "~]USD blkid /dev/sda1 /dev/sda1: UUID=\"05c000f1-f899-467b-a4d9-d5ca4424c797\" TYPE=\"ext4\"", "boot=UUID= 05c000f1-f899-467b-a4d9-d5ca4424c797", "~]# grub2-mkconfig -o /etc/grub2.cfg", "~]# grub2-mkconfig -o /etc/grub2-efi.cfg", "~]# zipl" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/chap-federal_standards_and_regulations
8.2. Defining Compliance Policy
8.2. Defining Compliance Policy The security or compliance policy is rarely written from scratch. The ISO 27000 standard series, derivative works, and other sources provide security policy templates and practice recommendations that are a helpful starting point. However, organizations building their information security program need to amend the policy templates to align with their needs. The policy template should be chosen on the basis of its relevance to the company environment, and the template then has to be adjusted because either the template contains built-in assumptions which cannot be applied to the organization, or the template explicitly requires that certain decisions be made. Red Hat Enterprise Linux auditing capabilities are based on the Security Content Automation Protocol (SCAP) standard. SCAP is a synthesis of interoperable specifications that standardize the format and nomenclature by which software flaw and security configuration information is communicated, both to machines and humans. SCAP is a multi-purpose framework of specifications that supports automated configuration, vulnerability and patch checking, technical control compliance activities, and security measurement. In other words, SCAP is a vendor-neutral way of expressing security policy, and as such it is widely used in modern enterprises. SCAP specifications create an ecosystem where the format of security content is well known and standardized while the implementation of the scanner or policy editor is not mandated. Such a status enables organizations to build their security policy (SCAP content) once, no matter how many security vendors they employ. The latest version of SCAP includes several underlying standards. These components are organized into groups according to their function within SCAP as follows: SCAP Components Languages - This group consists of SCAP languages that define standard vocabularies and conventions for expressing compliance policy. The eXtensible Configuration Checklist Description Format (XCCDF) - A language designed to express, organize, and manage security guidance. Open Vulnerability and Assessment Language (OVAL) - A language developed to perform logical assertions about the state of the scanned system. Open Checklist Interactive Language (OCIL) - A language designed to provide a standard way to query users and interpret user responses to the given questions. Asset Identification (AI) - A language developed to provide a data model, methods, and guidance for identifying security assets. Asset Reporting Format (ARF) - A language designed to express the transport format of information about collected security assets and the relationship between assets and security reports. Enumerations - This group includes SCAP standards that define a naming format and an official list or dictionary of items from certain security-related areas of interest. Common Configuration Enumeration (CCE) - An enumeration of security-relevant configuration elements for applications and operating systems. Common Platform Enumeration (CPE) - A structured naming scheme used to identify information technology (IT) systems, platforms, and software packages. Common Vulnerabilities and Exposures (CVE) - A reference method to a collection of publicly known software vulnerabilities and exposures. Metrics - This group comprises frameworks to identify and evaluate security risks.
Common Configuration Scoring System (CCSS) - A metric system to evaluate security-relevant configuration elements and assign them scores in order to help users to prioritize appropriate response steps. Common Vulnerability Scoring System (CVSS) - A metric system to evaluate software vulnerabilities and assign them scores in order to help users prioritize their security risks. Integrity - An SCAP specification to maintain integrity of SCAP content and scan results. Trust Model for Security Automation Data (TMSAD) - A set of recommendations explaining usage of existing specification to represent signatures, hashes, key information, and identity information in context of an XML file within a security automation domain. Each of the SCAP components has its own XML-based document format and its XML name space. A compliance policy expressed in SCAP can either take a form of a single OVAL definition XML file, data stream file, single zip archive, or a set of separate XML files containing an XCCDF file that represents a policy checklist. 8.2.1. The XCCDF File Format The XCCDF language is designed to support information interchange, document generation, organizational and situational tailoring, automated compliance testing, and compliance scoring. The language is mostly descriptive and does not contain any commands to perform security scans. However, an XCCDF document can refer to other SCAP components, and as such it can be used to craft a compliance policy that is portable among all the target platforms with the exception of the related assessment documents (OVAL, OCIL). The common way to represent a compliance policy is a set of XML files where one of the files is an XCCDF checklist. This XCCDF file usually points to the assessment resources, multiple OVAL, OCIL and the Script Check Engine (SCE) files. Furthermore, the file set can contain a CPE dictionary file and an OVAL file defining objects for this dictionary. Being an XML-based language, the XCCDF defines and uses a vast selection of XML elements and attributes. The following list briefly introduces the main XCCDF elements; for more details about XCCDF, consult the NIST Interagency Report 7275 Revision 4 . Main XML Elements of the XCCDF Document <xccdf:Benchmark> - This is a root element that encloses the whole XCCDF document. It may also contain checklist metadata, such as a title, description, list of authors, date of the latest modification, and status of the checklist acceptance. <xccdf:Rule> - This is a key element that represents a checklist requirement and holds its description. It may contain child elements that define actions verifying or enforcing compliance with the given rule or modify the rule itself. <xccdf:Value> - This key element is used for expressing properties of other XCCDF elements within the benchmark. <xccdf:Group> - This element is used to organize an XCCDF document to structures with the same context or requirement domains by gathering the <xccdf:Rule> , <xccdf:Value> , and <xccdf:Group> elements. <xccdf:Profile> - This element serves for a named tailoring of the XCCDF benchmark. It allows the benchmark to hold several different tailorings. <xccdf:Profile> utilizes several selector elements, such as <xccdf:select> or <xccdf:refine-rule> , to determine which elements are going to be modified and processed while it is in effect. <xccdf:Tailoring> - This element allows defining the benchmark profiles outside the benchmark, which is sometimes desirable for manual tailoring of the compliance policy. 
<xccdf:TestResult> - This element serves for keeping the scan results for the given benchmark on the target system. Each <xccdf:TestResult> should refer to the profile that was used to define the compliance policy for the particular scan and it should also contain important information about the target system that is relevant for the scan. <xccdf:rule-result> - This is a child element of <xccdf:TestResult> that is used to hold the result of applying a specific rule from the benchmark to the target system. <xccdf:fix> - This is a child element of <xccdf:Rule> that serves for remediation of the target system that is not compliant with the given rule. It can contain a command or script that is run on the target system in order to bring the system into compliance the rule. <xccdf:check> - This is a child element of <xccdf:Rule> that refers to an external source which defines how to evaluate the given rule. <xccdf:select> - This is a selector element that is used for including or excluding the chosen rules or groups of rules from the policy. <xccdf:set-value> - This is a selector element that is used for overwriting the current value of the specified <xccdf:Value> element without modifying any of its other properties. <xccdf:refine-value> - This is a selector element that is used for specifying constraints of the particular <xccdf:Value> element during policy tailoring. <xccdf:refine-rule> - This selector element allows overwriting properties of the selected rules. Example 8.1. An Example of an XCCDF Document <?xml version="1.0" encoding="UTF-8"?> <Benchmark xmlns="http://checklists.nist.gov/xccdf/1.2" id="xccdf_com.example.www_benchmark_test"> <status>incomplete</status> <version>0.1</version> <Profile id="xccdf_com.example.www_profile_1"> <title>Profile title is compulsory</title> <select idref="xccdf_com.example.www_group_1" selected="true"/> <select idref="xccdf_com.example.www_rule_1" selected="true"/> <refine-value idref="xccdf_com.example.www_value_1" selector="telnet service"/> </Profile> <Group id="xccdf_com.example.www_group_1"> <Value id="xccdf_com.example.www_value_1"> <value selector="telnet_service">telnet-server</value> <value selector="dhcp_servide">dhcpd</value> <value selector="ftp_service">tftpd</value> </Value> <Rule id="xccdf_com.example.www_rule_1"> <title>The telnet-server Package Shall Not Be Installed </title> <rationale> Removing the telnet-server package decreases the risk of the telnet service's accidental (or intentional) activation </rationale> <fix platform="cpe:/o:redhat:enterprise_linux:6" reboot="false" disruption="low" system="urn:xccdf:fix:script:sh"> yum -y remove <sub idref="xccdf_com.example.www_value_1"/> </fix> <check system="http://oval.mitre.org/XMLSchema/oval-definitions-5"> <check-export value-id="xccdf_com.example.www_value_1" export-name="oval:com.example.www:var:1"/> <check-content-ref href="examplary.oval.xml" name="oval:com.example.www:def:1"/> </check> <check system="http://open-scap.org/page/SCE"> <check-import import-name="stdout"/> <check-content-ref href="telnet_server.sh"/> </check> </Rule> </Group> </Benchmark>
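As an illustration of how such a checklist is typically consumed, the following hedged example evaluates the profile defined above with the OpenSCAP scanner; the oscap tool and the file name example-xccdf.xml are assumptions, since any SCAP-capable scanner can process the content:
~]$ oscap xccdf eval --profile xccdf_com.example.www_profile_1 --results results.xml --report report.html example-xccdf.xml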
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <Benchmark xmlns=\"http://checklists.nist.gov/xccdf/1.2\" id=\"xccdf_com.example.www_benchmark_test\"> <status>incomplete</status> <version>0.1</version> <Profile id=\"xccdf_com.example.www_profile_1\"> <title>Profile title is compulsory</title> <select idref=\"xccdf_com.example.www_group_1\" selected=\"true\"/> <select idref=\"xccdf_com.example.www_rule_1\" selected=\"true\"/> <refine-value idref=\"xccdf_com.example.www_value_1\" selector=\"telnet service\"/> </Profile> <Group id=\"xccdf_com.example.www_group_1\"> <Value id=\"xccdf_com.example.www_value_1\"> <value selector=\"telnet_service\">telnet-server</value> <value selector=\"dhcp_servide\">dhcpd</value> <value selector=\"ftp_service\">tftpd</value> </Value> <Rule id=\"xccdf_com.example.www_rule_1\"> <title>The telnet-server Package Shall Not Be Installed </title> <rationale> Removing the telnet-server package decreases the risk of the telnet service's accidental (or intentional) activation </rationale> <fix platform=\"cpe:/o:redhat:enterprise_linux:6\" reboot=\"false\" disruption=\"low\" system=\"urn:xccdf:fix:script:sh\"> yum -y remove <sub idref=\"xccdf_com.example.www_value_1\"/> </fix> <check system=\"http://oval.mitre.org/XMLSchema/oval-definitions-5\"> <check-export value-id=\"xccdf_com.example.www_value_1\" export-name=\"oval:com.example.www:var:1\"/> <check-content-ref href=\"examplary.oval.xml\" name=\"oval:com.example.www:def:1\"/> </check> <check system=\"http://open-scap.org/page/SCE\"> <check-import import-name=\"stdout\"/> <check-content-ref href=\"telnet_server.sh\"/> </check> </Rule> </Group> </Benchmark>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Defining_Compliance_Policy
Architecture
Architecture Red Hat OpenShift Service on AWS 4 Architecture overview. Red Hat OpenShift Documentation Team
[ "apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/architecture/index
function::kernel_string
function::kernel_string Name function::kernel_string - Retrieves a string from kernel memory. Synopsis Arguments addr The kernel address to retrieve the string from. General Syntax kernel_string:string(addr:long) Description This function returns the null-terminated C string from a given kernel memory address. It reports an error on a string copy fault.
[ "function kernel_string:string(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-kernel-string
Preface
Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. This guide describes how to develop Ansible automation content and how to use it to run automation jobs from Red Hat Ansible Automation Platform. This document has been updated to include information for the latest release of Ansible Automation Platform.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/developing_ansible_automation_content/pr01
Chapter 20. DelegatedRegistryConfigService
Chapter 20. DelegatedRegistryConfigService 20.1. GetClusters GET /v1/delegatedregistryconfig/clusters GetClusters returns the list of clusters (id + name) and a flag indicating whether or not the cluster is valid for use in the delegated registry config 20.1.1. Description 20.1.2. Parameters 20.1.3. Return Type V1DelegatedRegistryClustersResponse 20.1.4. Content Type application/json 20.1.5. Responses Table 20.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1DelegatedRegistryClustersResponse 0 An unexpected error response. GooglerpcStatus 20.1.6. Samples 20.1.7. Common object reference 20.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 20.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 20.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 20.1.7.3. V1DelegatedRegistryCluster Field Name Required Nullable Type Description Format id String name String isValid Boolean 20.1.7.4. 
V1DelegatedRegistryClustersResponse Field Name Required Nullable Type Description Format clusters List of V1DelegatedRegistryCluster 20.2. GetConfig GET /v1/delegatedregistryconfig GetConfig returns the current delegated registry configuration 20.2.1. Description 20.2.2. Parameters 20.2.3. Return Type V1DelegatedRegistryConfig 20.2.4. Content Type application/json 20.2.5. Responses Table 20.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1DelegatedRegistryConfig 0 An unexpected error response. GooglerpcStatus 20.2.6. Samples 20.2.7. Common object reference 20.2.7.1. DelegatedRegistryConfigDelegatedRegistry Field Name Required Nullable Type Description Format path String clusterId String 20.2.7.2. DelegatedRegistryConfigEnabledFor NONE: Scan all images via central services except for images from the OCP integrated registry - ALL: Scan all images via the secured clusters - SPECIFIC: Scan images that match registries or are from the OCP integrated registry via the secured clusters otherwise scan via central services Enum Values NONE ALL SPECIFIC 20.2.7.3. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 20.2.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 20.2.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 20.2.7.5. V1DelegatedRegistryConfig DelegatedRegistryConfig determines if and where scan requests are delegated to, such as kept in central services or sent to particular secured clusters. Field Name Required Nullable Type Description Format enabledFor DelegatedRegistryConfigEnabledFor NONE, ALL, SPECIFIC, defaultClusterId String registries List of DelegatedRegistryConfigDelegatedRegistry If enabled for is NONE registries has no effect. If ALL registries directs ad-hoc requests to the specified secured clusters if the path matches. If SPECIFIC registries directs ad-hoc requests to the specified secured clusters just like with ALL , but in addition images that match the specified paths will be scanned locally by the secured clusters (images from the OCP integrated registry are always scanned locally). Images that do not match a path will be scanned via central services 20.3. UpdateConfig PUT /v1/delegatedregistryconfig UpdateConfig updates the stored delegated registry configuration 20.3.1. Description 20.3.2. Parameters 20.3.2.1. Body Parameter Name Description Required Default Pattern body DelegatedRegistryConfig determines if and where scan requests are delegated to, such as kept in central services or sent to particular secured clusters. V1DelegatedRegistryConfig X 20.3.3. Return Type V1DelegatedRegistryConfig 20.3.4. Content Type application/json 20.3.5. Responses Table 20.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1DelegatedRegistryConfig 0 An unexpected error response. GooglerpcStatus 20.3.6. Samples 20.3.7. Common object reference 20.3.7.1. DelegatedRegistryConfigDelegatedRegistry Field Name Required Nullable Type Description Format path String clusterId String 20.3.7.2. DelegatedRegistryConfigEnabledFor NONE: Scan all images via central services except for images from the OCP integrated registry - ALL: Scan all images via the secured clusters - SPECIFIC: Scan images that match registries or are from the OCP integrated registry via the secured clusters otherwise scan via central services Enum Values NONE ALL SPECIFIC 20.3.7.3. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 20.3.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 20.3.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 20.3.7.5. V1DelegatedRegistryConfig DelegatedRegistryConfig determines if and where scan requests are delegated to, such as kept in central services or sent to particular secured clusters. Field Name Required Nullable Type Description Format enabledFor DelegatedRegistryConfigEnabledFor NONE, ALL, SPECIFIC, defaultClusterId String registries List of DelegatedRegistryConfigDelegatedRegistry If enabled for is NONE registries has no effect. If ALL registries directs ad-hoc requests to the specified secured clusters if the path matches. If SPECIFIC registries directs ad-hoc requests to the specified secured clusters just like with ALL , but in addition images that match the specified paths will be scanned locally by the secured clusters (images from the OCP integrated registry are always scanned locally). Images that do not match a path will be scanned via central services
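The Samples sections above are left empty in the generated reference, so the following is only an illustrative sketch of exercising these endpoints with curl. The host name central.example.com, the <api_token> value, and the <cluster_id> value are placeholders, and the request body simply reuses the field names documented for V1DelegatedRegistryConfig:

curl -sk -H "Authorization: Bearer <api_token>" https://central.example.com/v1/delegatedregistryconfig

curl -sk -X PUT -H "Authorization: Bearer <api_token>" -H "Content-Type: application/json" -d '{"enabledFor": "SPECIFIC", "defaultClusterId": "<cluster_id>", "registries": [{"path": "registry.example.com/project", "clusterId": "<cluster_id>"}]}' https://central.example.com/v1/delegatedregistryconfig

The first request returns the currently stored configuration as JSON; the second replaces it so that ad-hoc requests for images under registry.example.com/project are delegated to the given secured cluster.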
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/delegatedregistryconfigservice
Appendix A. Ceph subsystems default logging level values
Appendix A. Ceph subsystems default logging level values A table of the default logging level values for the various Ceph subsystems.
Subsystem        Log Level   Memory Level
asok             1           5
auth             1           5
buffer           0           0
client           0           5
context          0           5
crush            1           5
default          0           5
filer            0           5
bluestore        1           5
finisher         1           5
heartbeatmap     1           5
javaclient       1           5
journaler        0           5
journal          1           5
lockdep          0           5
mds balancer     1           5
mds locker       1           5
mds log expire   1           5
mds log          1           5
mds migrator     1           5
mds              1           5
monc             0           5
mon              1           5
ms               0           5
objclass         0           5
objectcacher     0           5
objecter         0           0
optracker        0           5
osd              0           5
paxos            0           5
perfcounter      1           5
rados            0           5
rbd              0           5
rgw              1           5
throttle         1           5
timer            0           5
tp               0           5
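The values above are the Log Level/Memory Level pairs the daemons start with (for example, debug_osd defaults to 0/5). As a minimal sketch of checking and overriding one of these defaults, assuming a cluster that uses the centralized configuration database and taking the osd subsystem purely as an example:

ceph config get osd debug_osd
ceph config set osd debug_osd 5/5
ceph tell osd.0 config set debug_osd 5/5

The first command reads the stored value, the second persists a new log/memory pair for all OSD daemons, and the third changes it only on the running osd.0 daemon. The troubleshooting guide this appendix belongs to describes the supported logging procedures in detail.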
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/troubleshooting_guide/ceph-subsystems-default-logging-level-values_diag
Chapter 369. Weka Component
Chapter 369. Weka Component Since Camel 3.1 Only producer is supported The Weka component provides access to the (Weka Data Mining) toolset. Weka is tried and tested open source machine learning software that can be accessed through a graphical user interface, standard terminal applications, or a Java API. It is widely used for teaching, research, and industrial applications, contains a plethora of built-in tools for standard machine learning tasks, and additionally gives transparent access to well-known toolboxes such as scikit-learn, R, and Deeplearning4j. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-weka</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 369.1. URI format weka://cmd 369.2. Options The Weka component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean basicPropertyBinding (advanced) Whether the component should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities false boolean The Weka endpoint is configured using URI syntax: with the following path and query parameters: 369.2.1. Path Parameters (1 parameters): Name Description Default Type command Required The command to use. The value can be one of: filter, model, read, write, push, pop, version Command 369.2.2. Query Parameters (12 parameters): Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean basicPropertyBinding (advanced) Whether the endpoint should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities false boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean apply (filter) The filter spec (i.e. Name Options) String build (model) The classifier spec (i.e. 
Name Options) String dsname (model) The named dataset to train the classifier with String folds (model) Number of folds to use for cross-validation 10 int loadFrom (model) Path to load the model from String saveTo (model) Path to save the model to String seed (model) An optional seed for the randomizer 1 int xval (model) Flag on whether to use cross-validation with the current dataset false boolean path (write) An in/out path for the read/write commands String 369.3. Karaf support This component is not supported in Karaf 369.4. Message Headers 369.5. Samples 369.5.1. Read + Filter + Write This first example shows how to read a CSV file with the file component and then pass it on to Weka. In Weka we apply a few filters to the data set and then pass it on to the file component for writing. @Override public void configure() throws Exception { // Use the file component to read the CSV file from("file:src/test/resources/data?fileName=sfny.csv") // Convert the 'in_sf' attribute to nominal .to("weka:filter?apply=NumericToNominal -R first") // Move the 'in_sf' attribute to the end .to("weka:filter?apply=Reorder -R 2-last,1") // Rename the relation .to("weka:filter?apply=RenameRelation -modify sfny") // Use the file component to write the Arff file .to("file:target/data?fileName=sfny.arff") } Here we do the same as above without use of the file component. @Override public void configure() throws Exception { // Initiate the route from somewhere .from("...") // Use Weka to read the CSV file .to("weka:read?path=src/test/resources/data/sfny.csv") // Convert the 'in_sf' attribute to nominal .to("weka:filter?apply=NumericToNominal -R first") // Move the 'in_sf' attribute to the end .to("weka:filter?apply=Reorder -R 2-last,1") // Rename the relation .to("weka:filter?apply=RenameRelation -modify sfny") // Use Weka to write the Arff file .to("weka:write?path=target/data/sfny.arff"); } In this example, the client would provide the input path or some other supported type. Have a look at the WekaTypeConverters for the set of supported input types. @Override public void configure() throws Exception { // Initiate the route from somewhere .from("...") // Convert the 'in_sf' attribute to nominal .to("weka:filter?apply=NumericToNominal -R first") // Move the 'in_sf' attribute to the end .to("weka:filter?apply=Reorder -R 2-last,1") // Rename the relation .to("weka:filter?apply=RenameRelation -modify sfny") // Use Weka to write the Arff file .to("weka:write?path=target/data/sfny.arff"); } 369.5.2. Building a Model When building a model, we first choose the classification algorithm to use and then train it with some data. The result is the trained model that we can later use to classify unseen data. Here we train J48 with 10 fold cross-validation. try (CamelContext camelctx = new DefaultCamelContext()) { camelctx.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { // Use the file component to read the training data from("file:src/test/resources/data?fileName=sfny-train.arff") // Build a J48 classifier using cross-validation with 10 folds .to("weka:model?build=J48&xval=true&folds=10&seed=1") // Persist the J48 model .to("weka:model?saveTo=src/test/resources/data/sfny-j48.model") } }); camelctx.start(); } 369.5.3. Predicting a Class Here we use a Processor to access functionality that is not directly available from endpoint URIs. In case you come here directly and this syntax looks a bit overwhelming, you might want to have a brief look at the section about Nessus API Concepts . 
try (CamelContext camelctx = new DefaultCamelContext()) { camelctx.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { // Use the file component to read the test data from("file:src/test/resources/data?fileName=sfny-test.arff") // Remove the class attribute .to("weka:filter?apply=Remove -R last") // Add the 'prediction' placeholder attribute .to("weka:filter?apply=Add -N predicted -T NOM -L 0,1") // Rename the relation .to("weka:filter?apply=RenameRelation -modify sfny-predicted") // Load an already existing model .to("weka:model?loadFrom=src/test/resources/data/sfny-j48.model") // Use a processor to do the prediction .process(new Processor() { public void process(Exchange exchange) throws Exception { Dataset dataset = exchange.getMessage().getBody(Dataset.class); dataset.applyToInstances(new NominalPredictor()); } }) // Write the data file .to("weka:write?path=src/test/resources/data/sfny-predicted.arff") } }); camelctx.start(); } 369.6. Resources Practical Machine Learning Tools and Techniques Machine Learning Courses Weka Documentation Nessus-Weka
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-weka</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "weka://cmd", "weka:command", "@Override public void configure() throws Exception { // Use the file component to read the CSV file from(\"file:src/test/resources/data?fileName=sfny.csv\") // Convert the 'in_sf' attribute to nominal .to(\"weka:filter?apply=NumericToNominal -R first\") // Move the 'in_sf' attribute to the end .to(\"weka:filter?apply=Reorder -R 2-last,1\") // Rename the relation .to(\"weka:filter?apply=RenameRelation -modify sfny\") // Use the file component to write the Arff file .to(\"file:target/data?fileName=sfny.arff\") }", "@Override public void configure() throws Exception { // Initiate the route from somewhere .from(\"...\") // Use Weka to read the CSV file .to(\"weka:read?path=src/test/resources/data/sfny.csv\") // Convert the 'in_sf' attribute to nominal .to(\"weka:filter?apply=NumericToNominal -R first\") // Move the 'in_sf' attribute to the end .to(\"weka:filter?apply=Reorder -R 2-last,1\") // Rename the relation .to(\"weka:filter?apply=RenameRelation -modify sfny\") // Use Weka to write the Arff file .to(\"weka:write?path=target/data/sfny.arff\"); }", "@Override public void configure() throws Exception { // Initiate the route from somewhere .from(\"...\") // Convert the 'in_sf' attribute to nominal .to(\"weka:filter?apply=NumericToNominal -R first\") // Move the 'in_sf' attribute to the end .to(\"weka:filter?apply=Reorder -R 2-last,1\") // Rename the relation .to(\"weka:filter?apply=RenameRelation -modify sfny\") // Use Weka to write the Arff file .to(\"weka:write?path=target/data/sfny.arff\"); }", "try (CamelContext camelctx = new DefaultCamelContext()) { camelctx.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { // Use the file component to read the training data from(\"file:src/test/resources/data?fileName=sfny-train.arff\") // Build a J48 classifier using cross-validation with 10 folds .to(\"weka:model?build=J48&xval=true&folds=10&seed=1\") // Persist the J48 model .to(\"weka:model?saveTo=src/test/resources/data/sfny-j48.model\") } }); camelctx.start(); }", "try (CamelContext camelctx = new DefaultCamelContext()) { camelctx.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { // Use the file component to read the test data from(\"file:src/test/resources/data?fileName=sfny-test.arff\") // Remove the class attribute .to(\"weka:filter?apply=Remove -R last\") // Add the 'prediction' placeholder attribute .to(\"weka:filter?apply=Add -N predicted -T NOM -L 0,1\") // Rename the relation .to(\"weka:filter?apply=RenameRelation -modify sfny-predicted\") // Load an already existing model .to(\"weka:model?loadFrom=src/test/resources/data/sfny-j48.model\") // Use a processor to do the prediction .process(new Processor() { public void process(Exchange exchange) throws Exception { Dataset dataset = exchange.getMessage().getBody(Dataset.class); dataset.applyToInstances(new NominalPredictor()); } }) // Write the data file .to(\"weka:write?path=src/test/resources/data/sfny-predicted.arff\") } }); camelctx.start(); }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/weka-component
Chapter 1. Validating an installation
Chapter 1. Validating an installation You can check the status of an OpenShift Container Platform cluster after an installation by following the procedures in this document. 1.1. Reviewing the installation log You can review a summary of an installation in the OpenShift Container Platform installation log. If an installation succeeds, the information required to access the cluster is included in the log. Prerequisites You have access to the installation host. Procedure Review the .openshift_install.log log file in the installation directory on your installation host: USD cat <install_dir>/.openshift_install.log Example output Cluster credentials are included at the end of the log if the installation is successful, as outlined in the following example: ... time="2020-12-03T09:50:47Z" level=info msg="Install complete!" time="2020-12-03T09:50:47Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'" time="2020-12-03T09:50:47Z" level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com" time="2020-12-03T09:50:47Z" level=info msg="Login to the console with user: \"kubeadmin\", and password: \"password\"" time="2020-12-03T09:50:47Z" level=debug msg="Time elapsed per stage:" time="2020-12-03T09:50:47Z" level=debug msg=" Infrastructure: 6m45s" time="2020-12-03T09:50:47Z" level=debug msg="Bootstrap Complete: 11m30s" time="2020-12-03T09:50:47Z" level=debug msg=" Bootstrap Destroy: 1m5s" time="2020-12-03T09:50:47Z" level=debug msg=" Cluster Operators: 17m31s" time="2020-12-03T09:50:47Z" level=info msg="Time elapsed: 37m26s" 1.2. Viewing the image pull source For clusters with unrestricted network connectivity, you can view the source of your pulled images by using a command on a node, such as crictl images . However, for disconnected installations, to view the source of pulled images, you must review the CRI-O logs to locate the Trying to access log entry, as shown in the following procedure. Other methods to view the image pull source, such as the crictl images command, show the non-mirrored image name, even though the image is pulled from the mirrored location. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Review the CRI-O logs for a master or worker node: USD oc adm node-logs <node_name> -u crio Example output The Trying to access log entry indicates where the image is being pulled from. ... Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time="2021-08-05 10:33:21.594930907Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.194341109Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.226788351Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" ... The log might show the image pull source twice, as shown in the preceding example. 
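Because the CRI-O journal on a node can be long, it can help to filter the output for just these entries. For example, piping the same node-logs command through grep (an optional convenience, not an additional step required by the procedure):

oc adm node-logs <node_name> -u crio | grep "Trying to access"

Each matching line shows the mirror host from which the image content was actually requested.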
If your ImageContentSourcePolicy object lists multiple mirrors, OpenShift Container Platform attempts to pull the images in the order listed in the configuration, for example: 1.3. Getting cluster version, status, and update details You can view the cluster version and status by running the oc get clusterversion command. If the status shows that the installation is still progressing, you can review the status of the Operators for more information. You can also list the current update channel and review the available cluster updates. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Obtain the cluster version and overall status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4 The example output indicates that the cluster has been installed successfully. If the cluster status indicates that the installation is still progressing, you can obtain more detailed progress information by checking the status of the Operators: USD oc get clusteroperators.config.openshift.io View a detailed summary of cluster specifications, update availability, and update history: USD oc describe clusterversion List the current update channel: USD oc get clusterversion -o jsonpath='{.items[0].spec}{"\n"}' Example output {"channel":"stable-4.6","clusterID":"245539c1-72a3-41aa-9cec-72ed8cf25c5c"} Review the available cluster updates: USD oc adm upgrade Example output Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39 Additional resources See Querying Operator status after installation for more information about querying Operator status if your installation is still progressing. See Troubleshooting Operator issues for information about investigating issues with Operators. See Updating a cluster using the web console for more information on updating your cluster. See Understanding update channels and releases for an overview about update release channels. 1.4. Verifying that a cluster uses short-term credentials You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster. Prerequisites You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility ( ccoctl ) to implement short-term credentials. You installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Verify that the CCO is configured to operate in manual mode by running the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output confirms that the CCO is operating in manual mode: Example output Manual Verify that the cluster does not have root credentials by running the following command: USD oc get secrets \ -n kube-system <secret_name> where <secret_name> is the name of the root secret for your cloud provider. Platform Secret name Amazon Web Services (AWS) aws-creds Microsoft Azure azure-credentials Google Cloud Platform (GCP) gcp-credentials An error confirms that the root secret is not present on the cluster. 
Example output for an AWS cluster Error from server (NotFound): secrets "aws-creds" not found Verify that the components are using short-term security credentials for individual components by running the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster. Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command: USD oc get secrets \ -n openshift-image-registry installer-cloud-credentials \ -o jsonpath='{.data}' An output that contains the azure_client_id and azure_federated_token_file fields confirms that the components are assuming the Azure client ID. Azure clusters: Verify that the pod identity webhook is running by running the following command: USD oc get pods \ -n openshift-cloud-credential-operator Example output NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m 1.5. Querying the status of the cluster nodes by using the CLI You can verify the status of the cluster nodes after an installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the status of the cluster nodes. Verify that the output lists all of the expected control plane and compute nodes and that each node has a Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.31.3 control-plane-1.example.com Ready master 41m v1.31.3 control-plane-2.example.com Ready master 45m v1.31.3 compute-2.example.com Ready worker 38m v1.31.3 compute-3.example.com Ready worker 33m v1.31.3 control-plane-3.example.com Ready master 41m v1.31.3 Review CPU and memory resource availability for each cluster node: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27% Additional resources See Verifying node health for more details about reviewing node health and investigating node issues. 1.6. Reviewing the cluster status from the OpenShift Container Platform web console You can review the following information in the Overview page in the OpenShift Container Platform web console: The general status of your cluster The status of the control plane, cluster Operators, and storage CPU, memory, file system, network transfer, and pod availability The API address of the cluster, the cluster ID, and the name of the provider Cluster version information Cluster update status, including details of the current update channel and available updates A cluster inventory detailing node, pod, storage class, and persistent volume claim (PVC) information A list of ongoing cluster activities and recent events Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Home Overview . 1.7.
Reviewing the cluster status from Red Hat OpenShift Cluster Manager From the OpenShift Container Platform web console, you can review detailed information about the status of your cluster on OpenShift Cluster Manager. Prerequisites You are logged in to OpenShift Cluster Manager . You have access to the cluster as a user with the cluster-admin role. Procedure Go to the Cluster List list in OpenShift Cluster Manager and locate your OpenShift Container Platform cluster. Click the Overview tab for your cluster. Review the following information about your cluster: vCPU and memory availability and resource usage The cluster ID, status, type, region, and the provider name Node counts by node type Cluster version details, the creation date of the cluster, and the name of the cluster owner The life cycle support status of the cluster Subscription information, including the service level agreement (SLA) status, the subscription unit type, the production status of the cluster, the subscription obligation, and the service level Tip To view the history for your cluster, click the Cluster history tab. Navigate to the Monitoring page to review the following information: A list of any issues that have been detected A list of alerts that are firing The cluster Operator status and version The cluster's resource usage Optional: You can view information about your cluster that Red Hat Insights collects by navigating to the Overview menu. From this menu you can view the following information: Potential issues that your cluster might be exposed to, categorized by risk level Health-check status by category Additional resources See Using Insights to identify issues with your cluster for more information about reviewing potential issues with your cluster. 1.8. Checking cluster resource availability and utilization OpenShift Container Platform provides a comprehensive set of monitoring dashboards that help you understand the state of cluster components. In the Administrator perspective, you can access dashboards for core OpenShift Container Platform components, including: etcd Kubernetes compute resources Kubernetes network resources Prometheus Dashboards relating to cluster and node performance Figure 1.1. Example compute resources dashboard Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Container Platform web console, navigate to Observe Dashboards . Choose a dashboard in the Dashboard list. Some dashboards, such as the etcd dashboard, produce additional sub-menus when selected. Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. Additional resources See About OpenShift Container Platform monitoring for more information about the OpenShift Container Platform monitoring stack. 1.9. Listing alerts that are firing Alerts provide notifications when a set of defined conditions are true in an OpenShift Container Platform cluster. You can review the alerts that are firing in your cluster by using the Alerting UI in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. 
Procedure In the Administrator perspective, navigate to the Observe Alerting Alerts page. Review the alerts that are firing, including their Severity , State , and Source . Select an alert to view more detailed information in the Alert Details page. Additional resources See Managing alerts as an Administrator for further details about alerting in OpenShift Container Platform. 1.10. Next steps See Troubleshooting installations if you experience issues when installing your cluster. After installing OpenShift Container Platform, you can further expand and customize your cluster.
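Although section 1.9 uses the web console Alerting UI, firing alerts can also be checked from a terminal. The following is only a sketch: it assumes the default openshift-monitoring stack and that the amtool binary is shipped in the Alertmanager container image, which can vary between releases:

oc -n openshift-monitoring exec alertmanager-main-0 -- amtool --alertmanager.url=http://localhost:9093 alert

If the binary is available, the command prints the alerts currently known to Alertmanager, including their names, start times, and summary annotations.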
[ "cat <install_dir>/.openshift_install.log", "time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"", "oc adm node-logs <node_name> -u crio", "Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"", "Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4", "oc get clusteroperators.config.openshift.io", "oc describe clusterversion", "oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'", "{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}", "oc adm upgrade", "Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "Manual", "oc get secrets -n kube-system <secret_name>", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'", "oc get pods -n openshift-cloud-credential-operator", "NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m", "oc get nodes", "NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.31.3 control-plane-1.example.com Ready master 41m v1.31.3 
control-plane-2.example.com Ready master 45m v1.31.3 compute-2.example.com Ready worker 38m v1.31.3 compute-3.example.com Ready worker 33m v1.31.3 control-plane-3.example.com Ready master 41m v1.31.3", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/validation_and_troubleshooting/validating-an-installation
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.21/making-open-source-more-inclusive
Registry
Registry OpenShift Container Platform 4.17 Configuring registries for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>", "podman pull registry.redhat.io/<repository_name>", "topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule", "topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule", "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local", "oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: gcs: bucket: <bucket-name> projectID: <project-id> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"disableRedirect\":true}}'", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_SWIFT_USERNAME=<username> --from-literal=REGISTRY_STORAGE_SWIFT_PASSWORD=<password> -n openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: swift: container: <container-id>", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: azure: accountName: <storage-account-name> container: <container-name>", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer 
allowVolumeExpansion: true parameters: availability: <availability_zone_name>", "oc apply -f <storage_class_file_name>", "storageclass.storage.k8s.io/custom-csi-storageclass created", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3", "oc apply -f <pvc_file_name>", "persistentvolumeclaim/csi-pvc-imageregistry created", "oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'", "config.imageregistry.operator.openshift.io/cluster patched", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "status: managementState: Managed pvc: claim: csi-pvc-imageregistry", "oc get pvc -n openshift-image-registry csi-pvc-imageregistry", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.17 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | 
jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p 
'{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p 
'{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | 
jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p 
'{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge", "oc policy add-role-to-user registry-viewer <user_name>", "oc policy add-role-to-user registry-editor <user_name>", "oc get nodes", "oc debug nodes/<node_name>", "sh-4.2# chroot /host", "sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443", "sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000", "Login Succeeded!", "sh-4.2# podman pull <name.io>/<image>", "sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image>", "sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image>", "oc get pods -n openshift-image-registry", "NAME READY STATUS RESTARTS AGE image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m", "oc logs deployments/image-registry -n openshift-image-registry", "2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" 
level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002", "cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF", "oc adm policy add-cluster-role-to-user prometheus-scraper <username>", "openshift: oc whoami -t", "curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20", "HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. 
TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "sudo mv tls.crt /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust enable", "sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1", "oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls" ]
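For example, after applying one of the storage patches above, you can read the configuration back and watch the registry operator converge. This is a minimal verification sketch, not part of the original procedure; it only uses resource names that already appear in the commands above, and the output depends on which storage backend you configured.
# Confirm the management state and storage stanza were applied
oc get config.image/cluster -o jsonpath='{.spec.managementState}{"\n"}'
oc get config.image/cluster -o jsonpath='{.spec.storage}{"\n"}'
# Watch the image registry operator and its pods roll out the change
oc get clusteroperator image-registry
oc get pods -n openshift-image-registry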
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/registry/architecture-component-imageregistry
Chapter 2. NFV performance considerations
Chapter 2. NFV performance considerations For a network functions virtualization (NFV) solution to be useful, its virtualized functions must meet or exceed the performance of physical implementations. Red Hat's virtualization technologies are based on the high-performance Kernel-based Virtual Machine (KVM) hypervisor, common in OpenStack and cloud deployments. Red Hat OpenStack Platform director configures the Compute nodes to enforce resource partitioning and fine tuning to achieve line rate performance for the guest virtual network functions (VNFs). The key performance factors in the NFV use case are throughput, latency, and jitter. You can enable high-performance packet switching between physical NICs and virtual machines using data plane development kit (DPDK) accelerated virtual machines. OVS 2.10 embeds support for DPDK 17 and includes support for vhost-user multiqueue, allowing scalable performance. OVS-DPDK provides line-rate performance for guest VNFs. Single root I/O virtualization (SR-IOV) networking provides enhanced performance, including improved throughput for specific networks and virtual machines. Other important features for performance tuning include huge pages, NUMA alignment, host isolation, and CPU pinning. VNF flavors require huge pages and emulator thread isolation for better performance. Host isolation and CPU pinning improve NFV performance and prevent spurious packet loss. 2.1. CPUs and NUMA nodes Previously, all memory on x86 systems was equally accessible to all CPUs in the system. This resulted in memory access times that were the same regardless of which CPU in the system was performing the operation and was referred to as Uniform Memory Access (UMA). In Non-Uniform Memory Access (NUMA), system memory is divided into zones called nodes, which are allocated to particular CPUs or sockets. Access to memory that is local to a CPU is faster than access to memory connected to remote CPUs on that system. Normally, each socket on a NUMA system has a local memory node whose contents can be accessed faster than the memory in the node local to another CPU or the memory on a bus shared by all CPUs. Similarly, physical NICs are placed in PCI slots on the Compute node hardware. These slots connect to specific CPU sockets that are associated with a particular NUMA node. For optimum performance, connect your datapath NICs to the same NUMA node as the CPUs in your configuration (SR-IOV or OVS-DPDK). The performance impact of NUMA misses is significant, generally starting at a 10% performance hit or higher. Each CPU socket can have multiple CPU cores, which are treated as individual CPUs for virtualization purposes. Tip For more information about NUMA, see What is NUMA and how does it work on Linux? 2.1.1. NUMA node example The following diagram provides an example of a two-node NUMA system and the way the CPU cores and memory pages are made available: Figure 2.1. Example: two-node NUMA system Note Remote memory available via the interconnect is accessed only if VM1 from NUMA node 0 has a CPU core in NUMA node 1. In this case, the memory of NUMA node 1 acts as local memory for the third CPU core of VM1 (for example, if VM1 is allocated CPU 4 in the diagram above), but at the same time, it acts as remote memory for the other CPU cores of the same VM. 2.1.2. NUMA aware instances You can configure an OpenStack environment to use NUMA topology awareness on systems with a NUMA architecture.
When running a guest operating system in a virtual machine (VM), there are two NUMA topologies involved: the NUMA topology of the physical hardware of the host, and the NUMA topology of the virtual hardware exposed to the guest operating system. You can optimize the performance of guest operating systems by aligning the virtual hardware with the physical hardware NUMA topology. 2.2. CPU pinning CPU pinning is the ability to run a specific virtual machine's virtual CPU on a specific physical CPU, in a given host. vCPU pinning provides similar advantages to task pinning on bare-metal systems. Because virtual machines run as user space tasks on the host operating system, pinning increases cache efficiency. For details on how to configure CPU pinning, see Configuring CPU pinning on Compute nodes (https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-cpus-on-compute-nodes#assembly_configuring-cpu-pinning-on-compute-nodes_cpu-pinning) in the Configuring the Compute service for instance creation guide. 2.3. Huge pages Physical memory is segmented into contiguous regions called pages. For efficiency, the system retrieves memory by accessing entire pages instead of individual bytes of memory. The system maps the virtual addresses that processes use to physical addresses. To perform this translation, the system looks in the Translation Lookaside Buffers (TLB), which contain the physical to virtual address mappings for the most recently or frequently used pages. When the system cannot find a mapping in the TLB, the processor must iterate through all of the page tables to determine the address mappings. Optimize the TLB to minimize the performance penalty that occurs during these TLB misses. The typical page size in an x86 system is 4KB, with other larger page sizes available. Larger page sizes mean that there are fewer pages overall, which increases the amount of system memory that can have its virtual to physical address translation stored in the TLB. Consequently, this reduces TLB misses, which increases performance. With larger page sizes, there is an increased potential for memory to be under-utilized, because processes must allocate memory in whole pages but might not need all of that memory. As a result, choosing a page size is a compromise between providing faster access times with larger pages and ensuring maximum memory utilization with smaller pages. 2.4. Port security Port security is an anti-spoofing measure that blocks any egress traffic that does not match the source IP and source MAC address of the originating network port. You cannot view or modify this behavior using security group rules. By default, the port_security_enabled parameter is set to enabled on newly created Neutron networks in OpenStack. Newly created ports copy the value of the port_security_enabled parameter from the network they are created on. For some NFV use cases, such as building a firewall or router, you must disable port security. To disable port security on a single port, run the following command: To prevent port security from being enabled on any newly created port on a network, run the following command:
[ "openstack port set --disable-port-security <port-id>", "openstack network set --disable-port-security <network-id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_network_functions_virtualization/nfv-perf-consider_rhosp-nfv
Appendix B. Revision History
Appendix B. Revision History Revision History Revision 0.10-06 Tue 03 Mar 2020 Marc Muehlfeld Added the Configuring Policy-based Routing to Define Alternative Routes section. Revision 0.10-05 Fri 22 Nov 2019 Marc Muehlfeld Rewrote the Configuring the Squid Caching Proxy Server chapter. Revision 0.10-04 Tue 06 Aug 2019 Marc Muehlfeld Version for 7.7 GA publication. Revision 0.10-03 Thu 22 Mar 2018 Ioanna Gkioka Version for 7.5 GA publication. Revision 0.10-02 Mon 14 Aug 2017 Ioanna Gkioka Async release with misc. updates Revision 0.10-01 Tue 25 Jul 2017 Mirek Jahoda Version for 7.4 GA publication. Revision 0.9-30 Tue 18 Oct 2016 Mirek Jahoda Version for 7.3 GA publication. Revision 0.9-25 Wed 11 Nov 2015 Jana Heves Version for 7.2 GA release. Revision 0.9-15 Tue 17 Feb 2015 Christian Huffman Version for 7.1 GA release Revision 0.9-14 Fri Dec 05 2014 Christian Huffman Updated the nmtui and NetworkManager GUI sections. Revision 0.9-12 Wed Nov 05 2014 Stephen Wadeley Improved IP Networking , 802.1Q VLAN tagging , and Teaming . Revision 0.9-11 Tues Oct 21 2014 Stephen Wadeley Improved Bonding , Bridging , and Teaming . Revision 0.9-9 Tue Sep 2 2014 Stephen Wadeley Improved Bonding and Consistent Network Device Naming . Revision 0.9-8 Tue July 8 2014 Stephen Wadeley Red Hat Enterprise Linux 7.0 GA release of the Networking Guide. Revision 0-0 Wed Dec 12 2012 Stephen Wadeley Initialization of the Red Hat Enterprise Linux 7 Networking Guide. B.1. Acknowledgments Certain portions of this text first appeared in the Red Hat Enterprise Linux 6 Deployment Guide ,
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/app-revision_history
Chapter 3. Customizing the Fuse Console branding
Chapter 3. Customizing the Fuse Console branding You can customize the Fuse Console branding, such as the title, logo, and login page information, by adding a hawtconfig.json file to your Fuse on Spring Boot standalone application. Procedure Create a JSON file named hawtconfig.json in your local Fuse on Spring Boot standalone application's src/main/webapp directory. Open the src/main/webapp/hawtconfig.json file in an editor of your choice, and then add the following content: Change the values of the configuration properties listed in Table A.1, "Fuse Console Configuration Properties". Save your changes. Run Fuse on Spring Boot by using the following command: In a web browser, open the Fuse Console by using this URL: http://localhost:10001/actuator/hawtio/index.html Note If you have already run the Fuse Console in a web browser, the branding is stored in the browser's local storage. To use new branding settings, you must clear the browser's local storage.
[ "{ \"branding\": { \"appName\": \"Red Hat Fuse Console\", \"appLogoUrl\": \"img/Logo-Red_Hat-Fuse-A-Reverse-RGB.png\", \"companyLogoUrl\": \"img/Logo-RedHat-A-Reverse-RGB.png\" }, \"login\": { \"description\": \"\", \"links\": [] }, \"about\": { \"title\": \"Red Hat Fuse Console\", \"productInfo\": [], \"additionalInfo\": \"\", \"copyright\": \"\", \"imgSrc\": \"img/Logo-RedHat-A-Reverse-RGB.png\" }, \"disabledRoutes\": [ \"/camel/source\", \"/diagnostics\", \"/jvm/discover\", \"/jvm/local\" ] }", "mvn spring-boot:run" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_springboot_standalone/fuse-console-branding-springboot
3.4. Comparison of virt-install and virt-manager Installation options
3.4. Comparison of virt-install and virt-manager Installation options This table provides a quick reference to compare equivalent virt-install and virt-manager installation options when installing a virtual machine. Most virt-install options are not required. The minimum requirements are --name, --memory, guest storage (--disk, --filesystem, or --disk none), and an install method (--location, --cdrom, --pxe, --import, or --boot). These options are further specified with arguments; to see a complete list of command options and related arguments, enter the following command: In virt-manager, at minimum, a name, an installation method, memory (RAM), vCPUs, and storage are required.
Table 3.1. virt-install and virt-manager configuration comparison for guest installations
Configuration on virtual machine | virt-install option | virt-manager installation wizard label and step number
Virtual machine name | --name, -n | Name (step 5)
RAM to allocate (MiB) | --ram, -r | Memory (RAM) (step 3)
Storage - specify storage media | --disk | Enable storage for this virtual machine: Create a disk image on the computer's hard drive, or Select managed or other existing storage (step 4)
Storage - export a host directory to the guest | --filesystem | Enable storage for this virtual machine: Select managed or other existing storage (step 4)
Storage - configure no local disk storage on the guest | --nodisks | Deselect the Enable storage for this virtual machine check box (step 4)
Installation media location (local install) | --file | Local install media: Locate your install media (steps 1-2)
Installation using a distribution tree (network install) | --location | Network install: URL (steps 1-2)
Install guest with PXE | --pxe | Network boot (step 1)
Number of vCPUs | --vcpus | CPUs (step 3)
Host network | --network | Advanced options drop-down menu (step 5)
Operating system variant/version | --os-variant | Version (step 2)
Graphical display method | --graphics, --nographics | * virt-manager provides GUI installation only
[ "virt-install --help" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_machine_installation-virt-install-virt-manager-matrix
Chapter 1. Kubernetes overview
Chapter 1. Kubernetes overview Kubernetes is an open source container orchestration tool developed by Google. You can run and manage container-based workloads by using Kubernetes. The most common Kubernetes use case is to deploy an array of interconnected microservices, building an application in a cloud native way. You can create Kubernetes clusters that can span hosts across on-premise, public, private, or hybrid clouds. Traditionally, applications were deployed on top of a single operating system. With virtualization, you can split the physical host into several virtual hosts. Working on virtual instances on shared resources is not optimal for efficiency and scalability. Because a virtual machine (VM) consumes as many resources as a physical machine, providing resources to a VM such as CPU, RAM, and storage can be expensive. Also, you might see your application degrading in performance due to virtual instance usage on shared resources. Figure 1.1. Evolution of container technologies for classical deployments To solve this problem, you can use containerization technologies that segregate applications in a containerized environment. Similar to a VM, a container has its own filesystem, vCPU, memory, process space, dependencies, and more. Containers are decoupled from the underlying infrastructure, and are portable across clouds and OS distributions. Containers are inherently much lighter than a fully-featured OS, and are lightweight isolated processes that run on the operating system kernel. VMs are slower to boot, and are an abstraction of physical hardware. VMs run on a single machine with the help of a hypervisor. You can perform the following actions by using Kubernetes: Sharing resources Orchestrating containers across multiple hosts Installing new hardware configurations Running health checks and self-healing applications Scaling containerized applications 1.1. Kubernetes components Table 1.1. Kubernetes components Component Purpose kube-proxy Runs on every node in the cluster and maintains the network traffic between the Kubernetes resources. kube-controller-manager Governs the state of the cluster. kube-scheduler Allocates pods to nodes. etcd Stores cluster data. kube-apiserver Validates and configures data for the API objects. kubelet Runs on nodes and reads the container manifests. Ensures that the defined containers have started and are running. kubectl Allows you to define how you want to run workloads. Use the kubectl command to interact with the kube-apiserver . Node Node is a physical machine or a VM in a Kubernetes cluster. The control plane manages every node and schedules pods across the nodes in the Kubernetes cluster. container runtime container runtime runs containers on a host operating system. You must install a container runtime on each node so that pods can run on the node. Persistent storage Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data. container-registry Stores and accesses the container images. Pod The pod is the smallest logical unit in Kubernetes. A pod contains one or more containers to run in a worker node. 1.2. Kubernetes resources A custom resource is an extension of the Kubernetes API. You can customize Kubernetes clusters by using custom resources. Operators are software extensions which manage applications and their components with the help of custom resources. Kubernetes uses a declarative model when you want a fixed desired result while dealing with cluster resources. 
By using Operators, Kubernetes defines its states in a declarative way. You can modify the Kubernetes cluster resources by using imperative commands. An Operator acts as a control loop which continuously compares the desired state of resources with the actual state of resources and puts actions in place to bring reality in line with the desired state. Figure 1.2. Kubernetes cluster overview Table 1.2. Kubernetes Resources Resource Purpose Service Kubernetes uses services to expose a running application on a set of pods. ReplicaSets Kubernetes uses ReplicaSets to maintain a constant number of pods. Deployment A resource object that maintains the life cycle of an application. Kubernetes is a core component of OpenShift Container Platform. You can use OpenShift Container Platform for developing and running containerized applications. With its foundation in Kubernetes, OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. You can extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments by using OpenShift Container Platform. Figure 1.3. Architecture of Kubernetes A cluster is a single computational unit consisting of multiple nodes in a cloud environment. A Kubernetes cluster includes a control plane and worker nodes. You can run Kubernetes containers across various machines and environments. The control plane node controls and maintains the state of a cluster. You run Kubernetes applications on the worker nodes. You can use Kubernetes namespaces to differentiate cluster resources in a cluster. Namespace scoping is applicable for resource objects, such as deployments, services, and pods. You cannot use namespaces for cluster-wide resource objects such as storage classes, nodes, and persistent volumes. 1.3. Kubernetes conceptual guidelines Before getting started with OpenShift Container Platform, consider these Kubernetes conceptual guidelines: Start with one or more worker nodes to run the container workloads. Manage the deployment of those workloads from one or more control plane nodes. Wrap containers in a deployment unit called a pod. Using pods provides extra metadata with the container and offers the ability to group several containers in a single deployment entity. Create special kinds of assets. For example, services are represented by a set of pods and a policy that defines how they are accessed. This policy allows containers to connect to the services that they need even if they do not have the specific IP addresses for the services. Replication controllers are another special asset that indicates how many pod replicas are required to run at a time. You can use this capability to automatically scale your application to adapt to its current demand. The API of an OpenShift Container Platform cluster is 100% Kubernetes. Nothing changes between a container running on any other Kubernetes and running on OpenShift Container Platform. There are no changes to the application. OpenShift Container Platform brings added-value features to provide enterprise-ready enhancements to Kubernetes. The OpenShift Container Platform CLI tool ( oc ) is compatible with kubectl. While the Kubernetes API is 100% accessible within OpenShift Container Platform, the kubectl command-line tool lacks many features that could make it more user-friendly. OpenShift Container Platform offers a set of additional features and a more user-friendly command-line tool, oc .
Although Kubernetes excels at managing your applications, it does not specify or manage platform-level requirements or deployment processes. Powerful and flexible platform management tools and processes are important benefits that OpenShift Container Platform offers. You must add authentication, networking, security, monitoring, and log management to your containerization platform yourself.
null
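The components and resources described above map onto a few basic kubectl commands. The following is a minimal sketch, assuming you already have access to a cluster and a kubeconfig; the my-namespace name and the nginx image are placeholder choices for illustration.
# Inspect the nodes that make up the cluster
kubectl get nodes
# Namespaces scope resource objects such as deployments, services, and pods
kubectl create namespace my-namespace
# A Deployment declaratively maintains the desired number of pod replicas
kubectl create deployment web --image=nginx --replicas=2 -n my-namespace
# A Service exposes the running application on the set of pods behind it
kubectl expose deployment web --port=80 -n my-namespace
kubectl get pods,services -n my-namespace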
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/getting_started/kubernetes-overview
Chapter 4. Quick Start Examples
Chapter 4. Quick Start Examples The examples in this section show you how to use the REST API to set up a basic Red Hat Virtualization environment and to create a virtual machine. In addition to the standard prerequisites, these examples require the following: A networked and configured Red Hat Virtualization installation. An ISO file containing the virtual machine operating system you want to install. This chapter uses CentOS 7 for the installation ISO example. The API examples use curl to demonstrate API requests with a client application. You can use any application that sends HTTP requests. Important The HTTP request headers in this example omit the Host and Authorization headers. However, these fields are mandatory and require data specific to your installation of Red Hat Virtualization. The curl examples use admin@internal for the user name, mypassword for the password, /etc/pki/ovirt-engine/ca.pem for the certificate location, and myengine.example.com for the host name. You must replace them with the correct values for your environment. Red Hat Virtualization generates a unique identifier for the id attribute for each resource. Identifier codes in this example will differ from the identifier codes in your Red Hat Virtualization environment. In many examples, some attributes of the results returned by the API have been omitted, for brevity. See, for example, the Cluster reference for a complete list of attributes. 4.1. Access API entry point The following request retrieves a representation of the main entry point for version 4 of the API: The same request, but using the /v4 URL prefix instead of the Version header: The same request, using the curl command: The result is an object of type Api : <api> <link href="/ovirt-engine/api/clusters" rel="clusters"/> <link href="/ovirt-engine/api/datacenters" rel="datacenters"/> ... <product_info> <name>oVirt Engine</name> <vendor>ovirt.org</vendor> <version> <build>0</build> <full_version>4.0.0-0.0.el7</full_version> <major>4</major> <minor>0</minor> <revision>0</revision> </version> </product_info> <special_objects> <blank_template href="..." id="..."/> <root_tag href="..." id="..."/> </special_objects> <summary> <hosts> <active>23</active> <total>30</total> </hosts> <storage_domains> <active>5</active> <total>6</total> </storage_domains> <users> <active>12</active> <total>102</total> </users> <vms> <active>253</active> <total>545</total> </vms> </summary> <time>2016-10-06T15:38:18.548+02:00</time> </api> Important When neither the header nor the URL prefix are used, the server will automatically select a version. The default is version 4 . You can change the default version using the ENGINE_API_DEFAULT_VERSION configuration parameter: Changing this parameter affects all users of the API that don't specify the version explicitly. The entry point provides a user with links to the collections in a virtualization environment. The rel attribute of each collection link provides a reference point for each link. The step in this example examines the data center collection, which is available through the datacenters link. The entry point also contains other data such as product_info , special_objects and summary . This data is covered in chapters outside this example. 4.2. List data centers Red Hat Virtualization creates a Default data center on installation. This example uses the Default data center as the basis for the virtual environment. 
The following request retrieves a representation of the data centers: The same request, using the curl command: The result will be a list of objects of type DataCenter : <data_centers> <data_center href="/ovirt-engine/api/datacenters/001" id="001"> <name>Default</name> <description>The default Data Center</description> <link href="/ovirt-engine/api/datacenters/001/clusters" rel="clusters"/> <link href="/ovirt-engine/api/datacenters/001/storagedomains" rel="storagedomains"/> ... <local>false</local> <quota_mode>disabled</quota_mode> <status>up</status> <supported_versions> <version> <major>4</major> <minor>0</minor> </version> </supported_versions> <version> <major>4</major> <minor>0</minor> </version> </data_center> ... </data_centers> Note the id of your Default data center. It identifies this data center in relation to other resources of your virtual environment. The data center also contains a link to the service that manages the storage domains attached to the data center: That service is used to attach storage domains from the main storagedomains collection, which this example covers later. 4.3. List host clusters Red Hat Virtualization creates a Default hosts cluster on installation. This example uses the Default cluster to group resources in your Red Hat Virtualization environment. The following request retrieves a representation of the cluster collection: The same request, using the curl command: The result will be a list of objects of type Cluster : <clusters> <cluster href="/ovirt-engine/api/clusters/002" id="002"> <name>Default</name> <description>The default server cluster</description> <link href="/ovirt-engine/api/clusters/002/networks" rel="networks"/> <link href="/ovirt-engine/api/clusters/002" rel="permissions"/> ... <cpu> <architecture>x86_64</architecture> <type>Intel Conroe Family</type> </cpu> <version> <major>4</major> <minor>0</minor> </version> <data_center href="/ovirt-engine/api/datacenters/001" id="001"/> </cluster> ... </clusters> Note the id of your Default host cluster. It identifies this host cluster in relation to other resources of your virtual environment. The Default cluster is associated with the Default data center through a relationship using the id and href attributes of the data_center link: The networks link is a reference to the service that manages the networks associated to this cluster. The section examines the networks collection in more detail. 4.4. List logical networks Red Hat Virtualization creates a default ovirtmgmt network on installation. This network acts as the management network for Red Hat Virtualization Manager to access hosts. This network is associated with the Default cluster and is a member of the Default data center. This example uses the ovirtmgmt network to connect the virtual machines. The following request retrieves the list of logical networks: The same request, using the curl command: The result will be a list of objects of type Network : <networks> <network href="/ovirt-engine/api/networks/003" id="003"> <name>ovirtmgmt</name> <description>Management Network</description> <link href="/ovirt-engine/api/networks/003/permissions" rel="permissions"/> <link href="/ovirt-engine/api/networks/003/vnicprofiles" rel="vnicprofiles"/> <link href="/ovirt-engine/api/networks/003/networklabels" rel="networklabels"/> <mtu>0</mtu> <stp>false</stp> <usages> <usage>vm</usage> </usages> <data_center href="/ovirt-engine/api/datacenters/001" id="001"/> </network> ... 
</networks> The ovirtmgmt network is attached to the Default data center through a relationship using the data center's id . The ovirtmgmt network is also attached to the Default cluster through a relationship in the cluster's network sub-collection. 4.5. List hosts This example retrieves the list of hosts and shows a host named myhost registered with the virtualization environment: The same request, using the curl command: The result will be a list of objects of type Host : <hosts> <host href="/ovirt-engine/api/hosts/004" id="004"> <name>myhost</name> <link href="/ovirt-engine/api/hosts/004/nics" rel="nics"/> ... <address>node40.example.com</address> <cpu> <name>Intel Core Processor (Haswell, no TSX)</name> <speed>3600</speed> <topology> <cores>1</cores> <sockets>2</sockets> <threads>1</threads> </topology> </cpu> <memory>8371830784</memory> <os> <type>RHEL</type> <version> <full_version>7 - 2.1511.el7.centos.2.10</full_version> <major>7</major> </version> </os> <port>54321</port> <status>up</status> <cluster href="/ovirt-engine/api/clusters/002" id="002"/> </host> ... </hosts> Note the id of your host. It identifies this host in relation to other resources of your virtual environment. This host is a member of the Default cluster and accessing the nics sub-collection shows this host has a connection to the ovirtmgmt network. 4.6. Create NFS data storage An NFS data storage domain is an exported NFS share attached to a data center and provides storage for virtualized guest images. Creation of a new storage domain requires a POST request, with the storage domain representation included, sent to the URL of the storage domain collection. You can enable the wipe after delete option by default on the storage domain. To configure this specify wipe_after_delete in the POST request. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. The request should be like this: And the request body should be like this: <storage_domain> <name>mydata</name> <type>data</type> <description>My data</description> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/exports/mydata</path> </storage> <host> <name>myhost</name> </host> </storage_domain> The same request, using the curl command: The server uses host myhost to create a NFS data storage domain called mydata with an export path of mynfs.example.com:/exports/mydata . The API also returns the following representation of the newly created storage domain resource (of type StorageDomain ): <storage_domain href="/ovirt-engine/api/storagedomains/005" id="005"> <name>mydata</name> <description>My data</description> <available>42949672960</available> <committed>0</committed> <master>false</master> <status>unattached</status> <storage> <address>mynfs.example.com</address> <path>/exports/mydata</path> <type>nfs</type> </storage> <storage_format>v3</storage_format> <type>data</type> <used>9663676416</used> </storage_domain> 4.7. Create NFS ISO storage An NFS ISO storage domain is a mounted NFS share attached to a data center and provides storage for DVD/CD-ROM ISO and virtual floppy disk (VFD) image files. 
Creation of a new storage domain requires a POST request, with the storage domain representation included, sent to the URL of the storage domain collection: The request should be like this: And the request body should be like this: <storage_domain> <name>myisos</name> <description>My ISOs</description> <type>iso</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/exports/myisos</path> </storage> <host> <name>myhost</name> </host> </storage_domain> The same request, using the curl command: The server uses host myhost to create a NFS ISO storage domain called myisos with an export path of mynfs.example.com:/exports/myisos . The API also returns the following representation of the newly created storage domain resource (of type StorageDomain ): <storage_domain href="/ovirt-engine/api/storagedomains/006" id="006"> <name>myiso</name> <description>My ISOs</description> <available>42949672960</available> <committed>0</committed> <master>false</master> <status>unattached</status> <storage> <address>mynfs.example.com</address> <path>/exports/myisos</path> <type>nfs</type> </storage> <storage_format>v1</storage_format> <type>iso</type> <used>9663676416</used> </storage_domain> 4.8. Attach storage domains to data center The following example attaches the mydata and myisos storage domains to the Default data center. To attach the mydata storage domain, send a request like this: With a request body like this: <storage_domain> <name>mydata</name> </storage_domain> The same request, using the curl command: To attach the myisos storage domain, send a request like this: With a request body like this: <storage_domain> <name>myisos</name> </storage_domain> The same request, using the curl command: 4.9. Create virtual machine The following example creates a virtual machine called myvm on the Default cluster using the virtualization environment's Blank template as a basis. The request also defines the virtual machine's memory as 512 MiB and sets the boot device to a virtual hard disk. The request should be contain an object of type Vm describing the virtual machine to create: POST /ovirt-engine/api/vms HTTP/1.1 Accept: application/xml Content-type: application/xml And the request body should be like this: <vm> <name>myvm</name> <description>My VM</description> <cluster> <name>Default</name> </cluster> <template> <name>Blank</name> </template> <memory>536870912</memory> <os> <boot> <devices> <device>hd</device> </devices> </boot> </os> </vm> The same request, using the curl command: The response body will be an object of the Vm type: <vm href="/ovirt-engine/api/vms/007" id="007"> <name>myvm</name> <link href="/ovirt-engine/api/vms/007/diskattachments" rel="diskattachments"/> <link href="/ovirt-engine/api/vms/007/nics" rel="nics"/> ... <cpu> <architecture>x86_64</architecture> <topology> <cores>1</cores> <sockets>1</sockets> <threads>1</threads> </topology> </cpu> <memory>1073741824</memory> <os> <boot> <devices> <device>hd</device> </devices> </boot> <type>other</type> </os> <type>desktop</type> <cluster href="/ovirt-engine/api/clusters/002" id="002"/> <status>down</status> <original_template href="/ovirt-engine/api/templates/000" id="00"/> <template href="/ovirt-engine/api/templates/000" id="000"/> </vm> 4.10. Create a virtual machine NIC The following example creates a virtual network interface to connect the example virtual machine to the ovirtmgmt network. 
The request should be like this: The request body should contain an object of type Nic describing the NIC to be created: <nic> <name>mynic</name> <description>My network interface card</description> </nic> The same request, using the curl command: 4.11. Create virtual machine disk The following example creates an 8 GiB copy-on-write disk for the example virtual machine. The request should be like this: The request body should be an object of type DiskAttachment describing the disk and how it will be attached to the virtual machine: <disk_attachment> <bootable>false</bootable> <interface>virtio</interface> <active>true</active> <disk> <description>My disk</description> <format>cow</format> <name>mydisk</name> <provisioned_size>8589934592</provisioned_size> <storage_domains> <storage_domain> <name>mydata</name> </storage_domain> </storage_domains> </disk> </disk_attachment> The same request, using the curl command: The storage_domains attribute tells the API to store the disk on the mydata storage domain. 4.12. Attach ISO image to virtual machine The boot media for the following virtual machine example requires a CD-ROM or DVD ISO image for an operating system installation. This example uses a CentOS 7 image. ISO images must be available in the myisos ISO domain for the virtual machines to use. You can use Section 6.114, "ImageTransfers" to create an image transfer and Section 6.113, "ImageTransfer" to upload the ISO image. Once the ISO image is uploaded, an API can be used to request the list of files from the ISO storage domain: The same request, using the curl command: The server returns the following list of objects of type File , one for each available ISO (or floppy) image: <files> <file href="..." id="CentOS-7-x86_64-Minimal.iso"> <name>CentOS-7-x86_64-Minimal.iso</name> </file> ... </files> An API user attaches the CentOS-7-x86_64-Minimal.iso to the example virtual machine. Attaching an ISO image is equivalent to using the Change CD button in the administration or user portal applications. The request should be like this: The request body should be an object of type Cdrom containing an inner file attribute to indicate the identifier of the ISO (or floppy) image: <cdrom> <file id="CentOS-7-x86_64-Minimal.iso"/> </cdrom> The same request, using the curl command: For more details see the documentation of the service that manages virtual machine CD-ROMS. 4.13. Start the virtual machine The virtual environment is complete and the virtual machine contains all necessary components to function. This example starts the virtual machine using the start method. The request should be like this: The request body should be like this: <action> <vm> <os> <boot> <devices> <device>cdrom</device> </devices> </boot> </os> </vm> </action> The same request, using the curl command: The additional request body sets the virtual machine's boot device to CD-ROM for this boot only. This enables the virtual machine to install the operating system from the attached ISO image. The boot device reverts back to disk for all future boots.
[ "GET /ovirt-engine/api HTTP/1.1 Version: 4 Accept: application/xml", "GET /ovirt-engine/api/v4 HTTP/1.1 Accept: application/xml", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --request GET --header 'Version: 4' --header 'Accept: application/xml' --user 'admin@internal:mypassword' https://myengine.example.com/ovirt-engine/api", "<api> <link href=\"/ovirt-engine/api/clusters\" rel=\"clusters\"/> <link href=\"/ovirt-engine/api/datacenters\" rel=\"datacenters\"/> <product_info> <name>oVirt Engine</name> <vendor>ovirt.org</vendor> <version> <build>0</build> <full_version>4.0.0-0.0.el7</full_version> <major>4</major> <minor>0</minor> <revision>0</revision> </version> </product_info> <special_objects> <blank_template href=\"...\" id=\"...\"/> <root_tag href=\"...\" id=\"...\"/> </special_objects> <summary> <hosts> <active>23</active> <total>30</total> </hosts> <storage_domains> <active>5</active> <total>6</total> </storage_domains> <users> <active>12</active> <total>102</total> </users> <vms> <active>253</active> <total>545</total> </vms> </summary> <time>2016-10-06T15:38:18.548+02:00</time> </api>", "echo \"ENGINE_API_DEFAULT_VERSION=3\" > /etc/ovirt-engine/engine.conf.d/99-set-default-version.conf systemctl restart ovirt-engine", "GET /ovirt-engine/api/datacenters HTTP/1.1 Accept: application/xml", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --request GET --header 'Version: 4' --header 'Accept: application/xml' --user 'admin@internal:mypassword' https://myengine.example.com/ovirt-engine/api/datacenters", "<data_centers> <data_center href=\"/ovirt-engine/api/datacenters/001\" id=\"001\"> <name>Default</name> <description>The default Data Center</description> <link href=\"/ovirt-engine/api/datacenters/001/clusters\" rel=\"clusters\"/> <link href=\"/ovirt-engine/api/datacenters/001/storagedomains\" rel=\"storagedomains\"/> <local>false</local> <quota_mode>disabled</quota_mode> <status>up</status> <supported_versions> <version> <major>4</major> <minor>0</minor> </version> </supported_versions> <version> <major>4</major> <minor>0</minor> </version> </data_center> </data_centers>", "<link href=\"/ovirt-engine/api/datacenters/001/storagedomains\" rel=\"storagedomains\"/>", "GET /ovirt-engine/api/clusters HTTP/1.1 Accept: application/xml", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --request GET --header 'Version: 4' --header 'Accept: application/xml' --user 'admin@internal:mypassword' https://myengine.example.com/ovirt-engine/api/clusters", "<clusters> <cluster href=\"/ovirt-engine/api/clusters/002\" id=\"002\"> <name>Default</name> <description>The default server cluster</description> <link href=\"/ovirt-engine/api/clusters/002/networks\" rel=\"networks\"/> <link href=\"/ovirt-engine/api/clusters/002\" rel=\"permissions\"/> <cpu> <architecture>x86_64</architecture> <type>Intel Conroe Family</type> </cpu> <version> <major>4</major> <minor>0</minor> </version> <data_center href=\"/ovirt-engine/api/datacenters/001\" id=\"001\"/> </cluster> </clusters>", "<data_center href=\"/ovirt-engine/api/datacenters/001\" id=\"001\"/>", "GET /ovirt-engine/api/networks HTTP/1.1 Accept: application/xml", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --request GET --header 'Version: 4' --header 'Accept: application/xml' --user 'admin@internal:mypassword' https://myengine.example.com/ovirt-engine/api/networks", "<networks> <network href=\"/ovirt-engine/api/networks/003\" id=\"003\"> <name>ovirtmgmt</name> <description>Management Network</description> <link href=\"/ovirt-engine/api/networks/003/permissions\" 
rel=\"permissions\"/> <link href=\"/ovirt-engine/api/networks/003/vnicprofiles\" rel=\"vnicprofiles\"/> <link href=\"/ovirt-engine/api/networks/003/networklabels\" rel=\"networklabels\"/> <mtu>0</mtu> <stp>false</stp> <usages> <usage>vm</usage> </usages> <data_center href=\"/ovirt-engine/api/datacenters/001\" id=\"001\"/> </network> </networks>", "GET /ovirt-engine/api/hosts HTTP/1.1 Accept: application/xml", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --request GET --header 'Version: 4' --header 'Accept: application/xml' --user 'admin@internal:mypassword' https://myengine.example.com/ovirt-engine/api/hosts", "<hosts> <host href=\"/ovirt-engine/api/hosts/004\" id=\"004\"> <name>myhost</name> <link href=\"/ovirt-engine/api/hosts/004/nics\" rel=\"nics\"/> <address>node40.example.com</address> <cpu> <name>Intel Core Processor (Haswell, no TSX)</name> <speed>3600</speed> <topology> <cores>1</cores> <sockets>2</sockets> <threads>1</threads> </topology> </cpu> <memory>8371830784</memory> <os> <type>RHEL</type> <version> <full_version>7 - 2.1511.el7.centos.2.10</full_version> <major>7</major> </version> </os> <port>54321</port> <status>up</status> <cluster href=\"/ovirt-engine/api/clusters/002\" id=\"002\"/> </host> </hosts>", "POST /ovirt-engine/api/storagedomains HTTP/1.1 Accept: application/xml Content-type: application/xml", "<storage_domain> <name>mydata</name> <type>data</type> <description>My data</description> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/exports/mydata</path> </storage> <host> <name>myhost</name> </host> </storage_domain>", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --user 'admin@internal:mypassword' --request POST --header 'Version: 4' --header 'Content-Type: application/xml' --header 'Accept: application/xml' --data ' <storage_domain> <name>mydata</name> <description>My data</description> <type>data</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/exports/mydata</path> </storage> <host> <name>myhost</name> </host> </storage_domain> ' https://myengine.example.com/ovirt-engine/api/storagedomains", "<storage_domain href=\"/ovirt-engine/api/storagedomains/005\" id=\"005\"> <name>mydata</name> <description>My data</description> <available>42949672960</available> <committed>0</committed> <master>false</master> <status>unattached</status> <storage> <address>mynfs.example.com</address> <path>/exports/mydata</path> <type>nfs</type> </storage> <storage_format>v3</storage_format> <type>data</type> <used>9663676416</used> </storage_domain>", "POST /ovirt-engine/api/storagedomains HTTP/1.1 Accept: application/xml Content-type: application/xml", "<storage_domain> <name>myisos</name> <description>My ISOs</description> <type>iso</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/exports/myisos</path> </storage> <host> <name>myhost</name> </host> </storage_domain>", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --user 'admin@internal:mypassword' --request POST --header 'Version: 4' --header 'Content-Type: application/xml' --header 'Accept: application/xml' --data ' <storage_domain> <name>myisos</name> <description>My ISOs</description> <type>iso</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/exports/myisos</path> </storage> <host> <name>myhost</name> </host> </storage_domain> ' https://myengine.example.com/ovirt-engine/api/storagedomains", "<storage_domain href=\"/ovirt-engine/api/storagedomains/006\" id=\"006\"> <name>myiso</name> <description>My ISOs</description> 
<available>42949672960</available> <committed>0</committed> <master>false</master> <status>unattached</status> <storage> <address>mynfs.example.com</address> <path>/exports/myisos</path> <type>nfs</type> </storage> <storage_format>v1</storage_format> <type>iso</type> <used>9663676416</used> </storage_domain>", "POST /ovirt-engine/api/datacenters/001/storagedomains HTTP/1.1 Accept: application/xml Content-type: application/xml", "<storage_domain> <name>mydata</name> </storage_domain>", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --user 'admin@internal:mypassword' --request POST --header 'Version: 4' --header 'Content-Type: application/xml' --header 'Accept: application/xml' --data ' <storage_domain> <name>mydata</name> </storage_domain> ' https://myengine.example.com/ovirt-engine/api/datacenters/001/storagedomains", "POST /ovirt-engine/api/datacenters/001/storagedomains HTTP/1.1 Accept: application/xml Content-type: application/xml", "<storage_domain> <name>myisos</name> </storage_domain>", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --user 'admin@internal:mypassword' --request POST --header 'Version: 4' --header 'Content-Type: application/xml' --header 'Accept: application/xml' --data ' <storage_domain> <name>myisos</name> </storage_domain> ' https://myengine.example.com/ovirt-engine/api/datacenters/001/storagedomains", "POST /ovirt-engine/api/vms HTTP/1.1 Accept: application/xml Content-type: application/xml", "<vm> <name>myvm</name> <description>My VM</description> <cluster> <name>Default</name> </cluster> <template> <name>Blank</name> </template> <memory>536870912</memory> <os> <boot> <devices> <device>hd</device> </devices> </boot> </os> </vm>", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --user 'admin@internal:mypassword' --request POST --header 'Version: 4' --header 'Content-Type: application/xml' --header 'Accept: application/xml' --data ' <vm> <name>myvm</name> <description>My VM</description> <cluster> <name>Default</name> </cluster> <template> <name>Blank</name> </template> <memory>536870912</memory> <os> <boot> <devices> <device>hd</device> </devices> </boot> </os> </vm> ' https://myengine.example.com/ovirt-engine/api/vms", "<vm href=\"/ovirt-engine/api/vms/007\" id=\"007\"> <name>myvm</name> <link href=\"/ovirt-engine/api/vms/007/diskattachments\" rel=\"diskattachments\"/> <link href=\"/ovirt-engine/api/vms/007/nics\" rel=\"nics\"/> <cpu> <architecture>x86_64</architecture> <topology> <cores>1</cores> <sockets>1</sockets> <threads>1</threads> </topology> </cpu> <memory>1073741824</memory> <os> <boot> <devices> <device>hd</device> </devices> </boot> <type>other</type> </os> <type>desktop</type> <cluster href=\"/ovirt-engine/api/clusters/002\" id=\"002\"/> <status>down</status> <original_template href=\"/ovirt-engine/api/templates/000\" id=\"00\"/> <template href=\"/ovirt-engine/api/templates/000\" id=\"000\"/> </vm>", "POST /ovirt-engine/api/vms/007/nics HTTP/1.1 Content-Type: application/xml Accept: application/xml", "<nic> <name>mynic</name> <description>My network interface card</description> </nic>", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --user 'admin@internal:mypassword' --request POST --header 'Version: 4' --header 'Content-Type: application/xml' --header 'Accept: application/xml' --data ' <nic> <name>mynic</name> <description>My network interface card</description> </nic> ' https://myengine.example.com/ovirt-engine/api/vms/007/nics", "POST /ovirt-engine/api/vms/007/diskattachments HTTP/1.1 Content-Type: application/xml Accept: application/xml", 
"<disk_attachment> <bootable>false</bootable> <interface>virtio</interface> <active>true</active> <disk> <description>My disk</description> <format>cow</format> <name>mydisk</name> <provisioned_size>8589934592</provisioned_size> <storage_domains> <storage_domain> <name>mydata</name> </storage_domain> </storage_domains> </disk> </disk_attachment>", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --user 'admin@internal:mypassword' --request POST --header 'Version: 4' --header 'Content-Type: application/xml' --header 'Accept: application/xml' --data ' <disk_attachment> <bootable>false</bootable> <interface>virtio</interface> <active>true</active> <disk> <description>My disk</description> <format>cow</format> <name>mydisk</name> <provisioned_size>8589934592</provisioned_size> <storage_domains> <storage_domain> <name>mydata</name> </storage_domain> </storage_domains> </disk> </disk_attachment> ' https://myengine.example.com/ovirt-engine/api/vms/007/diskattachments", "GET /ovirt-engine/api/storagedomains/006/files HTTP/1.1 Accept: application/xml", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --user 'admin@internal:mypassword' --request GET --header 'Version: 4' --header 'Accept: application/xml' https://myengine.example.com/ovirt-engine/api/storagedomains/006/files", "<files> <file href=\"...\" id=\"CentOS-7-x86_64-Minimal.iso\"> <name>CentOS-7-x86_64-Minimal.iso</name> </file> </files>", "PUT /ovirt-engine/api/vms/007/cdroms/00000000-0000-0000-0000-000000000000 HTTP/1.1 Accept: application/xml Content-type: application/xml", "<cdrom> <file id=\"CentOS-7-x86_64-Minimal.iso\"/> </cdrom>", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --user 'admin@internal:mypassword' --request PUT --header 'Version: 4' --header 'Content-Type: application/xml' --header 'Accept: application/xml' --data ' <cdrom> <file id=\"CentOS-7-x86_64-Minimal.iso\"/> </cdrom> ' https://myengine.example.com/ovirt-engine/api/vms/007/cdroms/00000000-0000-0000-0000-000000000000", "POST /ovirt-engine/api/vms/007/start HTTP/1.1 Accept: application/xml Content-type: application/xml", "<action> <vm> <os> <boot> <devices> <device>cdrom</device> </devices> </boot> </os> </vm> </action>", "curl --cacert '/etc/pki/ovirt-engine/ca.pem' --user 'admin@internal:mypassword' --request POST --header 'Version: 4' --header 'Content-Type: application/xml' --header 'Accept: application/xml' --data ' <action> <vm> <os> <boot> <devices> <device>cdrom</device> </devices> </boot> </os> </vm> </action> ' https://myengine.example.com/ovirt-engine/api/vms/007/start" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/rest_api_guide/documents-004_quick_start_example
20.16. Discarding Blocks Not in Use
20.16. Discarding Blocks Not in Use The virsh domfstrim domain [--minimum bytes ] [--mountpoint mountPoint ] command invokes the fstrim utility on all mounted file systems within a specified running guest virtual machine. This discards blocks not in use by the file system. If the --minimum argument is used, an amount in bytes must be specified. This amount is sent to the guest kernel as the minimum length of a contiguous free range; free ranges smaller than this amount may be ignored. The default minimum is zero, which means that every free block is discarded. If you increase this value to greater than zero, the fstrim operation completes more quickly on file systems with badly fragmented free space, although not all free blocks are discarded. To trim only one specific mount point, use the --mountpoint argument and specify the mount point. Example 20.38. How to discard blocks not in use The following example trims the file system running on the guest virtual machine named guest1 : # virsh domfstrim guest1 --minimum 0
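A brief sketch of the --mountpoint form described above; guest1 is the guest from the example, while the /home mount point is a hypothetical value chosen for illustration:
# virsh domfstrim guest1 --mountpoint /home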
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-domain_commands-discarding_blocks_not_in_use
Preface
Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/installing_and_deploying_service_registry_on_openshift/pr01
Chapter 4. Resolved issues
Chapter 4. Resolved issues There are no resolved issues for this release. For details of any security fixes in this release, see the errata links in Advisories related to this release .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_5_release_notes/resolved_issues
Nodes
Nodes OpenShift Container Platform 4.11 Configuring and managing nodes in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/nodes/index
Chapter 17. Security
Chapter 17. Security TPM TPM (Trusted Platform Module) hardware can create, store, and use RSA keys securely (without the keys ever being exposed in memory), verify a platform's software state using cryptographic hashes, and more. The trousers and tpm-tools packages are considered a Technology Preview. Packages: trousers , tpm-tools
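A minimal sketch of trying the Technology Preview packages on a Red Hat Enterprise Linux 6 system, assuming the TPM is enabled in the machine firmware; the package and service names come from the note above, and tpm_version is one representative tpm-tools command:
# yum install trousers tpm-tools
# service tcsd start
# tpm_version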
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/chap-red_hat_enterprise_linux-6.10_technical_notes-technology_previews-security
Chapter 9. Hosts
Chapter 9. Hosts Table 9.1. Hosts Subcommand Description and tasks hostgroup org loc Create a host group: Add an activation key to a host group: host org loc Create a host (inheriting parameters from a host group): job-template Add a job template for remote execution: job-invocation Invoke a remote job: Monitor the remote job:
[ "hammer hostgroup create --name hg_name --puppet-environment env_name --architecture arch_name --domain domain_name --subnet subnet_name --puppet-proxy proxy_name --puppet-ca-proxy ca-proxy_name --operatingsystem os_name --partition-table table_name --medium medium_name --organization-ids org_ID1,... --location-ids loc_ID1,", "hammer hostgroup set-parameter --hostgroup \"hg_name\" --name \"kt_activation_keys\" --value key_name", "hammer host create --name host_name --hostgroup hg_name --interface=\"primary=true, mac= mac_addr , ip= ip_addr , provision=true\" --organization-id org_ID --location-id loc_ID --ask-root-password yes", "hammer job-template create --file path --name template_name --provider-type SSH --job-category category_name", "hammer job-invocation create --job-template template_name --inputs key1= value,... --search-query query", "hammer job-invocation output --id job_id --host host_name" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/hammer_cheat_sheet/hosts-1
Chapter 18. Red Hat Enterprise Linux for Real Time
Chapter 18. Red Hat Enterprise Linux for Real Time Red Hat Enterprise Linux for Real Time is a new offering in Red Hat Enterprise Linux 7.1 comprised of a special kernel build and several user space utilities. With this kernel and appropriate system configuration, Red Hat Enterprise Linux for Real Time brings deterministic workloads, which allow users to rely on consistent response times and low and predictable latency. These capabilities are critical in strategic industries such as financial service marketplaces, telecommunications, or medical research. For instructions on how to install Red Hat Enterprise Linux for Real Time, and how to set up and tune the system so that you can take full advantage of this offering, refer to the Red Hat Enterprise Linux for Real Time 7 Installation Guide .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_red_hat_enterprise_linux_for_real_time
Chapter 21. Network [operator.openshift.io/v1]
Chapter 21. Network [operator.openshift.io/v1] Description Network describes the cluster's desired network configuration. It is consumed by the cluster-network-operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 21.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NetworkSpec is the top-level network configuration object. status object NetworkStatus is detailed operator status, which is distilled up to the Network clusteroperator object. 21.1.1. .spec Description NetworkSpec is the top-level network configuration object. Type object Property Type Description additionalNetworks array additionalNetworks is a list of extra networks to make available to pods when multiple networks are enabled. additionalNetworks[] object AdditionalNetworkDefinition configures an extra network that is available but not created by default. Instead, pods must request them by name. type must be specified, along with exactly one "Config" that matches the type. clusterNetwork array clusterNetwork is the IP address pool to use for pod IPs. Some network providers, e.g. OpenShift SDN, support multiple ClusterNetworks. Others only support one. This is equivalent to the cluster-cidr. clusterNetwork[] object ClusterNetworkEntry is a subnet from which to allocate PodIPs. A network of size HostPrefix (in CIDR notation) will be allocated when nodes join the cluster. If the HostPrefix field is not used by the plugin, it can be left unset. Not all network providers support multiple ClusterNetworks defaultNetwork object defaultNetwork is the "default" network that all pods will receive deployKubeProxy boolean deployKubeProxy specifies whether or not a standalone kube-proxy should be deployed by the operator. Some network providers include kube-proxy or similar functionality. If unset, the plugin will attempt to select the correct value, which is false when OpenShift SDN and ovn-kubernetes are used and true otherwise. disableMultiNetwork boolean disableMultiNetwork specifies whether or not multiple pod network support should be disabled. If unset, this property defaults to 'false' and multiple network support is enabled. disableNetworkDiagnostics boolean disableNetworkDiagnostics specifies whether or not PodNetworkConnectivityCheck CRs from a test pod to every node, apiserver and LB should be disabled or not. If unset, this property defaults to 'false' and network diagnostics is enabled. Setting this to 'true' would reduce the additional load of the pods performing the checks. exportNetworkFlows object exportNetworkFlows enables and configures the export of network flow metadata from the pod network by using protocols NetFlow, SFlow or IPFIX. 
Currently only supported on OVN-Kubernetes plugin. If unset, flows will not be exported to any collector. kubeProxyConfig object kubeProxyConfig lets us configure desired proxy configuration. If not specified, sensible defaults will be chosen by OpenShift directly. Not consumed by all network providers - currently only openshift-sdn. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component migration object migration enables and configures the cluster network migration. The migration procedure allows to change the network type and the MTU. observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". serviceNetwork array (string) serviceNetwork is the ip address pool to use for Service IPs Currently, all existing network providers only support a single value here, but this is an array to allow for growth. unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides useMultiNetworkPolicy boolean useMultiNetworkPolicy enables a controller which allows for MultiNetworkPolicy objects to be used on additional networks as created by Multus CNI. MultiNetworkPolicy are similar to NetworkPolicy objects, but NetworkPolicy objects only apply to the primary interface. With MultiNetworkPolicy, you can control the traffic that a pod can receive over the secondary interfaces. If unset, this property defaults to 'false' and MultiNetworkPolicy objects are ignored. If 'disableMultiNetwork' is 'true' then the value of this field is ignored. 21.1.2. .spec.additionalNetworks Description additionalNetworks is a list of extra networks to make available to pods when multiple networks are enabled. Type array 21.1.3. .spec.additionalNetworks[] Description AdditionalNetworkDefinition configures an extra network that is available but not created by default. Instead, pods must request them by name. type must be specified, along with exactly one "Config" that matches the type. Type object Property Type Description name string name is the name of the network. This will be populated in the resulting CRD This must be unique. namespace string namespace is the namespace of the network. This will be populated in the resulting CRD If not given the network will be created in the default namespace. 
rawCNIConfig string rawCNIConfig is the raw CNI configuration json to create in the NetworkAttachmentDefinition CRD simpleMacvlanConfig object SimpleMacvlanConfig configures the macvlan interface in case of type:NetworkTypeSimpleMacvlan type string type is the type of network The supported values are NetworkTypeRaw, NetworkTypeSimpleMacvlan 21.1.4. .spec.additionalNetworks[].simpleMacvlanConfig Description SimpleMacvlanConfig configures the macvlan interface in case of type:NetworkTypeSimpleMacvlan Type object Property Type Description ipamConfig object IPAMConfig configures IPAM module will be used for IP Address Management (IPAM). master string master is the host interface to create the macvlan interface from. If not specified, it will be default route interface mode string mode is the macvlan mode: bridge, private, vepa, passthru. The default is bridge mtu integer mtu is the mtu to use for the macvlan interface. if unset, host's kernel will select the value. 21.1.5. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig Description IPAMConfig configures IPAM module will be used for IP Address Management (IPAM). Type object Property Type Description staticIPAMConfig object StaticIPAMConfig configures the static IP address in case of type:IPAMTypeStatic type string Type is the type of IPAM module will be used for IP Address Management(IPAM). The supported values are IPAMTypeDHCP, IPAMTypeStatic 21.1.6. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig Description StaticIPAMConfig configures the static IP address in case of type:IPAMTypeStatic Type object Property Type Description addresses array Addresses configures IP address for the interface addresses[] object StaticIPAMAddresses provides IP address and Gateway for static IPAM addresses dns object DNS configures DNS for the interface routes array Routes configures IP routes for the interface routes[] object StaticIPAMRoutes provides Destination/Gateway pairs for static IPAM routes 21.1.7. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig.addresses Description Addresses configures IP address for the interface Type array 21.1.8. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig.addresses[] Description StaticIPAMAddresses provides IP address and Gateway for static IPAM addresses Type object Property Type Description address string Address is the IP address in CIDR format gateway string Gateway is IP inside of subnet to designate as the gateway 21.1.9. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig.dns Description DNS configures DNS for the interface Type object Property Type Description domain string Domain configures the domainname the local domain used for short hostname lookups nameservers array (string) Nameservers points DNS servers for IP lookup search array (string) Search configures priority ordered search domains for short hostname lookups 21.1.10. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig.routes Description Routes configures IP routes for the interface Type array 21.1.11. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig.routes[] Description StaticIPAMRoutes provides Destination/Gateway pairs for static IPAM routes Type object Property Type Description destination string Destination points the IP route destination gateway string Gateway is the route's -hop IP address If unset, a default gateway is assumed (as determined by the CNI plugin). 21.1.12. 
.spec.clusterNetwork Description clusterNetwork is the IP address pool to use for pod IPs. Some network providers, e.g. OpenShift SDN, support multiple ClusterNetworks. Others only support one. This is equivalent to the cluster-cidr. Type array 21.1.13. .spec.clusterNetwork[] Description ClusterNetworkEntry is a subnet from which to allocate PodIPs. A network of size HostPrefix (in CIDR notation) will be allocated when nodes join the cluster. If the HostPrefix field is not used by the plugin, it can be left unset. Not all network providers support multiple ClusterNetworks Type object Property Type Description cidr string hostPrefix integer 21.1.14. .spec.defaultNetwork Description defaultNetwork is the "default" network that all pods will receive Type object Property Type Description kuryrConfig object KuryrConfig configures the kuryr plugin openshiftSDNConfig object openShiftSDNConfig configures the openshift-sdn plugin ovnKubernetesConfig object ovnKubernetesConfig configures the ovn-kubernetes plugin. type string type is the type of network All NetworkTypes are supported except for NetworkTypeRaw 21.1.15. .spec.defaultNetwork.kuryrConfig Description KuryrConfig configures the kuryr plugin Type object Property Type Description controllerProbesPort integer The port kuryr-controller will listen for readiness and liveness requests. daemonProbesPort integer The port kuryr-daemon will listen for readiness and liveness requests. enablePortPoolsPrepopulation boolean enablePortPoolsPrepopulation when true will make Kuryr prepopulate each newly created port pool with a minimum number of ports. Kuryr uses Neutron port pooling to fight the fact that it takes a significant amount of time to create one. It creates a number of ports when the first pod that is configured to use the dedicated network for pods is created in a namespace, and keeps them ready to be attached to pods. Port prepopulation is disabled by default. mtu integer mtu is the MTU that Kuryr should use when creating pod networks in Neutron. The value has to be lower or equal to the MTU of the nodes network and Neutron has to allow creation of tenant networks with such MTU. If unset Pod networks will be created with the same MTU as the nodes network has. This also affects the services network created by cluster-network-operator. openStackServiceNetwork string openStackServiceNetwork contains the CIDR of network from which to allocate IPs for OpenStack Octavia's Amphora VMs. Please note that with Amphora driver Octavia uses two IPs from that network for each loadbalancer - one given by OpenShift and second for VRRP connections. As the first one is managed by OpenShift's and second by Neutron's IPAMs, those need to come from different pools. Therefore openStackServiceNetwork needs to be at least twice the size of serviceNetwork , and whole serviceNetwork must be overlapping with openStackServiceNetwork . cluster-network-operator will then make sure VRRP IPs are taken from the ranges inside openStackServiceNetwork that are not overlapping with serviceNetwork , effectivly preventing conflicts. If not set cluster-network-operator will use serviceNetwork expanded by decrementing the prefix size by 1. poolBatchPorts integer poolBatchPorts sets a number of ports that should be created in a single batch request to extend the port pool. The default is 3. For more information about port pools see enablePortPoolsPrepopulation setting. poolMaxPorts integer poolMaxPorts sets a maximum number of free ports that are being kept in a port pool. 
If the number of ports exceeds this setting, free ports will get deleted. Setting 0 will disable this upper bound, effectively preventing pools from shrinking and this is the default value. For more information about port pools see enablePortPoolsPrepopulation setting. poolMinPorts integer poolMinPorts sets a minimum number of free ports that should be kept in a port pool. If the number of ports is lower than this setting, new ports will get created and added to pool. The default is 1. For more information about port pools see enablePortPoolsPrepopulation setting. 21.1.16. .spec.defaultNetwork.openshiftSDNConfig Description openShiftSDNConfig configures the openshift-sdn plugin Type object Property Type Description enableUnidling boolean enableUnidling controls whether or not the service proxy will support idling and unidling of services. By default, unidling is enabled. mode string mode is one of "Multitenant", "Subnet", or "NetworkPolicy" mtu integer mtu is the mtu to use for the tunnel interface. Defaults to 1450 if unset. This must be 50 bytes smaller than the machine's uplink. useExternalOpenvswitch boolean useExternalOpenvswitch used to control whether the operator would deploy an OVS DaemonSet itself or expect someone else to start OVS. As of 4.6, OVS is always run as a system service, and this flag is ignored. DEPRECATED: non-functional as of 4.6 vxlanPort integer vxlanPort is the port to use for all vxlan packets. The default is 4789. 21.1.17. .spec.defaultNetwork.ovnKubernetesConfig Description ovnKubernetesConfig configures the ovn-kubernetes plugin. Type object Property Type Description egressIPConfig object egressIPConfig holds the configuration for EgressIP options. gatewayConfig object gatewayConfig holds the configuration for node gateway options. genevePort integer geneve port is the UDP port to be used by geneve encapulation. Default is 6081 hybridOverlayConfig object HybridOverlayConfig configures an additional overlay network for peers that are not using OVN. ipsecConfig object ipsecConfig enables and configures IPsec for pods on the pod network within the cluster. mtu integer mtu is the MTU to use for the tunnel interface. This must be 100 bytes smaller than the uplink mtu. Default is 1400 policyAuditConfig object policyAuditConfig is the configuration for network policy audit events. If unset, reported defaults are used. v4InternalSubnet string v4InternalSubnet is a v4 subnet used internally by ovn-kubernetes in case the default one is being already used by something else. It must not overlap with any other subnet being used by OpenShift or by the node network. The size of the subnet must be larger than the number of nodes. The value cannot be changed after installation. Default is 100.64.0.0/16 v6InternalSubnet string v6InternalSubnet is a v6 subnet used internally by ovn-kubernetes in case the default one is being already used by something else. It must not overlap with any other subnet being used by OpenShift or by the node network. The size of the subnet must be larger than the number of nodes. The value cannot be changed after installation. Default is fd98::/48 21.1.18. .spec.defaultNetwork.ovnKubernetesConfig.egressIPConfig Description egressIPConfig holds the configuration for EgressIP options. Type object Property Type Description reachabilityTotalTimeoutSeconds integer reachabilityTotalTimeout configures the EgressIP node reachability check total timeout in seconds. If the EgressIP node cannot be reached within this timeout, the node is declared down. 
Setting a large value may cause the EgressIP feature to react slowly to node changes. In particular, it may react slowly for EgressIP nodes that really have a genuine problem and are unreachable. When omitted, this means the user has no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is 1 second. A value of 0 disables the EgressIP node's reachability check. 21.1.19. .spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig Description gatewayConfig holds the configuration for node gateway options. Type object Property Type Description routingViaHost boolean RoutingViaHost allows pod egress traffic to exit via the ovn-k8s-mp0 management port into the host before sending it out. If this is not set, traffic will always egress directly from OVN to outside without touching the host stack. Setting this to true means hardware offload will not be supported. Default is false if GatewayConfig is specified. 21.1.20. .spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig Description HybridOverlayConfig configures an additional overlay network for peers that are not using OVN. Type object Property Type Description hybridClusterNetwork array HybridClusterNetwork defines a network space given to nodes on an additional overlay network. hybridClusterNetwork[] object ClusterNetworkEntry is a subnet from which to allocate PodIPs. A network of size HostPrefix (in CIDR notation) will be allocated when nodes join the cluster. If the HostPrefix field is not used by the plugin, it can be left unset. Not all network providers support multiple ClusterNetworks hybridOverlayVXLANPort integer HybridOverlayVXLANPort defines the VXLAN port number to be used by the additional overlay network. Default is 4789 21.1.21. .spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig.hybridClusterNetwork Description HybridClusterNetwork defines a network space given to nodes on an additional overlay network. Type array 21.1.22. .spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig.hybridClusterNetwork[] Description ClusterNetworkEntry is a subnet from which to allocate PodIPs. A network of size HostPrefix (in CIDR notation) will be allocated when nodes join the cluster. If the HostPrefix field is not used by the plugin, it can be left unset. Not all network providers support multiple ClusterNetworks Type object Property Type Description cidr string hostPrefix integer 21.1.23. .spec.defaultNetwork.ovnKubernetesConfig.ipsecConfig Description ipsecConfig enables and configures IPsec for pods on the pod network within the cluster. Type object 21.1.24. .spec.defaultNetwork.ovnKubernetesConfig.policyAuditConfig Description policyAuditConfig is the configuration for network policy audit events. If unset, reported defaults are used. Type object Property Type Description destination string destination is the location for policy log messages. Regardless of this config, persistent logs will always be dumped to the host at /var/log/ovn/ however Additionally syslog output may be configured as follows. 
Valid values are: - "libc" to use the libc syslog() function of the host node's journdald process - "udp:host:port" for sending syslog over UDP - "unix:file" for using the UNIX domain socket directly - "null" to discard all messages logged to syslog The default is "null" maxFileSize integer maxFilesSize is the max size an ACL_audit log file is allowed to reach before rotation occurs Units are in MB and the Default is 50MB maxLogFiles integer maxLogFiles specifies the maximum number of ACL_audit log files that can be present. rateLimit integer rateLimit is the approximate maximum number of messages to generate per-second per-node. If unset the default of 20 msg/sec is used. syslogFacility string syslogFacility the RFC5424 facility for generated messages, e.g. "kern". Default is "local0" 21.1.25. .spec.exportNetworkFlows Description exportNetworkFlows enables and configures the export of network flow metadata from the pod network by using protocols NetFlow, SFlow or IPFIX. Currently only supported on OVN-Kubernetes plugin. If unset, flows will not be exported to any collector. Type object Property Type Description ipfix object ipfix defines IPFIX configuration. netFlow object netFlow defines the NetFlow configuration. sFlow object sFlow defines the SFlow configuration. 21.1.26. .spec.exportNetworkFlows.ipfix Description ipfix defines IPFIX configuration. Type object Property Type Description collectors array (string) ipfixCollectors is list of strings formatted as ip:port with a maximum of ten items 21.1.27. .spec.exportNetworkFlows.netFlow Description netFlow defines the NetFlow configuration. Type object Property Type Description collectors array (string) netFlow defines the NetFlow collectors that will consume the flow data exported from OVS. It is a list of strings formatted as ip:port with a maximum of ten items 21.1.28. .spec.exportNetworkFlows.sFlow Description sFlow defines the SFlow configuration. Type object Property Type Description collectors array (string) sFlowCollectors is list of strings formatted as ip:port with a maximum of ten items 21.1.29. .spec.kubeProxyConfig Description kubeProxyConfig lets us configure desired proxy configuration. If not specified, sensible defaults will be chosen by OpenShift directly. Not consumed by all network providers - currently only openshift-sdn. Type object Property Type Description bindAddress string The address to "bind" on Defaults to 0.0.0.0 iptablesSyncPeriod string An internal kube-proxy parameter. In older releases of OCP, this sometimes needed to be adjusted in large clusters for performance reasons, but this is no longer necessary, and there is no reason to change this from the default value. Default: 30s proxyArguments object Any additional arguments to pass to the kubeproxy process proxyArguments{} array (string) ProxyArgumentList is a list of arguments to pass to the kubeproxy process 21.1.30. .spec.kubeProxyConfig.proxyArguments Description Any additional arguments to pass to the kubeproxy process Type object 21.1.31. .spec.migration Description migration enables and configures the cluster network migration. The migration procedure allows to change the network type and the MTU. Type object Property Type Description features object features contains the features migration configuration. Set this to migrate feature configuration when changing the cluster default network provider. if unset, the default operation is to migrate all the configuration of supported features. mtu object mtu contains the MTU migration configuration. 
Set this to allow changing the MTU values for the default network. If unset, the operation of changing the MTU for the default network will be rejected. networkType string networkType is the target type of network migration. Set this to the target network type to allow changing the default network. If unset, the operation of changing cluster default network plugin will be rejected. The supported values are OpenShiftSDN, OVNKubernetes 21.1.32. .spec.migration.features Description features contains the features migration configuration. Set this to migrate feature configuration when changing the cluster default network provider. if unset, the default operation is to migrate all the configuration of supported features. Type object Property Type Description egressFirewall boolean egressFirewall specifies whether or not the Egress Firewall configuration is migrated automatically when changing the cluster default network provider. If unset, this property defaults to 'true' and Egress Firewall configure is migrated. egressIP boolean egressIP specifies whether or not the Egress IP configuration is migrated automatically when changing the cluster default network provider. If unset, this property defaults to 'true' and Egress IP configure is migrated. multicast boolean multicast specifies whether or not the multicast configuration is migrated automatically when changing the cluster default network provider. If unset, this property defaults to 'true' and multicast configure is migrated. 21.1.33. .spec.migration.mtu Description mtu contains the MTU migration configuration. Set this to allow changing the MTU values for the default network. If unset, the operation of changing the MTU for the default network will be rejected. Type object Property Type Description machine object machine contains MTU migration configuration for the machine's uplink. Needs to be migrated along with the default network MTU unless the current uplink MTU already accommodates the default network MTU. network object network contains information about MTU migration for the default network. Migrations are only allowed to MTU values lower than the machine's uplink MTU by the minimum appropriate offset. 21.1.34. .spec.migration.mtu.machine Description machine contains MTU migration configuration for the machine's uplink. Needs to be migrated along with the default network MTU unless the current uplink MTU already accommodates the default network MTU. Type object Property Type Description from integer from is the MTU to migrate from. to integer to is the MTU to migrate to. 21.1.35. .spec.migration.mtu.network Description network contains information about MTU migration for the default network. Migrations are only allowed to MTU values lower than the machine's uplink MTU by the minimum appropriate offset. Type object Property Type Description from integer from is the MTU to migrate from. to integer to is the MTU to migrate to. 21.1.36. .status Description NetworkStatus is detailed operator status, which is distilled up to the Network clusteroperator object. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. 
observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 21.1.37. .status.conditions Description conditions is a list of conditions and their status Type array 21.1.38. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 21.1.39. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 21.1.40. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 21.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/networks DELETE : delete collection of Network GET : list objects of kind Network POST : create a Network /apis/operator.openshift.io/v1/networks/{name} DELETE : delete a Network GET : read the specified Network PATCH : partially update the specified Network PUT : replace the specified Network 21.2.1. /apis/operator.openshift.io/v1/networks Table 21.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Network Table 21.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 21.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Network Table 21.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 21.5. HTTP responses HTTP code Reponse body 200 - OK NetworkList schema 401 - Unauthorized Empty HTTP method POST Description create a Network Table 21.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.7. Body parameters Parameter Type Description body Network schema Table 21.8. HTTP responses HTTP code Reponse body 200 - OK Network schema 201 - Created Network schema 202 - Accepted Network schema 401 - Unauthorized Empty 21.2.2. /apis/operator.openshift.io/v1/networks/{name} Table 21.9. Global path parameters Parameter Type Description name string name of the Network Table 21.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Network Table 21.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. 
Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 21.12. Body parameters Parameter Type Description body DeleteOptions schema Table 21.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Network Table 21.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 21.15. HTTP responses HTTP code Reponse body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Network Table 21.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.17. Body parameters Parameter Type Description body Patch schema Table 21.18. HTTP responses HTTP code Reponse body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Network Table 21.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.20. Body parameters Parameter Type Description body Network schema Table 21.21. HTTP responses HTTP code Reponse body 200 - OK Network schema 201 - Created Network schema 401 - Unauthorized Empty
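For orientation, a hedged sketch of what a Network custom resource using the fields described in this chapter might look like. The object name cluster is the conventional singleton managed by the cluster-network-operator, and the CIDRs, MTU, collector address, and other values below are illustrative assumptions rather than recommended settings:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 1400
      genevePort: 6081
      gatewayConfig:
        routingViaHost: false
  exportNetworkFlows:
    netFlow:
      collectors:
      - 192.0.2.10:2056
  disableMultiNetwork: false
  logLevel: Normal
On a running cluster, the same object can be inspected with, for example, oc get networks.operator.openshift.io cluster -o yaml, which corresponds to the GET endpoint listed in Section 21.2.2.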
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/network-operator-openshift-io-v1
5.4.3.3. Repairing a Mirrored Logical Device
5.4.3.3. Repairing a Mirrored Logical Device You can use the lvconvert --repair command to repair a mirror after a disk failure. This brings the mirror back into a consistent state. The lvconvert --repair command is an interactive command that prompts you to indicate whether you want the system to attempt to replace any failed devices. To skip the prompts and replace all of the failed devices, specify the -y option on the command line. To skip the prompts and replace none of the failed devices, specify the -f option on the command line. To skip the prompts and still indicate different replacement policies for the mirror image and the mirror log, you can specify the --use-policies argument to use the device replacement policies specified by the mirror_log_fault_policy and mirror_device_fault_policy parameters in the lvm.conf file.
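A few usage sketches of the options described above; the volume group and logical volume names ( vg00 / mirrorlv ) are hypothetical. To be prompted before each replacement: # lvconvert --repair vg00/mirrorlv To replace all failed devices without prompting: # lvconvert --repair -y vg00/mirrorlv To apply the replacement policies configured in lvm.conf: # lvconvert --repair --use-policies vg00/mirrorlv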
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/mirror_repair
Chapter 2. Uploading current system data to Insights
Chapter 2. Uploading current system data to Insights Whether you are using the compliance service to view system compliance status, remediate issues, or report status to stakeholders, upload current data from your systems to see the most up-to-date information. Procedure Run the following command on each system to upload current data to Insights for Red Hat Enterprise Linux: [root@server ~]# insights-client --compliance
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports_with_fedramp/assembly-compl-uploading-current-data-systems
Installing and using Red Hat build of OpenJDK 17 for Windows
Installing and using Red Hat build of OpenJDK 17 for Windows Red Hat build of OpenJDK 17 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/installing_and_using_red_hat_build_of_openjdk_17_for_windows/index
Installing on GCP
Installing on GCP OpenShift Container Platform 4.14 Installing OpenShift Container Platform on Google Cloud Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_gcp/index
Chapter 5. Building an OSGi Bundle
Chapter 5. Building an OSGi Bundle Abstract This chapter describes how to build an OSGi bundle using Maven. For building bundles, the Maven bundle plug-in plays a key role, because it enables you to automate the generation of OSGi bundle headers (which would otherwise be a tedious task). Maven archetypes, which generate a complete sample project, can also provide a starting point for your bundle projects. 5.1. Generating a Bundle Project 5.1.1. Generating bundle projects with Maven archetypes To help you get started quickly, you can invoke a Maven archetype to generate the initial outline of a Maven project (a Maven archetype is analogous to a project wizard). The following Maven archetype generates a project for building OSGi bundles. 5.1.2. Apache Camel archetype The Apache Camel OSGi archetype creates a project for building a route that can be deployed into the OSGi container. To generate a camel-blueprint project, run the Maven archetype command, specifying the archetype coordinates in the form GroupId : ArtifactId : Version . After running this command, Maven prompts you to specify the GroupId , ArtifactId , and Version . 5.1.3. Building the bundle By default, the preceding archetypes create a project in a new directory, whose name is the same as the specified artifact ID, ArtifactId . To build the bundle defined by the new project, open a command prompt, go to the project directory (that is, the directory containing the pom.xml file), and run the Maven install command ( mvn install ). The effect of this command is to compile all of the Java source files, to generate a bundle JAR under the ArtifactId /target directory, and then to install the generated JAR in the local Maven repository. 5.2. Modifying an Existing Maven Project 5.2.1. Overview If you already have a Maven project and you want to modify it so that it generates an OSGi bundle, perform the following steps: Section 5.2.2, "Change the package type to bundle" . Section 5.2.3, "Add the bundle plug-in to your POM" . Section 5.2.4, "Customize the bundle plug-in" . Section 5.2.5, "Customize the JDK compiler version" . 5.2.2. Change the package type to bundle Configure Maven to generate an OSGi bundle by changing the package type to bundle in your project's pom.xml file. Change the contents of the packaging element to bundle , as shown in the POM sketch at the end of Section 5.2. The effect of this setting is to select the Maven bundle plug-in, maven-bundle-plugin , to perform packaging for this project. This setting on its own, however, has no effect until you explicitly add the bundle plug-in to your POM. 5.2.3. Add the bundle plug-in to your POM To add the Maven bundle plug-in, add a plugin element for maven-bundle-plugin to the project/build/plugins section of your project's pom.xml file, as shown in the POM sketch at the end of Section 5.2. The bundle plug-in is configured by the settings in its instructions element. 5.2.4. Customize the bundle plug-in For some specific recommendations on configuring the bundle plug-in for Apache CXF, see Section 5.3, "Packaging a Web Service in a Bundle" . 5.2.5. Customize the JDK compiler version It is almost always necessary to specify the JDK version in your POM file. If your code uses any modern features of the Java language, such as generics, static imports, and so on, and you have not customized the JDK version in the POM, Maven will fail to compile your source code. It is not sufficient to set the JAVA_HOME and the PATH environment variables to the correct values for your JDK; you must also modify the POM file.
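The POM changes described in Section 5.2 can be sketched as follows. This is a minimal illustration rather than the exact listing from the original manual: the plug-in versions are left to Maven's defaults, and the Java 1.8 source and target levels are assumptions to adjust for your environment.
<!-- In the project root, change the packaging type: -->
<packaging>bundle</packaging>
<!-- In the project/build/plugins section, add the bundle plug-in and, if needed, the compiler plug-in: -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- OSGi headers (Import-Package, Export-Package, and so on) are generated from this element -->
    </instructions>
  </configuration>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <source>1.8</source>
    <target>1.8</target>
  </configuration>
</plugin>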
To configure your POM file so that it accepts the Java language features introduced in JDK 1.8, add the following maven-compiler-plugin plug-in settings to your POM (if they are not already present): 5.3. Packaging a Web Service in a Bundle 5.3.1. Overview This section explains how to modify an existing Maven project for an Apache CXF application, so that the project generates an OSGi bundle suitable for deployment in the Red Hat Fuse OSGi container. To convert the Maven project, you need to modify the project's POM file and the project's Blueprint file(s) (located in META-INF/spring ). 5.3.2. Modifying the POM file to generate a bundle To configure a Maven POM file to generate a bundle, there are essentially two changes you need to make: change the POM's package type to bundle ; and add the Maven bundle plug-in to your POM. For details, see Section 5.1, "Generating a Bundle Project" . 5.3.3. Mandatory import packages In order for your application to use the Apache CXF components, you need to import their packages into the application's bundle. Because of the complex nature of the dependencies in Apache CXF, you cannot rely on the Maven bundle plug-in, or the bnd tool, to automatically determine the needed imports. You will need to explicitly declare them. You need to import the following packages into your bundle: 5.3.4. Sample Maven bundle plug-in instructions Example 5.1, "Configuration of Mandatory Import Packages" shows how to configure the Maven bundle plug-in in your POM to import the mandatory packages. The mandatory import packages appear as a comma-separated list inside the Import-Package element. Note the appearance of the wildcard, * , as the last element of the list. The wildcard ensures that the Java source files from the current bundle are scanned to discover what additional packages need to be imported. Example 5.1. Configuration of Mandatory Import Packages 5.3.5. Add a code generation plug-in A Web services project typically requires code to be generated. Apache CXF provides two Maven plug-ins for the JAX-WS front-end, which enable you to integrate the code generation step into your build. The choice of plug-in depends on whether you develop your service using the Java-first approach or the WSDL-first approach, as follows: Java-first approach: use the cxf-java2ws-plugin plug-in. WSDL-first approach: use the cxf-codegen-plugin plug-in. 5.3.6. OSGi configuration properties The OSGi Configuration Admin service defines a mechanism for passing configuration settings to an OSGi bundle. You do not have to use this service for configuration, but it is typically the most convenient way of configuring bundle applications. Blueprint provides support for OSGi configuration, enabling you to substitute variables in a Blueprint file using values obtained from the OSGi Configuration Admin service. For details of how to use OSGi configuration properties, see Section 5.3.7, "Configuring the Bundle Plug-In" and Section 9.6, "Add OSGi configurations to the feature" . 5.3.7. Configuring the Bundle Plug-In Overview A bundle plug-in requires very little information to function. All of the required properties use default settings to generate a valid OSGi bundle. While you can create a valid bundle using just the default values, you will probably want to modify some of the values. You can specify most of the properties inside the plug-in's instructions element.
Configuration properties Some of the commonly used configuration properties are: Bundle-SymbolicName Bundle-Name Bundle-Version Export-Package Private-Package Import-Package Setting a bundle's symbolic name By default, the bundle plug-in sets the value for the Bundle-SymbolicName property to groupId + "." + artifactId , with the following exceptions: If groupId has only one section (no dots), the first package name with classes is returned. For example, if the group Id is commons-logging:commons-logging , the bundle's symbolic name is org.apache.commons.logging . If artifactId is equal to the last section of groupId , then groupId is used. For example, if the POM specifies the group ID and artifact ID as org.apache.maven:maven , the bundle's symbolic name is org.apache.maven . If artifactId starts with the last section of groupId , that portion is removed. For example, if the POM specifies the group ID and artifact ID as org.apache.maven:maven-core , the bundle's symbolic name is org.apache.maven.core . To specify your own value for the bundle's symbolic name, add a Bundle-SymbolicName child in the plug-in's instructions element, as shown in Example 5.2, "Setting a bundle's symbolic name" . Example 5.2. Setting a bundle's symbolic name Setting a bundle's name By default, a bundle's name is set to USD{project.name} . To specify your own value for the bundle's name, add a Bundle-Name child to the plug-in's instructions element, as shown in Example 5.3, "Setting a bundle's name" . Example 5.3. Setting a bundle's name Setting a bundle's version By default, a bundle's version is set to USD{project.version} . Any dashes ( - ) are replaced with dots ( . ) and the number is padded up to four digits. For example, 4.2-SNAPSHOT becomes 4.2.0.SNAPSHOT . To specify your own value for the bundle's version, add a Bundle-Version child to the plug-in's instructions element, as shown in Example 5.4, "Setting a bundle's version" . Example 5.4. Setting a bundle's version Specifying exported packages By default, the OSGi manifest's Export-Package list is populated by all of the packages in your local Java source code (under src/main/java ), except for the default package, . , and any packages containing .impl or .internal . Important If you use a Private-Package element in your plug-in configuration and you do not specify a list of packages to export, the default behavior includes only the packages listed in the Private-Package element in the bundle. No packages are exported. The default behavior can result in very large packages and in exporting packages that should be kept private. To change the list of exported packages you can add an Export-Package child to the plug-in's instructions element. The Export-Package element specifies a list of packages that are to be included in the bundle and that are to be exported. The package names can be specified using the * wildcard symbol. For example, the entry com.fuse.demo.* includes all packages on the project's classpath that start with com.fuse.demo . You can specify packages to be excluded be prefixing the entry with ! . For example, the entry !com.fuse.demo.private excludes the package com.fuse.demo.private . When excluding packages, the order of entries in the list is important. The list is processed in order from the beginning and any subsequent contradicting entries are ignored. 
For example, to include all packages starting with com.fuse.demo except the package com.fuse.demo.private , list the packages using: However, if you list the packages using com.fuse.demo.*,!com.fuse.demo.private , then com.fuse.demo.private is included in the bundle because it matches the first pattern. Specifying private packages If you want to specify a list of packages to include in a bundle without exporting them, you can add a Private-Package instruction to the bundle plug-in configuration. By default, if you do not specify a Private-Package instruction, all packages in your local Java source are included in the bundle. Important If a package matches an entry in both the Private-Package element and the Export-Package element, the Export-Package element takes precedence. The package is added to the bundle and exported. The Private-Package element works similarly to the Export-Package element in that you specify a list of packages to be included in the bundle. The bundle plug-in uses the list to find all classes on the project's classpath that are to be included in the bundle. These packages are packaged in the bundle, but not exported (unless they are also selected by the Export-Package instruction). Example 5.5, "Including a private package in a bundle" shows the configuration for including a private package in a bundle Example 5.5. Including a private package in a bundle Specifying imported packages By default, the bundle plug-in populates the OSGi manifest's Import-Package property with a list of all the packages referred to by the contents of the bundle. While the default behavior is typically sufficient for most projects, you might find instances where you want to import packages that are not automatically added to the list. The default behavior can also result in unwanted packages being imported. To specify a list of packages to be imported by the bundle, add an Import-Package child to the plug-in's instructions element. The syntax for the package list is the same as for the Export-Package element and the Private-Package element. Important When you use the Import-Package element, the plug-in does not automatically scan the bundle's contents to determine if there are any required imports. To ensure that the contents of the bundle are scanned, you must place an * as the last entry in the package list. Example 5.6, "Specifying the packages imported by a bundle" shows the configuration for specifying the packages imported by a bundle Example 5.6. Specifying the packages imported by a bundle More information For more information on configuring a bundle plug-in, see: olink:OsgiDependencies/OsgiDependencies Apache Felix documentation Peter Kriens' aQute Software Consultancy web site 5.3.8. OSGI configAdmin file naming convention PID strings (symbolic-name syntax) allow hyphens in the OSGI specification. However, hyphens are interpreted by Apache Felix.fileinstall and config:edit shell commands to differentiate a "managed service" and "managed service factory". Therefore, it is recommended to not use hyphens elsewhere in a PID string. Note The Configuration file names are related to the PID and factory PID.
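To make the naming convention in Section 5.3.8 concrete, the following sketch shows how configuration file names in the container's etc/ directory are typically interpreted; the PID org.example.datasource and the instance names are hypothetical placeholders:

# Managed service: a single configuration whose PID is org.example.datasource
etc/org.example.datasource.cfg
# Managed service factory: the hyphen separates the factory PID from an instance
# name, so each of the following files creates a separate configuration instance
etc/org.example.datasource-customers.cfg
etc/org.example.datasource-orders.cfg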
[ "mvn archetype:generate -DarchetypeGroupId=org.apache.camel.archetypes -DarchetypeArtifactId=camel-archetype-blueprint -DarchetypeVersion=2.23.2.fuse-7_13_0-00013-redhat-00001", "mvn install", "<project ... > <packaging> bundle </packaging> </project>", "<project ... > <build> <defaultGoal>install</defaultGoal> <plugins> <plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <version>3.3.0</version> <extensions>true</extensions> <configuration> <instructions> <Bundle-SymbolicName>USD{project.groupId}.USD{project.artifactId} </Bundle-SymbolicName> <Import-Package>*</Import-Package> </instructions> </configuration> </plugin> </plugins> </build> </project>", "<project ... > <build> <defaultGoal>install</defaultGoal> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.8</source> <target>1.8</target> </configuration> </plugin> </plugins> </build> </project>", "javax.jws javax.wsdl javax.xml.bind javax.xml.bind.annotation javax.xml.namespace javax.xml.ws org.apache.cxf.bus org.apache.cxf.bus.spring org.apache.cxf.bus.resource org.apache.cxf.configuration.spring org.apache.cxf.resource org.apache.cxf.jaxws org.springframework.beans.factory.config", "<project ... > <build> <plugins> <plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <extensions>true</extensions> <configuration> <instructions> <Import-Package> javax.jws, javax.wsdl, javax.xml.bind, javax.xml.bind.annotation, javax.xml.namespace, javax.xml.ws, org.apache.cxf.bus, org.apache.cxf.bus.spring, org.apache.cxf.bus.resource, org.apache.cxf.configuration.spring, org.apache.cxf.resource, org.apache.cxf.jaxws, org.springframework.beans.factory.config, * </Import-Package> </instructions> </configuration> </plugin> </plugins> </build> </project>", "<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Bundle-SymbolicName>USD{project.artifactId}</Bundle-SymbolicName> </instructions> </configuration> </plugin>", "<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Bundle-Name>JoeFred</Bundle-Name> </instructions> </configuration> </plugin>", "<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Bundle-Version>1.0.3.1</Bundle-Version> </instructions> </configuration> </plugin>", "!com.fuse.demo.private,com.fuse.demo.*", "<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Private-Package>org.apache.cxf.wsdlFirst.impl</Private-Package> </instructions> </configuration> </plugin>", "<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Import-Package>javax.jws, javax.wsdl, org.apache.cxf.bus, org.apache.cxf.bus.spring, org.apache.cxf.bus.resource, org.apache.cxf.configuration.spring, org.apache.cxf.resource, org.springframework.beans.factory.config, * </Import-Package> </instructions> </configuration> </plugin>" ]
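After running mvn install, it can be useful to confirm that the generated manifest contains the Import-Package and Export-Package headers you expect from the plug-in instructions above. This is an optional check, not part of the original procedure, and the JAR name below is a placeholder:

# Print the OSGi headers of the built bundle
unzip -p target/my-camel-bundle-1.0.0-SNAPSHOT.jar META-INF/MANIFEST.MF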
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/BuildBundle
Chapter 10. Managing Geo-replication
Chapter 10. Managing Geo-replication This section introduces geo-replication, illustrates the various deployment scenarios, and explains how to configure geo-replication and mirroring. 10.1. About Geo-replication Geo-replication provides a distributed, continuous, asynchronous, and incremental replication service from one site to another over Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet. Geo-replication uses a master-slave model, where replication and mirroring occurs between the following partners: Master - the primary Red Hat Gluster Storage volume. Slave - a secondary Red Hat Gluster Storage volume. A slave volume can be a volume on a remote host, such as remote-host::volname .
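As a rough sketch of how such a master-slave session is typically created and started, the commands below assume that the slave volume already exists, that passwordless SSH from a master node to the slave host is configured, and that mastervol, slave.example.com, and slavevol are placeholders; consult the detailed procedures later in this chapter before running them:

# Generate and distribute the common pem public key file
gluster system:: execute gsec_create
# Create, start, and check the geo-replication session
gluster volume geo-replication mastervol slave.example.com::slavevol create push-pem
gluster volume geo-replication mastervol slave.example.com::slavevol start
gluster volume geo-replication mastervol slave.example.com::slavevol status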
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-managing_geo-replication
3.3. Installing guest agents and drivers
3.3. Installing guest agents and drivers 3.3.1. Red Hat Virtualization Guest agents, tools, and drivers The Red Hat Virtualization guest agents, tools, and drivers provide additional functionality for virtual machines, such as gracefully shutting down or rebooting virtual machines from the VM Portal and Administration Portal. The tools and agents also provide information for virtual machines, including: Resource usage IP addresses The guest agents, tools and drivers are distributed as an ISO file that you can attach to virtual machines. This ISO file is packaged as an RPM file that you can install and upgrade from the Manager machine. You need to install the guest agents and drivers on a virtual machine to enable this functionality for that machine. Table 3.1. Red Hat Virtualization Guest drivers Driver Description Works on virtio-net Paravirtualized network driver provides enhanced performance over emulated devices like rtl. Server and Desktop. virtio-block Paravirtualized HDD driver offers increased I/O performance over emulated devices like IDE by optimizing the coordination and communication between the virtual machine and the hypervisor. The driver complements the software implementation of the virtio-device used by the host to play the role of a hardware device. Server and Desktop. virtio-scsi Paravirtualized iSCSI HDD driver offers similar functionality to the virtio-block device, with some additional enhancements. In particular, this driver supports adding hundreds of devices, and names devices using the standard SCSI device naming scheme. Server and Desktop. virtio-serial Virtio-serial provides support for multiple serial ports. The improved performance is used for fast communication between the virtual machine and the host that avoids network complications. This fast communication is required for the guest agents and for other features such as clipboard copy-paste between the virtual machine and the host and logging. Server and Desktop. virtio-balloon Virtio-balloon is used to control the amount of memory a virtual machine actually accesses. It offers improved memory overcommitment. Server and Desktop. qxl A paravirtualized display driver reduces CPU usage on the host and provides better performance through reduced network bandwidth on most workloads. Server and Desktop. Table 3.2. Red Hat Virtualization Guest agents and tools Guest agent/tool Description Works on qemu-guest-agent Used instead of ovirt-guest-agent-common on Red Hat Enterprise Linux 8 virtual machines. It is installed and enabled by default. Server and Desktop. spice-agent The SPICE agent supports multiple monitors and is responsible for client-mouse-mode support to provide a better user experience and improved responsiveness than the QEMU emulation. Cursor capture is not needed in client-mouse-mode. The SPICE agent reduces bandwidth usage when used over a wide area network by reducing the display level, including color depth, disabling wallpaper, font smoothing, and animation. The SPICE agent enables clipboard support allowing cut and paste operations for both text and images between client and virtual machine, and automatic guest display setting according to client-side settings. On Windows-based virtual machines, the SPICE agent consists of vdservice and vdagent. Server and Desktop. 3.3.2. 
Installing the guest agents, tools, and drivers on Windows Procedure To install the guest agents, tools, and drivers on a Windows virtual machine, complete the following steps: On the Manager machine, install the virtio-win package: # dnf install virtio-win* After you install the package, the ISO file is located in /usr/share/virtio-win/virtio-win _version .iso on the Manager machine. Upload virtio-win _version .iso to a data storage domain. See Uploading Images to a Data Storage Domain in the Administration Guide for details. In the Administration or VM Portal, if the virtual machine is running, use the Change CD button to attach the virtio-win _version .iso file to each of your virtual machines. If the virtual machine is powered off, click the Run Once button and attach the ISO as a CD. Log in to the virtual machine. Select the CD Drive containing the virtio-win _version .iso file. You can complete the installation with either the GUI or the command line. Run the installer. To install with the GUI, complete the following steps Double-click virtio-win-guest-tools.exe . Click at the welcome screen. Follow the prompts in the installation wizard. When installation is complete, select Yes, I want to restart my computer now and click Finish to apply the changes. To install silently with the command line, complete the following steps Open a command prompt with Administrator privileges. Enter the msiexec command: D:\ msiexec /i " PATH_TO_MSI " /qn [/l*v " PATH_TO_LOG "][/norestart] ADDLOCAL=ALL Other possible values for ADDLOCAL are listed below. For example, to run the installation when virtio-win-gt-x64.msi is on the D:\ drive, without saving the log, and then immediately restart the virtual machine, enter the following command: D:\ msiexec /i "virtio-win-gt-x64.msi" /qn ADDLOCAL=ALL After installation completes, the guest agents and drivers pass usage information to the Red Hat Virtualization Manager and enable you to access USB devices and other functionality. 3.3.3. Values for ADDLOCAL to customize virtio-win command-line installation When installing virtio-win-gt-x64.msi or virtio-win-gt-x32.msi with the command line, you can install any one driver, or any combination of drivers. You can also install specific agents, but you must also install each agent's corresponding drivers. The ADDLOCAL parameter of the msiexec command enables you to specify which drivers or agents to install. ADDLOCAL=ALL installs all drivers and agents. Other values are listed in the following tables. Table 3.3. Possible values for ADDLOCAL to install drivers Value for ADDLOCAL Driver Name Description FE_network_driver virtio-net Paravirtualized network driver provides enhanced performance over emulated devices like rtl. FE_balloon_driver virtio-balloon Controls the amount of memory a virtual machine actually accesses. It offers improved memory overcommitment. FE_pvpanic_driver pvpanic QEMU pvpanic device driver. FE_qemufwcfg_driver qemufwcfg QEMU FWCfg device driver. FE_qemupciserial_driver qemupciserial QEMU PCI serial device driver. FE_spice_driver Spice Driver A paravirtualized display driver reduces CPU usage on the host and provides better performance through reduced network bandwidth on most workloads. FE_vioinput_driver vioinput VirtIO Input Driver. FE_viorng_driver viorng VirtIO RNG device driver. FE_vioscsi_driver vioscsi VirtIO SCSI pass-through controller. FE_vioserial_driver vioserial VirtIO Serial device driver. FE_viostor_driver viostor VirtIO Block driver. Table 3.4. 
Possible values for ADDLOCAL to install agents and required corresponding drivers Agent Description Corresponding driver(s) Value for ADDLOCAL Spice Agent Supports multiple monitors, responsible for client-mouse-mode support, reduces bandwidth usage, enables clipboard support between client and virtual machine, provides a better user experience and improved responsiveness. vioserial and Spice driver FE_spice_Agent,FE_vioserial_driver,FE_spice_driver Examples The following command installs only the VirtIO SCSI pass-through controller, the VirtIO Serial device driver, and the VirtIO Block driver: D:\ msiexec /i "virtio-win-gt-x64.msi" /qn ADDLOCAL=FE_vioscsi_driver,FE_vioserial_driver,FE_viostor_driver The following command installs only the Spice Agent and its required corresponding drivers: D:\ msiexec /i "virtio-win-gt-x64.msi" /qn ADDLOCAL=FE_spice_Agent,FE_vioserial_driver,FE_spice_driver Additional resources Updating Win Guest Drivers with Windows Updates Updating the Guest Agents and Drivers on Windows The Microsoft Developer website: Windows Installer Command-Line Options for the Windows installer Property Reference for the Windows installer
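Combining the documented /l*v and /norestart options with a driver selection from Table 3.3, a silent installation that saves a verbose log and defers the reboot might look like the following sketch; the log path and the particular set of drivers are illustrative placeholders:

D:\ msiexec /i "virtio-win-gt-x64.msi" /qn /l*v "C:\Temp\virtio-win-install.log" /norestart ADDLOCAL=FE_network_driver,FE_balloon_driver,FE_vioserial_driver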
[ "dnf install virtio-win*", "D:\\ msiexec /i \" PATH_TO_MSI \" /qn [/l*v \" PATH_TO_LOG \"][/norestart] ADDLOCAL=ALL", "D:\\ msiexec /i \"virtio-win-gt-x64.msi\" /qn ADDLOCAL=ALL", "D:\\ msiexec /i \"virtio-win-gt-x64.msi\" /qn ADDLOCAL=FE_vioscsi_driver,FE_vioserial_driver,FE_viostor_driver", "D:\\ msiexec /i \"virtio-win-gt-x64.msi\" /qn ADDLOCAL=FE_spice_Agent,FE_vioserial_driver,FE_spice_driver" ]
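As an optional, hypothetical check that is not part of the original procedure, you can list the Windows services after installation to confirm that the guest agent services (for example, the QEMU guest agent and the SPICE vdservice) were registered; exact service names can vary between virtio-win releases:

C:\ sc query | findstr /i "QEMU vdservice"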
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/Installing_Guest_Agents_and_Drivers_Windows
Chapter 27. Configuring ingress cluster traffic
Chapter 27. Configuring ingress cluster traffic 27.1. Configuring ingress cluster traffic overview OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster. The methods are recommended, in order of preference: If you have HTTP/HTTPS, use an Ingress Controller. If you have a TLS-encrypted protocol other than HTTPS (for example, TLS with the SNI header), use an Ingress Controller. Otherwise, use a Load Balancer, an External IP, or a NodePort . Method Purpose Use an Ingress Controller Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS (for example, TLS with the SNI header). Automatically assign an external IP using a load balancer service Allows traffic to non-standard ports through an IP address assigned from a pool. Most cloud platforms offer a method to start a service with a load-balancer IP address. About MetalLB and the MetalLB Operator Allows traffic to a specific IP address or an address from a pool on the machine network. For bare-metal installations or platforms that are like bare metal, MetalLB provides a way to start a service with a load-balancer IP address. Manually assign an external IP to a service Allows traffic to non-standard ports through a specific IP address. Configure a NodePort Expose a service on all nodes in the cluster. 27.1.1. Comparison: Fault tolerant access to external IP addresses For the communication methods that provide access to an external IP address, fault tolerant access to the IP address is another consideration. The following features provide fault tolerant access to an external IP address. IP failover IP failover manages a pool of virtual IP addresses for a set of nodes. It is implemented with Keepalived and Virtual Router Redundancy Protocol (VRRP). IP failover is a layer 2 mechanism only and relies on multicast. Multicast can have disadvantages for some networks. MetalLB MetalLB has a layer 2 mode, but it does not use multicast. Layer 2 mode has a disadvantage that it transfers all traffic for an external IP address through one node. Manually assigning external IP addresses You can configure your cluster with an IP address block that is used to assign external IP addresses to services. By default, this feature is disabled. This feature is flexible, but places the largest burden on the cluster or network administrator. The cluster is prepared to receive traffic that is destined for the external IP, but each customer has to decide how they want to route traffic to nodes. 27.2. Configuring ExternalIPs for services As a cluster administrator, you can designate an IP address block that is external to the cluster that can send traffic to services in the cluster. This functionality is generally most useful for clusters installed on bare-metal hardware. 27.2.1. Prerequisites Your network infrastructure must route traffic for the external IP addresses to your cluster. 27.2.2. About ExternalIP For non-cloud environments, OpenShift Container Platform supports the use of the ExternalIP facility to specify external IP addresses in the spec.externalIPs[] parameter of the Service object. A service configured with an ExternalIP functions similarly to a service with type=NodePort , whereby traffic is directed to a local node for load balancing. Important For cloud environments, use the load balancer services for automatic deployment of a cloud load balancer to target the endpoints of a service.
After you specify a value for the parameter, OpenShift Container Platform assigns an additional virtual IP address to the service. The IP address can exist outside of the service network that you defined for your cluster. Warning Because ExternalIP is disabled by default, enabling the ExternalIP functionality might introduce security risks for the service, because in-cluster traffic to an external IP address is directed to that service. This configuration means that cluster users could intercept sensitive traffic destined for external resources. You can use either a MetalLB implementation or an IP failover deployment to attach an ExternalIP resource to a service in the following ways: Automatic assignment of an external IP OpenShift Container Platform automatically assigns an IP address from the autoAssignCIDRs CIDR block to the spec.externalIPs[] array when you create a Service object with spec.type=LoadBalancer set. For this configuration, OpenShift Container Platform implements a cloud version of the load balancer service type and assigns IP addresses to the services. Automatic assignment is disabled by default and must be configured by a cluster administrator as described in the "Configuration for ExternalIP" section. Manual assignment of an external IP OpenShift Container Platform uses the IP addresses assigned to the spec.externalIPs[] array when you create a Service object. You cannot specify an IP address that is already in use by another service. After using either the MetalLB implementation or an IP failover deployment to host external IP address blocks, you must configure your networking infrastructure to ensure that the external IP address blocks are routed to your cluster. This configuration means that the IP address is not configured in the network interfaces from nodes. To handle the traffic, you must configure the routing and access to the external IP by using a method, such as static Address Resolution Protocol (ARP) entries. OpenShift Container Platform extends the ExternalIP functionality in Kubernetes by adding the following capabilities: Restrictions on the use of external IP addresses by users through a configurable policy Allocation of an external IP address automatically to a service upon request 27.2.3. Additional resources Configuring IP failover About MetalLB and the MetalLB Operator 27.2.4. Configuration for ExternalIP Use of an external IP address in OpenShift Container Platform is governed by the following parameters in the Network.config.openshift.io custom resource (CR) that is named cluster : spec.externalIP.autoAssignCIDRs defines an IP address block used by the load balancer when choosing an external IP address for the service. OpenShift Container Platform supports only a single IP address block for automatic assignment. This configuration requires less steps than manually assigning ExternalIPs to services, which requires managing the port space of a limited number of shared IP addresses. If you enable automatic assignment, a Service object with spec.type=LoadBalancer is allocated an external IP address. spec.externalIP.policy defines the permissible IP address blocks when manually specifying an IP address. OpenShift Container Platform does not apply policy rules to IP address blocks that you defined in the spec.externalIP.autoAssignCIDRs parameter. If routed correctly, external traffic from the configured external IP address block can reach service endpoints through any TCP or UDP port that the service exposes. 
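If you prefer to enable automatic assignment with a single patch instead of editing the resource interactively, a sketch of the command is shown below; the CIDR value 192.0.2.0/29 is a documentation placeholder and must be replaced with an address block that your network actually routes to the cluster:

oc patch networks.config cluster --type=merge \
  --patch '{"spec":{"externalIP":{"autoAssignCIDRs":["192.0.2.0/29"]}}}'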
Important As a cluster administrator, you must configure routing to externalIPs. You must also ensure that the IP address block you assign terminates at one or more nodes in your cluster. For more information, see Kubernetes External IPs . OpenShift Container Platform supports both the automatic and manual assignment of IP addresses, where each address is guaranteed to be assigned to a maximum of one service. This configuration ensures that each service can expose its chosen ports regardless of the ports exposed by other services. Note To use IP address blocks defined by autoAssignCIDRs in OpenShift Container Platform, you must configure the necessary IP address assignment and routing for your host network. The following YAML describes a service with an external IP address configured: Example Service object with spec.externalIPs[] set apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253 # ... 27.2.5. Restrictions on the assignment of an external IP address As a cluster administrator, you can specify IP address blocks to allow and to reject IP addresses for a service. Restrictions apply only to users without cluster-admin privileges. A cluster administrator can always set the service spec.externalIPs[] field to any IP address. You configure an IP address policy by specifying Classless Inter-Domain Routing (CIDR) address blocks for the spec.ExternalIP.policy parameter in the policy object. Example in JSON form of a policy object and its CIDR parameters { "policy": { "allowedCIDRs": [], "rejectedCIDRs": [] } } When configuring policy restrictions, the following rules apply: If policy is set to {} , creating a Service object with spec.ExternalIPs[] results in a failed service. This setting is the default for OpenShift Container Platform. The same behavior exists for policy: null . If policy is set and either policy.allowedCIDRs[] or policy.rejectedCIDRs[] is set, the following rules apply: If allowedCIDRs[] and rejectedCIDRs[] are both set, rejectedCIDRs[] has precedence over allowedCIDRs[] . If allowedCIDRs[] is set, creating a Service object with spec.ExternalIPs[] succeeds only if the specified IP addresses are allowed. If rejectedCIDRs[] is set, creating a Service object with spec.ExternalIPs[] succeeds only if the specified IP addresses are not rejected. 27.2.6. Example policy objects The examples in this section show different spec.externalIP.policy configurations. In the following example, the policy prevents OpenShift Container Platform from creating any service with a specified external IP address. Example policy to reject any value specified for Service object spec.externalIPs[] apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {} # ... In the following example, both the allowedCIDRs and rejectedCIDRs fields are set. Example policy that includes both allowed and rejected CIDR blocks apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24 # ... In the following example, policy is set to {} . 
With this configuration, using the oc get networks.config.openshift.io -o yaml command to view the configuration means policy parameter does not show on the command output. The same behavior exists for policy: null . Example policy to allow any value specified for Service object spec.externalIPs[] apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 externalIP: policy: {} # ... 27.2.7. ExternalIP address block configuration The configuration for ExternalIP address blocks is defined by a Network custom resource (CR) named cluster . The Network CR is part of the config.openshift.io API group. Important During cluster installation, the Cluster Version Operator (CVO) automatically creates a Network CR named cluster . Creating any other CR objects of this type is not supported. The following YAML describes the ExternalIP configuration: Network.config.openshift.io CR named cluster apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2 ... 1 Defines the IP address block in CIDR format that is available for automatic assignment of external IP addresses to a service. Only a single IP address range is allowed. 2 Defines restrictions on manual assignment of an IP address to a service. If no restrictions are defined, specifying the spec.externalIP field in a Service object is not allowed. By default, no restrictions are defined. The following YAML describes the fields for the policy stanza: Network.config.openshift.io policy stanza policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2 1 A list of allowed IP address ranges in CIDR format. 2 A list of rejected IP address ranges in CIDR format. Example external IP configurations Several possible configurations for external IP address pools are displayed in the following examples: The following YAML describes a configuration that enables automatically assigned external IP addresses: Example configuration with spec.externalIP.autoAssignCIDRs set apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: autoAssignCIDRs: - 192.168.132.254/29 The following YAML configures policy rules for the allowed and rejected CIDR ranges: Example configuration with spec.externalIP.policy set apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32 27.2.8. Configure external IP address blocks for your cluster As a cluster administrator, you can configure the following ExternalIP settings: An ExternalIP address block used by OpenShift Container Platform to automatically populate the spec.clusterIP field for a Service object. A policy object to restrict what IP addresses may be manually assigned to the spec.clusterIP array of a Service object. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Optional: To display the current external IP configuration, enter the following command: USD oc describe networks.config cluster To edit the configuration, enter the following command: USD oc edit networks.config cluster Modify the ExternalIP configuration, as in the following example: apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: 1 ... 1 Specify the configuration for the externalIP stanza. 
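A filled-in externalIP stanza for this step, combining automatic assignment with a manual-assignment policy, might look like the following sketch; all of the CIDR values (192.0.2.0/29, 198.51.100.0/24, 198.51.100.32/27) are documentation placeholders and must be replaced with ranges routed to your cluster:

apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  externalIP:
    autoAssignCIDRs:
    - 192.0.2.0/29
    policy:
      allowedCIDRs:
      - 198.51.100.0/24
      rejectedCIDRs:
      - 198.51.100.32/27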
To confirm the updated ExternalIP configuration, enter the following command: USD oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{"\n"}}' 27.2.9. steps Configuring ingress cluster traffic for a service external IP 27.3. Configuring ingress cluster traffic using an Ingress Controller OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses an Ingress Controller. 27.3.1. Using Ingress Controllers and routes The Ingress Operator manages Ingress Controllers and wildcard DNS. Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster. An Ingress Controller is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is sufficient for web applications and services that work over TLS with SNI. Work with your administrator to configure an Ingress Controller to accept external requests and proxy them based on the configured routes. The administrator can create a wildcard DNS entry and then set up an Ingress Controller. Then, you can work with the edge Ingress Controller without having to contact the administrators. By default, every Ingress Controller in the cluster can admit any route created in any project in the cluster. The Ingress Controller: Has two replicas by default, which means it should be running on two worker nodes. Can be scaled up to have more replicas on more nodes. Note The procedures in this section require prerequisites performed by the cluster administrator. 27.3.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: You have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 27.3.3. Creating a project and service If the project and service that you want to expose does not exist, create the project and then create the service. If the project and service already exists, skip to the procedure on exposing the service to create a route. Prerequisites Install the OpenShift CLI ( oc ) and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project <project_name> Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n <project_name> Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s Note By default, the new service does not have an external IP address. 27.3.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Prerequisites You logged into OpenShift Container Platform. 
Procedure Log in to the project where the service you want to expose is located: USD oc project <project_name> Run the oc expose service command to expose the route: USD oc expose service nodejs-ex Example output route.route.openshift.io/nodejs-ex exposed To verify that the service is exposed, you can use a tool, such as curl to check that the service is accessible from outside the cluster. To find the hostname of the route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None To check that the host responds to a GET request, enter the following command: Example curl command USD curl --head nodejs-ex-myproject.example.com Example output HTTP/1.1 200 OK ... 27.3.5. Ingress sharding in OpenShift Container Platform In OpenShift Container Platform, an Ingress Controller can serve all routes, or it can serve a subset of routes. By default, the Ingress Controller serves any route created in any namespace in the cluster. You can add additional Ingress Controllers to your cluster to optimize routing by creating shards , which are subsets of routes based on selected characteristics. To mark a route as a member of a shard, use labels in the route or namespace metadata field. The Ingress Controller uses selectors , also known as a selection expression , to select a subset of routes from the entire pool of routes to serve. Ingress sharding is useful in cases where you want to load balance incoming traffic across multiple Ingress Controllers, when you want to isolate traffic to be routed to a specific Ingress Controller, or for a variety of other reasons described in the section. By default, each route uses the default domain of the cluster. However, routes can be configured to use the domain of the router instead. 27.3.6. Ingress Controller sharding You can use Ingress sharding, also known as router sharding, to distribute a set of routes across multiple routers by adding labels to routes, namespaces, or both. The Ingress Controller uses a corresponding set of selectors to admit only the routes that have a specified label. Each Ingress shard comprises the routes that are filtered by using a given selection expression. As the primary mechanism for traffic to enter the cluster, the demands on the Ingress Controller can be significant. As a cluster administrator, you can shard the routes to: Balance Ingress Controllers, or routers, with several routes to accelerate responses to changes. Assign certain routes to have different reliability guarantees than other routes. Allow certain Ingress Controllers to have different policies defined. Allow only specific routes to use additional features. Expose different routes on different addresses so that internal and external users can see different routes, for example. Transfer traffic from one version of an application to another during a blue-green deployment. When Ingress Controllers are sharded, a given route is admitted to zero or more Ingress Controllers in the group. The status of a route describes whether an Ingress Controller has admitted the route. An Ingress Controller only admits a route if the route is unique to a shard. With sharding, you can distribute subsets of routes over multiple Ingress Controllers. These subsets can be nonoverlapping, also called traditional sharding, or overlapping, otherwise known as overlapped sharding. 
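Because shard membership is driven entirely by labels, a typical preparatory step is to label the namespaces or routes that you want a shard to admit. The following sketch uses the name=finance and type=sharded labels that appear in the selector examples later in this section; the namespace and route names are placeholders:

# Label a namespace so that a shard using a namespace selector admits its routes
oc label namespace finance name=finance
# Label an individual route so that a shard using a route selector admits it
oc label route myapp -n finance type=sharded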
The following table outlines three sharding methods: Sharding method Description Namespace selector After you add a namespace selector to the Ingress Controller, all routes in a namespace that have matching labels for the namespace selector are included in the Ingress shard. Consider this method when an Ingress Controller serves all routes created in a namespace. Route selector After you add a route selector to the Ingress Controller, all routes with labels that match the route selector are included in the Ingress shard. Consider this method when you want an Ingress Controller to serve only a subset of routes or a specific route in a namespace. Namespace and route selectors Provides your Ingress Controller scope for both namespace selector and route selector methods. Consider this method when you want the flexibility of both the namespace selector and the route selector methods. 27.3.6.1. Traditional sharding example An example of a configured Ingress Controller finops-router that has the label selector spec.namespaceSelector.matchExpressions with key values set to finance and ops : Example YAML definition for finops-router apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: finops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - finance - ops An example of a configured Ingress Controller dev-router that has the label selector spec.namespaceSelector.matchLabels.name with the key value set to dev : Example YAML definition for dev-router apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: dev-router namespace: openshift-ingress-operator spec: namespaceSelector: matchLabels: name: dev If all application routes are in separate namespaces, such as each labeled with name:finance , name:ops , and name:dev , the configuration effectively distributes your routes between the two Ingress Controllers. OpenShift Container Platform routes for console, authentication, and other purposes should not be handled. In the scenario, sharding becomes a special case of partitioning, with no overlapping subsets. Routes are divided between router shards. Warning The default Ingress Controller continues to serve all routes unless the namespaceSelector or routeSelector fields contain routes that are meant for exclusion. See this Red Hat Knowledgebase solution and the section "Sharding the default Ingress Controller" for more information on how to exclude routes from the default Ingress Controller. 27.3.6.2. Overlapped sharding example An example of a configured Ingress Controller devops-router that has the label selector spec.namespaceSelector.matchExpressions with key values set to dev and ops : Example YAML definition for devops-router apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: devops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - dev - ops The routes in the namespaces labeled name:dev and name:ops are now serviced by two different Ingress Controllers. With this configuration, you have overlapping subsets of routes. With overlapping subsets of routes you can create more complex routing rules. For example, you can divert higher priority traffic to the dedicated finops-router while sending lower priority traffic to devops-router . 27.3.6.3. 
Sharding the default Ingress Controller After creating a new Ingress shard, there might be routes that are admitted to your new Ingress shard that are also admitted by the default Ingress Controller. This is because the default Ingress Controller has no selectors and admits all routes by default. You can restrict an Ingress Controller from servicing routes with specific labels using either namespace selectors or route selectors. The following procedure restricts the default Ingress Controller from servicing your newly sharded finance , ops , and dev , routes using a namespace selector. This adds further isolation to Ingress shards. Important You must keep all of OpenShift Container Platform's administration routes on the same Ingress Controller. Therefore, avoid adding additional selectors to the default Ingress Controller that exclude these essential routes. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as a project administrator. Procedure Modify the default Ingress Controller by running the following command: USD oc edit ingresscontroller -n openshift-ingress-operator default Edit the Ingress Controller to contain a namespaceSelector that excludes the routes with any of the finance , ops , and dev labels: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: NotIn values: - finance - ops - dev The default Ingress Controller will no longer serve the namespaces labeled name:finance , name:ops , and name:dev . 27.3.6.4. Ingress sharding and DNS The cluster administrator is responsible for making a separate DNS entry for each router in a project. A router will not forward unknown routes to another router. Consider the following example: Router A lives on host 192.168.0.5 and has routes with *.foo.com . Router B lives on host 192.168.1.9 and has routes with *.example.com . Separate DNS entries must resolve *.foo.com to the node hosting Router A and *.example.com to the node hosting Router B: *.foo.com A IN 192.168.0.5 *.example.com A IN 192.168.1.9 27.3.6.5. Configuring Ingress Controller sharding by using route labels Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector. Figure 27.1. Ingress sharding using route labels Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Procedure Edit the router-internal.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" routeSelector: matchLabels: type: sharded 1 Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain. Apply the Ingress Controller router-internal.yaml file: # oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that have the label type: sharded . Create a new route using the domain configured in the router-internal.yaml : USD oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net 27.3.6.6. 
Configuring Ingress Controller sharding by using namespace labels Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector. Figure 27.2. Ingress sharding using namespace labels Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Procedure Edit the router-internal.yaml file: USD cat router-internal.yaml Example output apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" namespaceSelector: matchLabels: type: sharded 1 Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain. Apply the Ingress Controller router-internal.yaml file: USD oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that is selected by the namespace selector that have the label type: sharded . Create a new route using the domain configured in the router-internal.yaml : USD oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net 27.3.6.7. Creating a route for Ingress Controller sharding A route allows you to host your application at a URL. In this case, the hostname is not set and the route uses a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. For situations where a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs. The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as a project administrator. You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port. You have configured the Ingress Controller for sharding. Procedure Create a project called hello-openshift by running the following command: USD oc new-project hello-openshift Create a pod in the project by running the following command: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: USD oc expose pod/hello-openshift Create a route definition called hello-openshift-route.yaml : YAML definition of the created route for sharding: apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift 1 Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded . 2 The route will be exposed using the value of the subdomain field. 
When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, then the route will use the value of the host field, and ignore the subdomain field. Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command: USD oc -n hello-openshift create -f hello-openshift-route.yaml Verification Get the status of the route with the following command: USD oc -n hello-openshift get routes/hello-openshift-edge -o yaml The resulting Route resource should look similar to the following: Example output apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3 1 The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net> . 2 The hostname of the Ingress Controller. 3 The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded . Additional resources Baseline Ingress Controller (router) performance Ingress Operator in OpenShift Container Platform . Installing a cluster on bare metal . Installing a cluster on vSphere About network policy 27.4. Configuring the Ingress Controller endpoint publishing strategy The endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems. Important On Red Hat OpenStack Platform (RHOSP), the LoadBalancerService endpoint publishing strategy is supported only if a cloud provider is configured to create health monitors. For RHOSP 16.2, this strategy is possible only if you use the Amphora Octavia provider. For more information, see the "Setting RHOSP Cloud Controller Manager options" section of the RHOSP installation documentation. 27.4.1. Ingress Controller endpoint publishing strategy NodePortService endpoint publishing strategy The NodePortService endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service. In this configuration, the Ingress Controller deployment uses container networking. A NodePortService is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift Container Platform; however, to support static port allocations, your changes to the node port field of the managed NodePortService are preserved. Figure 27.3. Diagram of NodePortService The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress NodePort endpoint publishing strategy: All the available nodes in the cluster have their own, externally accessible IP addresses. The service running in the cluster is bound to the unique NodePort for all the nodes. When the client connects to a node that is down, for example, by connecting the 10.0.128.4 IP address in the graphic, the node port directly connects the client to an available node that is running the service. In this scenario, no load balancing is required. 
As the image shows, the 10.0.128.4 address is down and another IP address must be used instead. Note The Ingress Operator ignores any updates to .spec.ports[].nodePort fields of the service. By default, ports are allocated automatically and you can access the port allocations for integrations. However, sometimes static port allocations are necessary to integrate with existing infrastructure which may not be easily reconfigured in response to dynamic ports. To achieve integrations with static node ports, you can update the managed service resource directly. For more information, see the Kubernetes Services documentation on NodePort . HostNetwork endpoint publishing strategy The HostNetwork endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed. An Ingress Controller with the HostNetwork endpoint publishing strategy can have only one pod replica per node. If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each pod replica requests ports 80 and 443 on the node host where it is scheduled, a replica cannot be scheduled to a node if another pod on the same node is using those ports. The HostNetwork object has a hostNetwork field with the following default values for the optional binding ports: httpPort: 80 , httpsPort: 443 , and statsPort: 1936 . By specifying different binding ports for your network, you can deploy multiple Ingress Controllers on the same node for the HostNetwork strategy. Example apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: internal namespace: openshift-ingress-operator spec: domain: example.com endpointPublishingStrategy: type: HostNetwork hostNetwork: httpPort: 80 httpsPort: 443 statsPort: 1936 27.4.1.1. Configuring the Ingress Controller endpoint publishing scope to Internal When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External . Cluster administrators can change an External scoped Ingress Controller to Internal . Prerequisites You installed the oc CLI. Procedure To change an External scoped Ingress Controller to Internal , enter the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}' To check the status of the Ingress Controller, enter the following command: USD oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command: USD oc -n openshift-ingress delete services/router-default If you delete the service, the Ingress Operator recreates it as Internal . 27.4.1.2. Configuring the Ingress Controller endpoint publishing scope to External When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External . The Ingress Controller's scope can be configured to be Internal during installation or after, and cluster administrators can change an Internal Ingress Controller to External . Important On some platforms, it is necessary to delete and recreate the service. 
Changing the scope can cause disruption to Ingress traffic, potentially for several minutes. This applies to platforms where it is necessary to delete and recreate the service, because the procedure can cause OpenShift Container Platform to deprovision the existing service load balancer, provision a new one, and update DNS. Prerequisites You installed the oc CLI. Procedure To change an Internal scoped Ingress Controller to External , enter the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External"}}}}' To check the status of the Ingress Controller, enter the following command: USD oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command: USD oc -n openshift-ingress delete services/router-default If you delete the service, the Ingress Operator recreates it as External . 27.4.1.3. Adding a single NodePort service to an Ingress Controller Instead of creating a NodePort -type Service for each project, you can create a custom Ingress Controller to use the NodePortService endpoint publishing strategy. To prevent port conflicts, consider this configuration for your Ingress Controller when you want to apply a set of routes, through Ingress sharding, to nodes that might already have a HostNetwork Ingress Controller. Before you set a NodePort -type Service for each project, read the following considerations: You must create a wildcard DNS record for the NodePort Ingress Controller domain. A NodePort Ingress Controller route can be reached from the address of a worker node. For more information about the required DNS records for routes, see "User-provisioned DNS requirements". You must expose a route for your service and specify the --hostname argument for your custom Ingress Controller domain. You must append the port that is assigned to the NodePort -type Service in the route so that you can access application pods. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. You created a wildcard DNS record. Procedure Create a custom resource (CR) file for the Ingress Controller: Example of a CR file that defines information for the IngressController object apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_ic_name> 1 namespace: openshift-ingress-operator spec: replicas: 1 domain: <custom_ic_domain_name> 2 nodePlacement: nodeSelector: matchLabels: <key>: <value> 3 namespaceSelector: matchLabels: <key>: <value> 4 endpointPublishingStrategy: type: NodePortService # ... 1 Specify a custom name for the IngressController CR. 2 The DNS name that the Ingress Controller services. As an example, the default ingresscontroller domain is apps.ipi-cluster.example.com , so you would specify the <custom_ic_domain_name> as nodeportsvc.ipi-cluster.example.com . 3 Specify the label for the nodes that include the custom Ingress Controller. 4 Specify the label for a set of namespaces. Substitute <key>:<value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. For example: ingresscontroller: custom-ic .
Add a label to a node by using the oc label node command: USD oc label node <node_name> <key>=<value> 1 1 Where <key>=<value> must match the key-value pair specified in the nodePlacement section of your IngressController CR. Create the IngressController object: USD oc create -f <ingress_controller_cr>.yaml Find the port for the service created for the IngressController CR: USD oc get svc -n openshift-ingress Example output that shows port 80:32432/TCP for the router-nodeport-custom-ic3 service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-internal-default ClusterIP 172.30.195.74 <none> 80/TCP,443/TCP,1936/TCP 223d router-nodeport-custom-ic3 NodePort 172.30.109.219 <none> 80:32432/TCP,443:31366/TCP,1936:30499/TCP 155m To create a new project, enter the following command: USD oc new-project <project_name> To label the new namespace, enter the following command: USD oc label namespace <project_name> <key>=<value> 1 1 Where <key>=<value> must match the value in the namespaceSelector section of your Ingress Controller CR. Create a new application in your cluster: USD oc new-app --image=<image_name> 1 1 An example of <image_name> is quay.io/openshifttest/hello-openshift:multiarch . Create a Route object for a service, so that the pod can use the service to expose the application external to the cluster. USD oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name> 1 Note You must specify the domain name of your custom Ingress Controller in the --hostname argument. If you do not do this, the Ingress Operator uses the default Ingress Controller to serve all the routes for your cluster. Check that the route has the Admitted status and that it includes metadata for the custom Ingress Controller: USD oc get route/hello-openshift -o json | jq '.status.ingress' Example output # ... { "conditions": [ { "lastTransitionTime": "2024-05-17T18:25:41Z", "status": "True", "type": "Admitted" } ], [ { "host": "hello-openshift.nodeportsvc.ipi-cluster.example.com", "routerCanonicalHostname": "router-nodeportsvc.nodeportsvc.ipi-cluster.example.com", "routerName": "nodeportsvc", "wildcardPolicy": "None" } ], } Update the default IngressController CR to prevent the default Ingress Controller from managing the NodePort -type Service . The default Ingress Controller will continue to monitor all other cluster traffic. USD oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"namespaceSelector":{"matchExpressions":[{"key":"<key>","operator":"NotIn","values":["<value>"]}]}}}' Verification Verify that the DNS entry can route inside and outside of your cluster by entering the following command. The command outputs the IP address of the node that received the label from running the oc label node command earlier in the procedure. USD dig +short <svc_name>-<project_name>.<custom_ic_domain_name> To verify that your cluster uses the IP addresses from external DNS servers for DNS resolution, check the connection of your cluster by entering the following command: USD curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port> 1 1 Where <port> is the node port from the NodePort -type Service . Based on example output from the oc get svc -n openshift-ingress command, the 80:32432/TCP HTTP route means that 32432 is the node port. Output example Hello OpenShift! 27.4.2. Additional resources Ingress Controller configuration parameters Setting RHOSP Cloud Controller Manager options User-provisioned DNS requirements 27.5.
Configuring ingress cluster traffic using a load balancer OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a load balancer. 27.5.1. Using a load balancer to get traffic into the cluster If you do not need a specific external IP address, you can configure a load balancer service to allow external access to an OpenShift Container Platform cluster. A load balancer service allocates a unique IP. The load balancer has a single edge router IP, which can be a virtual IP (VIP), but is still a single machine for initial load balancing. Note If a pool is configured, it is done at the infrastructure level, not by a cluster administrator. Note The procedures in this section require prerequisites performed by the cluster administrator. 27.5.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 27.5.3. Creating a project and service If the project and service that you want to expose does not exist, create the project and then create the service. If the project and service already exists, skip to the procedure on exposing the service to create a route. Prerequisites Install the OpenShift CLI ( oc ) and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project <project_name> Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n <project_name> Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s Note By default, the new service does not have an external IP address. 27.5.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Prerequisites You logged into OpenShift Container Platform. Procedure Log in to the project where the service you want to expose is located: USD oc project <project_name> Run the oc expose service command to expose the route: USD oc expose service nodejs-ex Example output route.route.openshift.io/nodejs-ex exposed To verify that the service is exposed, you can use a tool, such as curl to check that the service is accessible from outside the cluster. To find the hostname of the route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None To check that the host responds to a GET request, enter the following command: Example curl command USD curl --head nodejs-ex-myproject.example.com Example output HTTP/1.1 200 OK ... 27.5.5. Creating a load balancer service Use the following procedure to create a load balancer service. Prerequisites Make sure that the project and service you want to expose exist. 
Your cloud provider supports load balancers. Procedure To create a load balancer service: Log in to OpenShift Container Platform. Load the project where the service you want to expose is located. USD oc project project1 Open a text file on the control plane node and paste the following text, editing the file as needed: Sample load balancer configuration file 1 Enter a descriptive name for the load balancer service. 2 Enter the same port that the service you want to expose is listening on. 3 Enter a list of specific IP addresses to restrict traffic through the load balancer. This field is ignored if the cloud-provider does not support the feature. 4 Enter Loadbalancer as the type. 5 Enter the name of the service. Note To restrict the traffic through the load balancer to specific IP addresses, it is recommended to use the Ingress Controller field spec.endpointPublishingStrategy.loadBalancer.allowedSourceRanges . Do not set the loadBalancerSourceRanges field. Save and exit the file. Run the following command to create the service: USD oc create -f <file-name> For example: USD oc create -f mysql-lb.yaml Execute the following command to view the new service: USD oc get svc Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m The service has an external IP address automatically assigned if there is a cloud provider enabled. On the master, use a tool, such as cURL, to make sure you can reach the service using the public IP address: USD curl <public-ip>:<port> For example: USD curl 172.29.121.74:3306 The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connecting with the service: If you have a MySQL client, log in with the standard CLI command: USD mysql -h 172.30.131.89 -u admin -p Example output Enter password: Welcome to the MariaDB monitor. Commands end with ; or \g. MySQL [(none)]> 27.6. Configuring ingress cluster traffic on AWS OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses load balancers on AWS, specifically a Network Load Balancer (NLB) or a Classic Load Balancer (CLB). Both types of load balancers can forward the client's IP address to the node, but a CLB requires proxy protocol support, which OpenShift Container Platform automatically enables. There are two ways to configure an Ingress Controller to use an NLB: By force replacing the Ingress Controller that is currently using a CLB. This deletes the IngressController object and an outage will occur while the new DNS records propagate and the NLB is being provisioned. By editing an existing Ingress Controller that uses a CLB to use an NLB. This changes the load balancer without having to delete and recreate the IngressController object. Both methods can be used to switch from an NLB to a CLB. You can configure these load balancers on a new or existing AWS cluster. 27.6.1. Configuring Classic Load Balancer timeouts on AWS OpenShift Container Platform provides a method for setting a custom timeout period for a specific route or Ingress Controller. Additionally, an AWS Classic Load Balancer (CLB) has its own timeout period with a default time of 60 seconds. If the timeout period of the CLB is shorter than the route timeout or Ingress Controller timeout, the load balancer can prematurely terminate the connection. 
You can prevent this problem by increasing both the timeout period of the route and CLB. 27.6.1.1. Configuring route timeouts You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end. Prerequisites You need a deployed Ingress Controller on a running cluster. Procedure Using the oc annotate command, add the timeout to the route: USD oc annotate route <route_name> \ --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1 1 Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d). The following example sets a timeout of two seconds on a route named myroute : USD oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s 27.6.1.2. Configuring Classic Load Balancer timeouts You can configure the default timeouts for a Classic Load Balancer (CLB) to extend idle connections. Prerequisites You must have a deployed Ingress Controller on a running cluster. Procedure Set an AWS connection idle timeout of five minutes for the default ingresscontroller by running the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"type":"LoadBalancerService", "loadBalancer": \ {"scope":"External", "providerParameters":{"type":"AWS", "aws": \ {"type":"Classic", "classicLoadBalancer": \ {"connectionIdleTimeout":"5m"}}}}}}}' Optional: Restore the default value of the timeout by running the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"loadBalancer":{"providerParameters":{"aws":{"classicLoadBalancer": \ {"connectionIdleTimeout":null}}}}}}}' Note You must specify the scope field when you change the connection timeout value unless the current scope is already set. When you set the scope field, you do not need to do so again if you restore the default timeout value. 27.6.2. Configuring ingress cluster traffic on AWS using a Network Load Balancer OpenShift Container Platform provides methods for communicating from outside the cluster with services that run in the cluster. One such method uses a Network Load Balancer (NLB). You can configure an NLB on a new or existing AWS cluster. 27.6.2.1. Switching the Ingress Controller from using a Classic Load Balancer to a Network Load Balancer You can switch the Ingress Controller that is using a Classic Load Balancer (CLB) to one that uses a Network Load Balancer (NLB) on AWS. Switching between these load balancers will not delete the IngressController object. Warning This procedure might cause the following issues: An outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. Leaked load balancer resources due to a change in the annotation of the service. Procedure Modify the existing Ingress Controller that you want to switch to using an NLB. 
This example assumes that your default Ingress Controller has an External scope and no other customizations: Example ingresscontroller.yaml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService Note If you do not specify a value for the spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.type field, the Ingress Controller uses the spec.loadBalancer.platform.aws.type value from the cluster Ingress configuration that was set during installation. Tip If your Ingress Controller has other customizations that you want to update, such as changing the domain, consider force replacing the Ingress Controller definition file instead. Apply the changes to the Ingress Controller YAML file by running the command: USD oc apply -f ingresscontroller.yaml Expect several minutes of outages while the Ingress Controller updates. 27.6.2.2. Switching the Ingress Controller from using a Network Load Balancer to a Classic Load Balancer You can switch the Ingress Controller that is using a Network Load Balancer (NLB) to one that uses a Classic Load Balancer (CLB) on AWS. Switching between these load balancers will not delete the IngressController object. Warning This procedure might cause an outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. Procedure Modify the existing Ingress Controller that you want to switch to using a CLB. This example assumes that your default Ingress Controller has an External scope and no other customizations: Example ingresscontroller.yaml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: Classic type: LoadBalancerService Note If you do not specify a value for the spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.type field, the Ingress Controller uses the spec.loadBalancer.platform.aws.type value from the cluster Ingress configuration that was set during installation. Tip If your Ingress Controller has other customizations that you want to update, such as changing the domain, consider force replacing the Ingress Controller definition file instead. Apply the changes to the Ingress Controller YAML file by running the command: USD oc apply -f ingresscontroller.yaml Expect several minutes of outages while the Ingress Controller updates. 27.6.2.3. Replacing Ingress Controller Classic Load Balancer with Network Load Balancer You can replace an Ingress Controller that is using a Classic Load Balancer (CLB) with one that uses a Network Load Balancer (NLB) on AWS. Warning This procedure might cause the following issues: An outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. Leaked load balancer resources due to a change in the annotation of the service. Procedure Create a file with a new default Ingress Controller. 
The following example assumes that your default Ingress Controller has an External scope and no other customizations: Example ingresscontroller.yml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService If your default Ingress Controller has other customizations, ensure that you modify the file accordingly. Tip If your Ingress Controller has no other customizations and you are only updating the load balancer type, consider following the procedure detailed in "Switching the Ingress Controller from using a Classic Load Balancer to a Network Load Balancer". Force replace the Ingress Controller YAML file: USD oc replace --force --wait -f ingresscontroller.yml Wait until the Ingress Controller is replaced. Expect several minutes of outages. 27.6.2.4. Configuring an Ingress Controller Network Load Balancer on an existing AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on an existing cluster. Prerequisites You must have an installed AWS cluster. PlatformStatus of the infrastructure resource must be AWS. To verify that the PlatformStatus is AWS, run: USD oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS Procedure Create an Ingress Controller backed by an AWS NLB on an existing cluster. Create the Ingress Controller manifest: USD cat ingresscontroller-aws-nlb.yaml Example output apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB 1 Replace USDmy_ingress_controller with a unique name for the Ingress Controller. 2 Replace USDmy_unique_ingress_domain with a domain name that is unique among all Ingress Controllers in the cluster. This variable must be a subdomain of the DNS name <clustername>.<domain> . 3 You can replace External with Internal to use an internal NLB. Create the resource in the cluster: USD oc create -f ingresscontroller-aws-nlb.yaml Important Before you can configure an Ingress Controller NLB on a new AWS cluster, you must complete the Creating the installation configuration file procedure. 27.6.2.5. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Create an Ingress Controller backed by an AWS NLB on a new cluster. Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster.
After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster. 27.6.3. Additional resources Installing a cluster on AWS with network customizations . For more information on support for NLBs, see Network Load Balancer support on AWS . For more information on proxy protocol support for CLBs, see Configure proxy protocol support for your Classic Load Balancer 27.7. Configuring ingress cluster traffic for a service external IP You can use either a MetalLB implementation or an IP failover deployment to attach an ExternalIP resource to a service so that the service is available to traffic outside your OpenShift Container Platform cluster. Hosting an external IP address in this way is only applicable for a cluster installed on bare-metal hardware. You must ensure that you correctly configure the external network infrastructure to route traffic to the service. 27.7.1. Prerequisites Your cluster is configured with ExternalIPs enabled. For more information, read Configuring ExternalIPs for services . Note Do not use the same ExternalIP for the egress IP. 27.7.2. Attaching an ExternalIP to a service You can attach an ExternalIP resource to a service. If you configured your cluster to automatically attach the resource to a service, you might not need to manually attach an ExternalIP to the service. The examples in the procedure use a scenario that manually attaches an ExternalIP resource to a service in a cluster with an IP failover configuration. Procedure Confirm compatible IP address ranges for the ExternalIP resource by entering the following command in your CLI: USD oc get networks.config cluster -o jsonpath='{.spec.externalIP}{"\n"}' Note If autoAssignCIDRs is set and you did not specify a value for spec.externalIPs in the ExternalIP resource, OpenShift Container Platform automatically assigns ExternalIP to a new Service object. Choose one of the following options to attach an ExternalIP resource to the service: If you are creating a new service, specify a value in the spec.externalIPs field and array of one or more valid IP addresses in the allowedCIDRs parameter. Example of service YAML configuration file that supports an ExternalIP resource apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: policy: allowedCIDRs: - 192.168.123.0/28 If you are attaching an ExternalIP to an existing service, enter the following command. Replace <name> with the service name. Replace <ip_address> with a valid ExternalIP address. You can provide multiple IP addresses separated by commas. 
USD oc patch svc <name> -p \ '{ "spec": { "externalIPs": [ "<ip_address>" ] } }' For example: USD oc patch svc mysql-55-rhel7 -p '{"spec":{"externalIPs":["192.174.120.10"]}}' Example output "mysql-55-rhel7" patched To confirm that an ExternalIP address is attached to the service, enter the following command. If you specified an ExternalIP for a new service, you must create the service first. USD oc get svc Example output NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m 27.7.3. Additional resources About MetalLB and the MetalLB Operator Configuring IP failover Configuring ExternalIPs for services 27.8. Configuring ingress cluster traffic by using a NodePort OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a NodePort . 27.8.1. Using a NodePort to get traffic into the cluster Use a NodePort -type Service resource to expose a service on a specific port on all nodes in the cluster. The port is specified in the Service resource's .spec.ports[*].nodePort field. Important Using a node port requires additional port resources. A NodePort exposes the service on a static port on the node's IP address. NodePort s are in the 30000 to 32767 range by default, which means a NodePort is unlikely to match a service's intended port. For example, port 8080 may be exposed as port 31020 on the node. The administrator must ensure the external IP addresses are routed to the nodes. NodePort s and external IPs are independent and both can be used concurrently. Note The procedures in this section require prerequisites performed by the cluster administrator. 27.8.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 27.8.3. Creating a project and service If the project and service that you want to expose does not exist, create the project and then create the service. If the project and service already exists, skip to the procedure on exposing the service to create a route. Prerequisites Install the OpenShift CLI ( oc ) and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project <project_name> Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n <project_name> Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s Note By default, the new service does not have an external IP address. 27.8.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Prerequisites You logged into OpenShift Container Platform. 
Procedure Log in to the project where the service you want to expose is located: USD oc project <project_name> To expose a node port for the application, modify the custom resource definition (CRD) of a service by entering the following command: USD oc edit svc <service_name> Example output spec: ports: - name: 8443-tcp nodePort: 30327 1 port: 8443 protocol: TCP targetPort: 8443 sessionAffinity: None type: NodePort 2 1 Optional: Specify the node port range for the application. By default, OpenShift Container Platform selects an available port in the 30000-32767 range. 2 Define the service type. Optional: To confirm the service is available with a node port exposed, enter the following command: USD oc get svc -n myproject Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s Optional: To remove the service created automatically by the oc new-app command, enter the following command: USD oc delete svc nodejs-ex Verification To check that the service node port is updated with a port in the 30000-32767 range, enter the following command: USD oc get svc In the following example output, the updated port is 30327 : Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE httpd NodePort 172.xx.xx.xx <none> 8443:30327/TCP 109s 27.8.5. Additional resources Configuring the node port service range Adding a single NodePort service to an Ingress Controller 27.9. Configuring ingress cluster traffic using load balancer allowed source ranges You can specify a list of IP address ranges for the IngressController . This restricts access to the load balancer service when the endpointPublishingStrategy is LoadBalancerService . 27.9.1. Configuring load balancer allowed source ranges You can enable and configure the spec.endpointPublishingStrategy.loadBalancer.allowedSourceRanges field. By configuring load balancer allowed source ranges, you can limit the access to the load balancer for the Ingress Controller to a specified list of IP address ranges. The Ingress Operator reconciles the load balancer Service and sets the spec.loadBalancerSourceRanges field based on AllowedSourceRanges . Note If you have already set the spec.loadBalancerSourceRanges field or the load balancer service annotation service.beta.kubernetes.io/load-balancer-source-ranges in a version of OpenShift Container Platform, the Ingress Controller starts reporting Progressing=True after an upgrade. To fix this, set AllowedSourceRanges so that it overwrites the spec.loadBalancerSourceRanges field and clears the service.beta.kubernetes.io/load-balancer-source-ranges annotation. The Ingress Controller starts reporting Progressing=False again. Prerequisites You have a deployed Ingress Controller on a running cluster. Procedure Set the allowed source ranges API for the Ingress Controller by running the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"type":"LoadBalancerService", "loadBalancer": \ {"scope":"External", "allowedSourceRanges":["0.0.0.0/0"]}}}}' 1 1 The example value 0.0.0.0/0 specifies the allowed source range. 27.9.2. Migrating to load balancer allowed source ranges If you have already set the annotation service.beta.kubernetes.io/load-balancer-source-ranges , you can migrate to load balancer allowed source ranges.
When you set the AllowedSourceRanges , the Ingress Controller sets the spec.loadBalancerSourceRanges field based on the AllowedSourceRanges value and unsets the service.beta.kubernetes.io/load-balancer-source-ranges annotation. Note If you have already set the spec.loadBalancerSourceRanges field or the load balancer service annotation service.beta.kubernetes.io/load-balancer-source-ranges in a version of OpenShift Container Platform, the Ingress Controller starts reporting Progressing=True after an upgrade. To fix this, set AllowedSourceRanges so that it overwrites the spec.loadBalancerSourceRanges field and clears the service.beta.kubernetes.io/load-balancer-source-ranges annotation. The Ingress Controller starts reporting Progressing=False again. Prerequisites You have set the service.beta.kubernetes.io/load-balancer-source-ranges annotation. Procedure Ensure that the service.beta.kubernetes.io/load-balancer-source-ranges annotation is set: USD oc get svc router-default -n openshift-ingress -o yaml Example output apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/load-balancer-source-ranges: 192.168.0.1/32 Ensure that the spec.loadBalancerSourceRanges field is unset: USD oc get svc router-default -n openshift-ingress -o yaml Example output ... spec: loadBalancerSourceRanges: - 0.0.0.0/0 ... Update your cluster to OpenShift Container Platform 4.14. Set the allowed source ranges API for the ingresscontroller by running the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"loadBalancer":{"allowedSourceRanges":["0.0.0.0/0"]}}}}' 1 1 The example value 0.0.0.0/0 specifies the allowed source range. 27.9.3. Additional resources Introduction to OpenShift updates 27.10. Patching existing ingress objects You can update or modify the following fields of existing Ingress objects without recreating the objects or disrupting services to them: Specifications Host Path Backend services SSL/TLS settings Annotations 27.10.1. Patching Ingress objects to resolve an ingressWithoutClassName alert The ingressClassName field specifies the name of the IngressClass object. You must define the ingressClassName field for each Ingress object. If you have not defined the ingressClassName field for an Ingress object, you could experience routing issues. After 24 hours, you will receive an ingressWithoutClassName alert to remind you to set the ingressClassName field. Procedure Patch the Ingress objects with a completed ingressClassName field to ensure proper routing and functionality. List all IngressClass objects: USD oc get ingressclass List all Ingress objects in all namespaces: USD oc get ingress -A Patch the Ingress object: USD oc patch ingress/<ingress_name> --type=merge --patch '{"spec":{"ingressClassName":"openshift-default"}}' Replace <ingress_name> with the name of the Ingress object. This command patches the Ingress object to include the desired ingress class name.
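For reference, a patched Ingress object carries the class name in its spec. The following is a minimal sketch of what such an object might look like after patching; the object name, namespace, hostname, and backend service shown here are illustrative placeholders rather than values from your cluster, and openshift-default is simply the class name used in the example patch command above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # illustrative name
  namespace: example-namespace     # illustrative namespace
spec:
  ingressClassName: openshift-default       # class name set by the example patch above
  rules:
  - host: example.apps.mycluster.example.com   # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service            # illustrative backend service
            port:
              number: 8080
Creating or patching Ingress objects with ingressClassName populated in this way should prevent the ingressWithoutClassName alert from firing for those objects.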
[ "apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253", "{ \"policy\": { \"allowedCIDRs\": [], \"rejectedCIDRs\": [] } }", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {}", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 externalIP: policy: {}", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2", "policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: - 192.168.132.254/29", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32", "oc describe networks.config cluster", "oc edit networks.config cluster", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: 1", "oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{\"\\n\"}}'", "oc adm policy add-cluster-role-to-user cluster-admin username", "oc new-project <project_name>", "oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git", "oc get svc -n <project_name>", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project <project_name>", "oc expose service nodejs-ex", "route.route.openshift.io/nodejs-ex exposed", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None", "curl --head nodejs-ex-myproject.example.com", "HTTP/1.1 200 OK", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: finops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - finance - ops", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: dev-router namespace: openshift-ingress-operator spec: namespaceSelector: matchLabels: name: dev", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: devops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - dev - ops", "oc edit ingresscontroller -n openshift-ingress-operator default", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: NotIn values: - finance - ops - dev", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded", "oc apply -f router-internal.yaml", "oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net", 
"cat router-internal.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded", "oc apply -f router-internal.yaml", "oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net", "oc new-project hello-openshift", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json", "oc expose pod/hello-openshift", "apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift", "oc -n hello-openshift create -f hello-openshift-route.yaml", "oc -n hello-openshift get routes/hello-openshift-edge -o yaml", "apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: internal namespace: openshift-ingress-operator spec: domain: example.com endpointPublishingStrategy: type: HostNetwork hostNetwork: httpPort: 80 httpsPort: 443 statsPort: 1936", "oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"Internal\"}}}}'", "oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml", "oc -n openshift-ingress delete services/router-default", "oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"External\"}}}}'", "oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml", "oc -n openshift-ingress delete services/router-default", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_ic_name> 1 namespace: openshift-ingress-operator spec: replicas: 1 domain: <custom_ic_domain_name> 2 nodePlacement: nodeSelector: matchLabels: <key>: <value> 3 namespaceSelector: matchLabels: <key>: <value> 4 endpointPublishingStrategy: type: NodePortService", "oc label node <node_name> <key>=<value> 1", "oc create -f <ingress_controller_cr>.yaml", "oc get svc -n openshift-ingress", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-internal-default ClusterIP 172.30.195.74 <none> 80/TCP,443/TCP,1936/TCP 223d router-nodeport-custom-ic3 NodePort 172.30.109.219 <none> 80:32432/TCP,443:31366/TCP,1936:30499/TCP 155m", "oc new-project <project_name>", "oc label namespace <project_name> <key>=<value> 1", "oc new-app --image=<image_name> 1", "oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name> 1", "oc get route/hello-openshift -o json | jq '.status.ingress'", "{ \"conditions\": [ { \"lastTransitionTime\": \"2024-05-17T18:25:41Z\", \"status\": \"True\", \"type\": \"Admitted\" } ], [ { \"host\": 
\"hello-openshift.nodeportsvc.ipi-cluster.example.com\", \"routerCanonicalHostname\": \"router-nodeportsvc.nodeportsvc.ipi-cluster.example.com\", \"routerName\": \"nodeportsvc\", \"wildcardPolicy\": \"None\" } ], }", "oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"namespaceSelector\":{\"matchExpressions\":[{\"key\":\"<key>\",\"operator\":\"NotIn\",\"values\":[\"<value>]}]}}}'", "dig +short <svc_name>-<project_name>.<custom_ic_domain_name>", "curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port> 1", "Hello OpenShift!", "oc adm policy add-cluster-role-to-user cluster-admin username", "oc new-project <project_name>", "oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git", "oc get svc -n <project_name>", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project <project_name>", "oc expose service nodejs-ex", "route.route.openshift.io/nodejs-ex exposed", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None", "curl --head nodejs-ex-myproject.example.com", "HTTP/1.1 200 OK", "oc project project1", "apiVersion: v1 kind: Service metadata: name: egress-2 1 spec: ports: - name: db port: 3306 2 loadBalancerIP: loadBalancerSourceRanges: 3 - 10.0.0.0/8 - 192.168.0.0/16 type: LoadBalancer 4 selector: name: mysql 5", "oc create -f <file-name>", "oc create -f mysql-lb.yaml", "oc get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m", "curl <public-ip>:<port>", "curl 172.29.121.74:3306", "mysql -h 172.30.131.89 -u admin -p", "Enter password: Welcome to the MariaDB monitor. Commands end with ; or \\g. 
MySQL [(none)]>", "oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1", "oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s", "oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"type\":\"LoadBalancerService\", \"loadBalancer\": {\"scope\":\"External\", \"providerParameters\":{\"type\":\"AWS\", \"aws\": {\"type\":\"Classic\", \"classicLoadBalancer\": {\"connectionIdleTimeout\":\"5m\"}}}}}}}'", "oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"loadBalancer\":{\"providerParameters\":{\"aws\":{\"classicLoadBalancer\": {\"connectionIdleTimeout\":null}}}}}}}'", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", "oc apply -f ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: Classic type: LoadBalancerService", "oc apply -f ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", "oc replace --force --wait -f ingresscontroller.yml", "oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS", "cat ingresscontroller-aws-nlb.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB", "oc create -f ingresscontroller-aws-nlb.yaml", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", "oc get networks.config cluster -o jsonpath='{.spec.externalIP}{\"\\n\"}'", "apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: policy: allowedCIDRs: - 192.168.123.0/28", "oc patch svc <name> -p '{ \"spec\": { \"externalIPs\": [ \"<ip_address>\" ] } }'", "oc patch svc mysql-55-rhel7 -p '{\"spec\":{\"externalIPs\":[\"192.174.120.10\"]}}'", "\"mysql-55-rhel7\" patched", "oc get svc", "NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m", "oc adm policy add-cluster-role-to-user cluster-admin <user_name>", "oc new-project <project_name>", "oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git", 
"oc get svc -n <project_name>", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project <project_name>", "oc edit svc <service_name>", "spec: ports: - name: 8443-tcp nodePort: 30327 1 port: 8443 protocol: TCP targetPort: 8443 sessionAffinity: None type: NodePort 2", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s", "oc delete svc nodejs-ex", "oc get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE httpd NodePort 172.xx.xx.xx <none> 8443:30327/TCP 109s", "oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"type\":\"LoadBalancerService\", \"loadbalancer\": {\"scope\":\"External\", \"allowedSourceRanges\":[\"0.0.0.0/0\"]}}}}' 1", "oc get svc router-default -n openshift-ingress -o yaml", "apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/load-balancer-source-ranges: 192.168.0.1/32", "oc get svc router-default -n openshift-ingress -o yaml", "spec: loadBalancerSourceRanges: - 0.0.0.0/0", "oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"loadBalancer\":{\"allowedSourceRanges\":[\"0.0.0.0/0\"]}}}}' 1", "oc get ingressclass", "oc get ingress -A", "oc patch ingress/<ingress_name> --type=merge --patch '{\"spec\":{\"ingressClassName\":\"openshift-default\"}}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/configuring-ingress-cluster-traffic
Preface
Preface To get started with Fuse, you need to download and install the files for your Spring Boot container. The information and instructions here guide you in installing, developing, and building your first Fuse application. Chapter 1, Getting started with Fuse on Spring Boot Chapter 2, Setting up Maven locally
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/getting_started_with_fuse_on_spring_boot/pr01
Chapter 3. Understanding Windows container workloads
Chapter 3. Understanding Windows container workloads Red Hat OpenShift support for Windows Containers provides built-in support for running Microsoft Windows Server containers on OpenShift Container Platform. For those that administer heterogeneous environments with a mix of Linux and Windows workloads, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). Note Multi-tenancy for clusters that have Windows nodes is not supported. Hostile multi-tenant usage introduces security concerns in all Kubernetes environments. Additional security features like pod security policies , or more fine-grained role-based access control (RBAC) for nodes, make exploits more difficult. However, if you choose to run hostile multi-tenant workloads, a hypervisor is the only security option you should use. The security domain for Kubernetes encompasses the entire cluster, not an individual node. For these types of hostile multi-tenant workloads, you should use physically isolated clusters. Windows Server Containers provide resource isolation using a shared kernel but are not intended to be used in hostile multitenancy scenarios. Scenarios that involve hostile multitenancy should use Hyper-V Isolated Containers to strongly isolate tenants. 3.1. Windows Machine Config Operator prerequisites The following information details the supported platform versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator. See the vSphere documentation for any information that is relevant to only that platform. 3.1.1. WMCO 6 supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 6.0.0 and WMCO 6.0.1, based on the applicable platform. Windows Server versions not listed are not supported and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform. Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2019, version 1809 Microsoft Azure Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 VMware vSphere Windows Server 2022, OS Build 20348.681 or later Bare metal or provider agnostic Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 3.1.2. Supported networking Hybrid networking with OVN-Kubernetes is the only supported networking configuration. See the additional resources below for more information on this functionality. The following tables outline the type of networking configuration and Windows Server versions to use based on your platform. You must specify the network configuration when you install the cluster. Be aware that OpenShift SDN networking is the default network for OpenShift Container Platform clusters. However, OpenShift SDN is not supported by WMCO. Table 3.1. Platform networking support Platform Supported networking Amazon Web Services (AWS) Hybrid networking with OVN-Kubernetes Microsoft Azure Hybrid networking with OVN-Kubernetes VMware vSphere Hybrid networking with OVN-Kubernetes with a custom VXLAN port Bare metal or provider agnostic Hybrid networking with OVN-Kubernetes Table 3.2. 
Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 Custom VXLAN port Windows Server 2022, OS Build 20348.681 or later Additional resources See Configuring hybrid networking with OVN-Kubernetes 3.2. Windows workload management To run Windows workloads in your cluster, you must first install the Windows Machine Config Operator (WMCO). The WMCO is a Linux-based Operator that runs on Linux-based control plane and compute nodes. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. Figure 3.1. WMCO design Before deploying Windows workloads, you must create a Windows compute node and have it join the cluster. The Windows node hosts the Windows workloads in a cluster, and can run alongside other Linux-based compute nodes. You can create a Windows compute node by creating a Windows machine set to host Windows Server compute machines. You must apply a Windows-specific label to the machine set that specifies a Windows OS image. The WMCO watches for machines with the Windows label. After a Windows machine set is detected and its respective machines are provisioned, the WMCO configures the underlying Windows virtual machine (VM) so that it can join the cluster as a compute node. Figure 3.2. Mixed Windows and Linux workloads The WMCO expects a predetermined secret in its namespace containing a private key that is used to interact with the Windows instance. WMCO checks for this secret during boot up time and creates a user data secret which you must reference in the Windows MachineSet object that you created. Then the WMCO populates the user data secret with a public key that corresponds to the private key. With this data in place, the cluster can connect to the Windows VM using an SSH connection. After the cluster establishes a connection with the Windows VM, you can manage the Windows node using similar practices as you would a Linux-based node. Note The OpenShift Container Platform web console provides most of the same monitoring capabilities for Windows nodes that are available for Linux nodes. However, the ability to monitor workload graphs for pods running on Windows nodes is not available at this time. Scheduling Windows workloads to a Windows node can be done with typical pod scheduling practices like taints, tolerations, and node selectors; alternatively, you can differentiate your Windows workloads from Linux workloads and other Windows-versioned workloads by using a RuntimeClass object. 3.3. Windows node services The following Windows-specific services are installed on each Windows node: Service Description kubelet Registers the Windows node and manages its status. Container Network Interface (CNI) plugins Exposes networking for Windows nodes. Windows Machine Config Bootstrapper (WMCB) Configures the kubelet and CNI plugins. Windows Exporter Exports Prometheus metrics from Windows nodes Kubernetes Cloud Controller Manager (CCM) Interacts with the underlying Azure cloud platform. hybrid-overlay Creates the OpenShift Container Platform Host Network Service (HNS) . kube-proxy Maintains network rules on nodes allowing outside communication. containerd container runtime Manages the complete container lifecycle. 3.4. 
Known limitations Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes): The following OpenShift Container Platform features are not supported on Windows nodes: Image builds OpenShift Pipelines OpenShift Service Mesh OpenShift monitoring of user-defined projects OpenShift Serverless Horizontal Pod Autoscaling Vertical Pod Autoscaling The following Red Hat features are not supported on Windows nodes: Red Hat cost management Red Hat OpenShift Local Windows nodes do not support pulling container images from private registries. You can use images from public registries or pre-pull the images. Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads. Windows nodes are not supported in clusters that use a cluster-wide proxy. This is because the WMCO is not able to route traffic through the proxy connection for the workloads. Windows nodes are not supported in clusters that are in a disconnected environment. Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN. Red Hat OpenShift support for Windows Containers supports only in-tree storage drivers for all cloud providers. Kubernetes has identified the following node feature limitations : Huge pages are not supported for Windows containers. Privileged containers are not supported for Windows containers. Kubernetes has identified several API compatibility issues .
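The scheduling options mentioned above (taints, tolerations, node selectors, and RuntimeClass objects) can be combined in an ordinary pod specification. The following is a minimal sketch rather than a supported sample: it assumes a Windows node that carries the kubernetes.io/os=windows label and an os=Windows:NoSchedule taint (the exact taint key and value depend on how your Windows machine set was configured), and the container image and command are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: win-workload-example
spec:
  nodeSelector:
    kubernetes.io/os: windows          # schedule only onto Windows nodes
  tolerations:
  - key: "os"                          # tolerate the taint on WMCO-managed Windows nodes
    operator: "Equal"
    value: "Windows"
    effect: "NoSchedule"
  containers:
  - name: win-workload-example
    image: mcr.microsoft.com/windows/servercore:ltsc2019                     # placeholder image
    command: ["powershell.exe", "-Command", "Start-Sleep -Seconds 3600"]     # placeholder command

If you run several Windows Server versions in one cluster, a RuntimeClass object can hold the node selector and tolerations centrally so that individual pod specs only need to reference runtimeClassName.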
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/windows_container_support_for_openshift/understanding-windows-container-workloads
7.4. Using Docker on Red Hat Enterprise Linux 7
7.4. Using Docker on Red Hat Enterprise Linux 7 Docker and Docker Registry have been released as part of the Extras channel in Red Hat Enterprise Linux. Once the Extras channel is enabled, the packages can be installed in the usual way. For more information on installing packages or enabling channels, see the System Administrator's Guide. Red Hat provides a registry of certified docker-formatted container images. This registry provides pre-built solutions that you can use on Red Hat Enterprise Linux 7 with the Docker service. To download container images from the Red Hat Atomic Registry, see the Red Hat Atomic Container Images search page. Download images with the docker pull command. Note that the docker service must be running to use this command.
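For example, on a registered Red Hat Enterprise Linux 7 Server system, enabling the Extras channel, installing and starting Docker, and pulling a certified image looks similar to the following (the rhel7 base image is used here only as an example):

subscription-manager repos --enable=rhel-7-server-extras-rpms
yum install docker
systemctl start docker.service
docker pull registry.access.redhat.com/rhel7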
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-linux_containers_with_docker_format-using_docker
Chapter 1. Introduction
Chapter 1. Introduction The Red Hat OpenStack Platform director provides a set of tools to provision and create a fully featured OpenStack environment, also known as the Overcloud. The Director Installation and Usage Guide covers the preparation and configuration of the Overcloud. However, a production-level Overcloud might require additional configuration, including: Basic network configuration to integrate the Overcloud into your existing network infrastructure. Network traffic isolation on separate VLANs for certain OpenStack network traffic types. SSL configuration to secure communication on public endpoints. Storage options such as NFS, iSCSI, Red Hat Ceph Storage, and multiple third-party storage devices. Registration of nodes to the Red Hat Content Delivery Network or your internal Red Hat Satellite 5 or 6 server. Various system-level options. Various OpenStack service options. This guide provides instructions for augmenting your Overcloud through the director. At this point, the director has registered the nodes and configured the necessary services for Overcloud creation. Now you can customize your Overcloud using the methods in this guide. Note The examples in this guide are optional steps for configuring the Overcloud. These steps are only required to provide the Overcloud with additional functionality. Use the steps that apply to the needs of your environment.
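Most of the customizations described in this guide are applied by passing additional environment files to the deployment command. The following is only a sketch; the file paths are hypothetical examples, and the real environment files you include depend on the features you configure:

openstack overcloud deploy --templates \
  -e /home/stack/templates/network-environment.yaml \
  -e /home/stack/templates/custom-configuration.yaml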
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/chap-introduction
Chapter 8. Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment
Chapter 8. Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment Red Hat Ceph Storage Dashboard is disabled by default but you can now enable it in your overcloud with the Red Hat OpenStack Platform director. The Ceph Dashboard is a built-in, web-based Ceph management and monitoring application to administer various aspects and objects in your cluster. Red Hat Ceph Storage Dashboard comprises the Ceph Dashboard manager module, which provides the user interface and embeds Grafana, the front end of the platform, Prometheus as a monitoring plugin, Alertmanager and Node Exporters that are deployed throughout the cluster and send alerts and export cluster data to the Dashboard. Note This feature is supported with Ceph Storage 4.1 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions . Note The Red Hat Ceph Storage Dashboard is always colocated on the same nodes as the other Ceph manager components. Note If you want to add Ceph Dashboard during your initial overcloud deployment, complete the procedures in this chapter before you deploy your initial overcloud in Section 7.2, "Initiating overcloud deployment" . The following diagram shows the architecture of Ceph Dashboard on Red Hat OpenStack Platform: For more information about the Dashboard and its features and limitations, see Dashboard features in the Red Hat Ceph Storage Dashboard Guide . TLS everywhere with Ceph Dashboard The dashboard front end is fully integrated with the TLS everywhere framework. You can enable TLS everywhere provided that you have the required environment files and they are included in the overcloud deploy command. This triggers the certificate request for both Grafana and the Ceph Dashboard and the generated certificate and key files are passed to ceph-ansible during the overcloud deployment. For instructions and more information about how to enable TLS for the Dashboard as well as for other openstack services, see the following locations in the Advanced Overcloud Customization guide: Enabling SSL/TLS on Overcloud Public Endpoints . Enabling SSL/TLS on Internal and Public Endpoints with Identity Management . Note The port to reach the Ceph Dashboard remains the same even in the TLS-everywhere context. 8.1. Including the necessary containers for the Ceph Dashboard Before you can add the Ceph Dashboard templates to your overcloud, you must include the necessary containers by using the containers-prepare-parameter.yaml file. To generate the containers-prepare-parameter.yaml file to prepare your container images, complete the following steps: Procedure Log in to your undercloud host as the stack user. Generate the default container image preparation file: Edit the containers-prepare-parameter.yaml file and make the modifications to suit your requirements. The following example containers-prepare-parameter.yaml file contains the image locations and tags related to the Dashboard services including Grafana, Prometheus, Alertmanager, and Node Exporter. Edit the values depending on your specific scenario: For more information about registry and image configuration with the containers-prepare-parameter.yaml file, see Container image preparation parameters in the Transitioning to Containerized Services guide. 8.2. Deploying Ceph Dashboard Note The Ceph Dashboard admin user role is set to read-only mode by default. 
To change the Ceph Dashboard admin default mode, see Section 8.3, "Changing the default permissions" . Procedure Log in to the undercloud node as the stack user. Include the following environment files, with all environment files that are part of your existing deployment, in the openstack overcloud deploy command: Replace <existing_overcloud_environment_files> with the list of environment files that are part of your existing deployment. Result The resulting deployment comprises an external stack with the grafana, prometheus, alertmanager, and node-exporter containers. The Ceph Dashboard manager module is the back end for this stack and embeds the grafana layouts to provide ceph cluster specific metrics to the end users. 8.3. Changing the default permissions The Ceph Dashboard admin user role is set to read-only mode by default for safe monitoring of the Ceph cluster. To permit an admin user to have elevated privileges so that they can alter elements of the Ceph cluster with the Dashboard, you can use the CephDashboardAdminRO parameter to change the default admin permissions. Warning A user with full permissions might alter elements of your cluster that director configures. This can cause a conflict with director-configured options when you run a stack update. To avoid this problem, do not alter director-configured options with Ceph Dashboard, for example, Ceph OSP pools attributes. Procedure Log in to the undercloud as the stack user. Create the following ceph_dashboard_admin.yaml environment file: Run the overcloud deploy command to update the existing stack and include the environment file you created with all other environment files that are part of your existing deployment: Replace <existing_overcloud_environment_files> with the list of environment files that are part of your existing deployment. 8.4. Accessing Ceph Dashboard To test that Ceph Dashboard is running correctly, complete the following verification steps to access it and check that the data it displays from the Ceph cluster is correct. Procedure Log in to the undercloud node as the stack user. Retrieve the dashboard admin login credentials: Retrieve the VIP address to access the Ceph Dashboard: Use a web browser to point to the front end VIP and access the Dashboard. Director configures and exposes the Dashboard on the provisioning network, so you can use the VIP that you retrieved in step 2 to access the dashboard directly on TCP port 8444. Ensure that the following conditions are met: The Web client host is layer 2 connected to the provisioning network. The provisioning network is properly routed or proxied, and it can be reached from the web client host. If these conditions are not met, you can still open a SSH tunnel to reach the Dashboard VIP on the overcloud: Replace <dashboard vip> with the IP address of the control plane VIP that you retrieved in step 3. Access the Dashboard by pointing your web browser to http://localhost:8444 . The default user that ceph-ansible creates is admin. You can retrieve the password in /var/lib/mistral/overcloud/ceph-ansible/group_vars/all.yml . Results You can access the Ceph Dashboard. The numbers and graphs that the Dashboard displays reflect the same cluster status that the CLI command, ceph -s , returns. For more information about the Red Hat Ceph Storage Dashboard, see the Red Hat Ceph Storage Administration Guide
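To confirm that the Dashboard reflects the real cluster state, you can compare its output with the CLI status from a node that runs the Ceph MON containers. This is a sketch rather than a documented step; the node address is a placeholder and the MON container name can differ in your deployment (check it with sudo podman ps):

ssh heat-admin@<controller_node_ip>
sudo podman exec ceph-mon-$(hostname -s) ceph -s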
[ "openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml", "parameter_defaults: ContainerImagePrepare: - push_destination: true set: ceph_alertmanager_image: ose-prometheus-alertmanager ceph_alertmanager_namespace: registry.redhat.io/openshift4 ceph_alertmanager_tag: v4.1 ceph_grafana_image: rhceph-3-dashboard-rhel7 ceph_grafana_namespace: registry.redhat.io/rhceph ceph_grafana_tag: 3 ceph_image: rhceph-4-rhel8 ceph_namespace: registry.redhat.io/rhceph ceph_node_exporter_image: ose-prometheus-node-exporter ceph_node_exporter_namespace: registry.redhat.io/openshift4 ceph_node_exporter_tag: v4.1 ceph_prometheus_image: ose-prometheus ceph_prometheus_namespace: registry.redhat.io/openshift4 ceph_prometheus_tag: v4.1 ceph_tag: latest", "openstack overcloud deploy --templates -e <existing_overcloud_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-dashboard.yaml", "parameter_defaults: CephDashboardAdminRO: false", "openstack overcloud deploy --templates -e <existing_overcloud_environment_files> -e ceph_dashboard_admin.yml", "[stack@undercloud ~]USD grep dashboard_admin_password /var/lib/mistral/overcloud/ceph-ansible/group_vars/all.yml", "[stack@undercloud-0 ~]USD grep dashboard_frontend /var/lib/mistral/overcloud/ceph-ansible/group_vars/mgrs.yml", "client_hostUSD ssh -L 8444:<dashboard vip>:8444 stack@<your undercloud>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_an_overcloud_with_containerized_red_hat_ceph/adding-ceph-dashboard
Chapter 1. Introduction to OpenShift Data Foundation Disaster Recovery
Chapter 1. Introduction to OpenShift Data Foundation Disaster Recovery Disaster recovery (DR) is the ability to recover and continue business-critical applications after natural or human-created disasters. It is a component of the overall business continuance strategy of any major organization, designed to preserve the continuity of business operations during major adverse events. The OpenShift Data Foundation DR capability enables DR across multiple Red Hat OpenShift Container Platform clusters, and is categorized as follows: Metro-DR Metro-DR ensures business continuity during the unavailability of a data center with no data loss. In the public cloud this is similar to protecting from an Availability Zone failure. Regional-DR Regional-DR ensures business continuity during the unavailability of a geographical region, accepting some loss of data in a predictable amount. In the public cloud this is similar to protecting from a region failure. Disaster Recovery with stretch cluster The stretch cluster solution ensures business continuity with no data loss, using disaster recovery protection based on OpenShift Data Foundation synchronous replication in a single OpenShift cluster that is stretched across two data centers with low latency and one arbiter node. Zone failure in Metro-DR and region failure in Regional-DR are usually expressed using the terms Recovery Point Objective (RPO) and Recovery Time Objective (RTO). RPO is a measure of how frequently you take backups or snapshots of persistent data. In practice, the RPO indicates the amount of data that will be lost or need to be reentered after an outage. For example, an RPO of five minutes means that, at most, the most recent five minutes of data can be lost. RTO is the amount of downtime a business can tolerate. The RTO answers the question, "How long can it take for our system to recover after we are notified of a business disruption?" The intent of this guide is to detail the disaster recovery steps and commands necessary to fail over an application from one OpenShift Container Platform cluster to another and then relocate the same application to the original primary cluster.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/introduction-to-odf-dr-solutions_common
Chapter 7. Creating the data plane for dynamic routing
Chapter 7. Creating the data plane for dynamic routing The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet CRs to define groups of nodes with different configurations and roles. You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR: Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane. Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process. Note You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR. Important Currently, in dynamic routing environments, there is a limitation where the RHOSO control plane cannot be distributed. For this reason, you must have dedicated Networker nodes that host the OVN gateway chassis. This limitation will be solved in a future RHOSO release. For more information, see OSPRH-661 . To create and deploy a data plane, you must perform the following tasks: Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes. Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane. Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs. The following procedures create simple node sets, one with pre-provisioned nodes, and one with bare-metal nodes that must be provisioned during the node set deployment. Use these procedures to set up an initial environment that you can test, before adding the customizations that your production environment requires. You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information about how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide. 7.1. Prerequisites A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane . You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges. 7.2. Creating the data plane secrets The data plane requires several Secret custom resources (CRs) to operate. The Secret CRs are used by the data plane nodes for the following functionality: To enable secure access between nodes: You must generate an SSH key and create an SSH key Secret CR for each key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each node set in your data plane. You must generate an SSH key and create an SSH key Secret CR for each key to enable migration of instances between Compute nodes. 
To register the operating system of the nodes that are not registered to the Red Hat Customer Portal. To enable repositories for the nodes. To provide access to libvirt. Prerequisites Pre-provisioned nodes are configured with an SSH public key in the USDHOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For information, see Configuring reserved user and group IDs in the RHEL Configuring basic system settings guide. Procedure For unprovisioned nodes, create the SSH key pair for Ansible: Replace <key_file_name> with the name to use for the key pair. Create the Secret CR for Ansible and apply it to the cluster: Replace <key_file_name> with the name and location of your SSH key pair file. Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane. Create the SSH key pair for instance migration: Create the Secret CR for migration and apply it to the cluster: Create a file on your workstation named secret_subscription.yaml that contains the subscription-manager credentials for registering the operating system of the nodes that are not registered to the Red Hat Customer Portal: Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string: Tip If you don't want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password. Create the Secret CR: Create a Secret CR that contains the Red Hat registry credentials: Replace <username> and <password> with your Red Hat registry username and password credentials. For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts . Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret: Replace <base64_password> with a base64-encoded string with maximum length 63 characters. You can use the following command to generate a base64-encoded password: Tip If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password. Create the Secret CR: Verify that the Secret CRs are created: 7.3. Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes for dynamic routing To configure the data plane for dynamic routing in your Red Hat OpenStack Services on OpenShift (RHOSO) environment with pre-provisioned nodes, create an OpenStackDataPlaneNodeSet CR for Compute nodes and an OpenStackDataPlaneNodeSet CR for Networker nodes. The Networker nodes contain the OVN gateway chassis. 7.3.1. Creating an OpenStackDataPlaneNodeSet CR for Compute nodes using pre-provisioned nodes Define an OpenStackDataPlaneNodeSet custom resource (CR) for the logical grouping of pre-provisioned nodes in your data plane that are Compute nodes. You can define as many Compute node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1 . If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide. 
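As a reference for the secret-creation steps in the data plane secrets procedure above, the following is a minimal sketch of the Ansible SSH key commands. The key file name is a placeholder, the openstack namespace and the default secret name are assumed, and the secret data keys shown should be verified against your installed operator version:

ssh-keygen -f ./dataplane-ansible-key -N "" -t rsa -b 4096
oc create secret generic dataplane-ansible-ssh-private-key-secret \
  --from-file=ssh-privatekey=dataplane-ansible-key \
  --from-file=ssh-publickey=dataplane-ansible-key.pub \
  --from-file=authorized_keys=dataplane-ansible-key.pub \
  -n openstack
# Include the authorized_keys entry only for node sets that are provisioned
# during data plane creation, as noted in the procedure above.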
Important Currently, in dynamic routing environments, there is a limitation where the RHOSO control plane cannot be distributed. For this reason, you must have dedicated Networker nodes that host the OVN gateway chassis. This limitation will be solved in a future RHOSO release. For more information, see OSPRH-661 . You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodeTemplate.nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate . Tip For an example OpenStackDataPlaneNodeSet CR that a node set from pre-provisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes . Procedure Create a file on your workstation named openstack_compute_node_set.yaml to define the OpenStackDataPlaneNodeSet CR: 1 The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set. 2 Optional: A list of environment variables to pass to the pod. Connect the Compute nodes on the data plane to the control plane network: Specify that the nodes in this set are pre-provisioned: Add the SSH key secret that you created to enable Ansible to connect to the Compute nodes on the data plane: Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets , for example, dataplane-ansible-ssh-private-key-secret . Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce . Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO and the ansible-runner creates a FIFO file to write to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment . Enable persistent logging for the data plane nodes: Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster. Specify the management network: Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts . 1 The user associated with the secret you created in Creating the data plane secrets . 2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/ . For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273 . For information about how to log into registry.redhat.io , see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6 . Add the network configuration template to apply to your Compute nodes. 
The following example applies the single NIC VLANs network configuration to the data plane nodes: 1 Update the nic1 to the MAC address assigned to the NIC to use for network configuration on the Compute node. For alternative templates, see roles/edpm_network_config/templates . For more information about data plane network configuration, see Customizing data plane networks in Configuring networking services . Add the common configuration for the set of Compute nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration: Example edpm_frr_bgp_ipv4_src_network: bgpmainnet edpm_frr_bgp_neighbor_password: f00barZ edpm_frr_bgp_uplinks: - nic3 - nic4 edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}' For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties for dynamic routing . Define each node in this node set: 1 The node definition reference, for example, edpm-compute-0 . Each node in the node set must have a node definition. 2 Defines the IPAM and the DNS records for the node. 3 Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR. 4 Node-specific Ansible variables that customize the node. Note Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section. You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node. Many ansibleVars include edpm in the name, which stands for "External Data Plane Management". For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties . In the services section, ensure that the frr and ovn-bgp-agent services are included: Example Save the openstack_compute_node_set.yaml definition file. Create the data plane resources: Verify that the data plane resources have been created by confirming that the status is SetupReady : When the status is SetupReady the command returns a condition met message, otherwise it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states . Verify that the Secret resource was created for the node set: Verify the services were created: 7.3.2. Creating an OpenStackDataPlaneNodeSet CR for Networker nodes using pre-provisioned nodes Define an OpenStackDataPlaneNodeSet custom resource (CR) for the logical grouping of pre-provisioned nodes in your data plane that are Networker nodes. You can define as many Networker node sets as necessary for your deployment. Important Currently, in dynamic routing environments, there is a limitation where the RHOSO control plane cannot be distributed. For this reason, you must have dedicated Networker nodes that host the OVN gateway chassis. This limitation will be solved in a future RHOSO release. For more information, see OSPRH-661 . You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodeTemplate.nodes field for node-specific properties. 
Node-specific configurations override the inherited values from the nodeTemplate . Tip For an example OpenStackDataPlaneNodeSet CR that a node set from pre-provisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes . Procedure Create a file on your workstation named openstack_networker_node_set.yaml to define the OpenStackDataPlaneNodeSet CR: 1 The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set. 2 Optional: A list of environment variables to pass to the pod. Connect the Networker nodes on the data plane to the control plane network: Specify that the nodes in this set are pre-provisioned: Add the SSH key secret that you created to enable Ansible to connect to the Networker nodes on the data plane: Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets , for example, dataplane-ansible-ssh-private-key-secret . Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce . Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO and the ansible-runner creates a FIFO file to write to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment . Enable persistent logging for the data plane nodes: Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster. Specify the management network: Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts . 1 The user associated with the secret you created in Creating the data plane secrets . 2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/ . For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273 . For information about how to log into registry.redhat.io , see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6 . Add the network configuration template to apply to your Networker nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes: 1 Update the nic1 to the MAC address assigned to the NIC to use for network configuration on the Compute node. For alternative templates, see roles/edpm_network_config/templates . For more information about data plane network configuration, see Customizing data plane networks in the Configuring network services guide. Add the common configuration for the set of Networker nodes in this group under the nodeTemplate section. 
Each node in this OpenStackDataPlaneNodeSet inherits this configuration: Example edpm_frr_bgp_ipv4_src_network: bgpmainnet edpm_frr_bgp_neighbor_password: f00barZ edpm_frr_bgp_uplinks: - nic3 - nic4 edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}' For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties for dynamic routing . Define each node in this node set: 1 The node definition reference, for example, edpm-networker-0 . Each node in the node set must have a node definition. 2 Defines the IPAM and the DNS records for the node. 3 Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR. 4 Node-specific Ansible variables that customize the node. Note Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section. You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node. Many ansibleVars include edpm in the name, which stands for "External Data Plane Management". For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties . In the services section, ensure that the frr and ovn-bgp-agent services are included. Note Do not include the ssh-known-hosts service in this node set because it has already been included in the Compute node set CR. This service is included in only one node set CR because it is a global service. Example Save the openstack_networker_node_set.yaml definition file. Create the Networker node resources for the data plane: Verify that the data plane resources have been created by confirming that the status is SetupReady : When the status is SetupReady the command returns a condition met message, otherwise it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states . Verify that the Secret resource was created for the node set: Verify the services were created: 7.3.3. Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes for dynamic routing The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Compute nodes with some node-specific configuration. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set. 7.4. Creating a data plane with unprovisioned nodes for dynamic routing Configuring the data plane for dynamic routing in your Red Hat OpenStack Services on OpenShift (RHOSO) environment using unprovisioned nodes, consists of: Creating a BareMetalHost custom resource (CR) for each bare-metal data plane node. Defining an OpenStackDataPlaneNodeSet CR for Compute nodes and an OpenStackDataPlaneNodeSet CR for Networker nodes. The Networker nodes contain the OVN gateway chassis. 
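For reference, the following is a minimal sketch of the kind of BareMetalHost CR that the procedure below creates, assuming the Redfish virtual media provisioning method; the BMC address, MAC address, label, and credential secret name are hypothetical placeholders:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0
  namespace: openshift-machine-api
  labels:
    app: openstack                      # example label matched by bmhLabelSelector
spec:
  online: true
  bootMACAddress: 00:11:22:33:44:55     # placeholder MAC address of the provisioning NIC
  bmc:
    address: redfish-virtualmedia://192.0.2.10/redfish/v1/Systems/1   # placeholder BMC URL
    credentialsName: edpm-compute-0-bmc-secret                        # Secret with BMC credentials
    disableCertificateVerification: true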
For more information about provisioning bare-metal nodes, see Planning provisioning for bare-metal data plane nodes in Planning your deployment . Prerequisites Cluster Baremetal Operator (CBO) is installed and configured for provisioning. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment . To provision data plane nodes with PXE network boot, a bare-metal provisioning network must be available in your Red Hat OpenShift Container Platform (RHOCP) cluster. Note You do not need a provisioning network to provision nodes with virtual media. A Provisioning CR is available in RHOCP. For more information about creating a Provisioning CR, see Configuring a provisioning resource to scale user-provisioned clusters in the Red Hat OpenShift Container Platform (RHOCP) Installing on bare metal guide. 7.4.1. Creating the BareMetalHost CRs for unprovisioned nodes You must create a BareMetalHost custom resource (CR) for each bare-metal data plane node. At a minimum, you must provide the data required to add the bare-metal data plane node on the network so that the remaining installation steps can access the node and perform the configuration. Note If you use the ctlplane interface for provisioning, to avoid the kernel rp_filter logic from dropping traffic, configure the DHCP service to use an address range different from the ctlplane address range. This ensures that the return traffic remains on the machine network interface. Procedure The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces: If you are using virtual media boot for bare-metal data plane nodes and the nodes are not connected to a provisioning network, you must update the Provisioning CR to enable virtualMediaViaExternalNetwork , which enables bare-metal connectivity through the external network: Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal data plane node in the node set: Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string: Tip If you don't want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password. Create a file named bmh_nodes.yaml on your workstation, that defines the BareMetalHost CR for each bare-metal data plane node. The following example creates a BareMetalHost CR with the provisioning method Redfish virtual media: 1 The URL for communicating with the node's BMC controller. For information on BMC addressing for other boot methods, see BMC addressing in the RHOCP Deploying installer-provisioned clusters on bare metal guide. 2 The name of the Secret CR you created in the step for accessing the BMC of the node. For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Postinstallation configuration guide. Create the BareMetalHost resources: Verify that the BareMetalHost resources have been created and are in the Available state: 7.4.2. Creating an OpenStackDataPlaneNodeSet CR for Compute nodes using unprovisioned nodes Define an OpenStackDataPlaneNodeSet custom resource (CR) for the logical grouping of unprovisioned nodes in your data plane that are Compute nodes. 
You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1 . If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide. Important Currently, in dynamic routing environments, there is a limitation where the RHOSO control plane cannot be distributed. For this reason, you must have dedicated Networker nodes that host the OVN gateway chassis. This limitation will be solved in a future RHOSO release. For more information, see OSPRH-661 . You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodeTemplate.nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate . Tip For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes . Prerequisites A BareMetalHost CR is created for each unprovisioned node that you want to include in each node set. For more information, see Creating the BareMetalHost CRs for unprovisioned nodes . Procedure Create a file on your workstation named openstack_unprovisioned_compute_node_set.yaml to define the OpenStackDataPlaneNodeSet CR: 1 The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), must start and end with an alphanumeric character, and must have a maximum length of 20 characters. Update the name in this example to a name that reflects the nodes in the set. 2 Optional: A list of environment variables to pass to the pod. Connect the Compute nodes data plane to the control plane network: Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource: Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource: Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openshift-machine-api . Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin . Replace <bmh_label> with the label defined in the corresponding BareMetalHost CR for the node, for example, openstack . Replace <interface> with the control plane interface the node connects to, for example, enp6s0 . Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes: Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets , for example, dataplane-ansible-ssh-private-key-secret . Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce . Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO and the ansible-runner creates a FIFO file to write to store logs. 
For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment . Enable persistent logging for the data plane nodes: Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster. Specify the management network: Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts . 1 The user associated with the secret you created in Creating the data plane secrets . 2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/ . For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273 . For information about how to log into registry.redhat.io , see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6 . Add the network configuration template to apply to your Compute nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes: 1 Update the nic1 to the MAC address assigned to the NIC to use for network configuration on the Compute node. 2 Set the edpm_network_config_update variable to true to apply any updates you make to the network configuration after the node set is deployed. Note You must reset the edpm_network_config_update variable to false after the updated network configuration is applied in a new OpenStackDataPlaneDeployment CR, otherwise the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service. For more information about data plane network configuration, see Customizing data plane networks in Configuring network services . Add the common configuration for the set of Compute nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration: Example edpm_frr_bgp_ipv4_src_network: bgpmainnet edpm_frr_bgp_neighbor_password: f00barZ edpm_frr_bgp_uplinks: - nic3 - nic4 edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}' For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties for dynamic routing . Define each node in this node set: 1 The node definition reference, for example, edpm-compute-0 . Each node in the node set must have a node definition. 2 Defines the IPAM and the DNS records for the node. 3 Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR. 4 Node-specific Ansible variables that customize the node. 5 Optional: The BareMetalHost CR label that selects the BareMetalHost CR for the data plane node. The label can be any label that is defined for the BareMetalHost CR. The label is used with the bmhLabelSelector label configured in the baremetalSetTemplate definition to select the BareMetalHost for the node. Note Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. 
Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section. You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node. Many ansibleVars include edpm in the name, which stands for "External Data Plane Management". For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties . In the services section, ensure that the frr and ovn-bgp-agent services are included: Example Save the openstack_unprovisioned_compute_node_set.yaml definition file. Create the data plane resources: Verify that the data plane resources have been created by confirming that the status is SetupReady : When the status is SetupReady the command returns a condition met message, otherwise it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states . Verify that the Secret resource was created for the node set: Verify that the nodes have transitioned to the provisioned state: Verify that the services were created: 7.4.3. Creating an OpenStackDataPlaneNodeSet CR for Networker nodes using unprovisioned nodes Define an OpenStackDataPlaneNodeSet custom resource (CR) for the logical grouping of pre-provisioned nodes in your data plane that are Networker nodes. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Important Currently, in dynamic routing environments, there is a limitation where the RHOSO control plane cannot be distributed. For this reason, you must have dedicated Networker nodes that host the OVN gateway chassis. This limitation will be solved in a future RHOSO release. For more information, see OSPRH-661 . You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodeTemplate.nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate . Tip For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes . Prerequisites A BareMetalHost CR is created for each unprovisioned node that you want to include in each node set. For more information, see Creating the BareMetalHost CRs for unprovisioned nodes . Procedure Create a file on your workstation named openstack_unprovisioned_networker_node_set.yaml to define the OpenStackDataPlaneNodeSet CR: 1 The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), must start and end with an alphanumeric character, and must have a maximum length of 20 characters. Update the name in this example to a name that reflects the nodes in the set. 2 Optional: A list of environment variables to pass to the pod. 
Connect the Networker nodes on the data plane to the control plane network: Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource: Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource: Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openshift-machine-api . Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin . Replace <bmh_label> with the label defined in the corresponding BareMetalHost CR for the node, for example, openstack . Replace <interface> with the control plane interface the node connects to, for example, enp6s0 . Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes: Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets , for example, dataplane-ansible-ssh-private-key-secret . Create a Persistent Volume Claim (PVC) in the openstack namespace on your RHOCP cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce . For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment . Enable persistent logging for the data plane nodes: Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster. Specify the management network: Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts . 1 The user associated with the secret you created in Creating the data plane secrets . 2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/ . For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273 . For information about how to log into registry.redhat.io , see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6 . Add the network configuration template to apply to your Networker nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes: 1 Update the nic1 to the MAC address assigned to the NIC to use for network configuration on the Networker node. 2 Set the edpm_network_config_update variable to true to apply any updates you make to the network configuration after the node set is deployed. Note You must reset the edpm_network_config_update variable to false after the updated network configuration is applied in a new OpenStackDataPlaneDeployment CR, otherwise the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service. Add the common configuration for the set of Networker nodes in this group under the nodeTemplate section. 
Each node in this OpenStackDataPlaneNodeSet inherits this configuration: Example edpm_frr_bgp_ipv4_src_network: bgpmainnet edpm_frr_bgp_neighbor_password: f00barZ edpm_frr_bgp_uplinks: - nic3 - nic4 edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}' For more information about data plane network configuration, see Customizing data plane networks in Configuring network services . Define each node in this node set: 1 The node definition reference, for example, edpm-compute-0 . Each node in the node set must have a node definition. 2 Defines the IPAM and the DNS records for the node. 3 Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR. 4 Node-specific Ansible variables that customize the node. 5 Optional: The BareMetalHost CR label that selects the BareMetalHost CR for the data plane node. The label can be any label that is defined for the BareMetalHost CR. The label is used with the bmhLabelSelector label configured in the baremetalSetTemplate definition to select the BareMetalHost for the node. Note Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section. You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node. Many ansibleVars include edpm in the name, which stands for "External Data Plane Management". For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties . In the services section, ensure that the frr and ovn-bgp-agent services are included. Note Do not include the ssh-known-hosts service in this node set because it has already been included in the Compute node set CR. This service is included in only one node set CR because it is a global service. Example Save the openstack_unprovisioned_networker_node_set.yaml definition file. Create the data plane resources: Verify that the data plane resources have been created by confirming that the status is SetupReady : When the status is SetupReady the command returns a condition met message, otherwise it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states . Verify that the Secret resource was created for the node set: Verify that the nodes have transitioned to the provisioned state: Verify that the services were created: 7.4.4. Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes for dynamic routing The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Compute nodes with some node-specific configuration. The unprovisioned Compute nodes are provisioned when the node set is created. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set. 7.5. 
OpenStackDataPlaneNodeSet CR spec properties for dynamic routing The following sections detail the OpenStackDataPlaneNodeSet CR spec properties you can configure. 7.5.1. nodeTemplate Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet . You can override these common attributes in the definition for each individual node. Table 7.1. nodeTemplate properties Field Description ansibleSSHPrivateKeySecret Name of the private SSH key secret that contains the private SSH key for connecting to nodes. Secret name format: Secret.data.ssh-privatekey For more information, see Creating an SSH authentication secret . Default: dataplane-ansible-ssh-private-key-secret edpm_frr_bgp_ipv4_src_network The main IPv4 network used by the OVN BGP agent to communicate with FRRouting (FRR) on the RHOSO data plane. edpm_frr_bgp_ipv6_src_network The main IPv6 network used by the OVN BGP agent to communicate with FRR on the RHOSO data plane. edpm_frr_bgp_neighbor_password The password used to authenticate with the BGP peer. edpm_frr_bgp_uplinks The list of network interfaces used to communicate with the respective BGP peers, for example, nic3 and nic4 . edpm_ovn_bgp_agent_expose_tenant_networks When set to true , tenant networks are exposed to the OVN BGP agent. The default is false . edpm_ovn_encap_ip The IP address that overrides the default IP address used to establish Geneve tunnels between Compute nodes and OVN controllers. The default value for edpm_ovn_encap_ip uses the tenant network IP address that is assigned to the Compute node. In the following example, an IP address from a network called bgpmainnet overrides the default. The bgpmainnet network is configured on the loopback interface, the interface that BGP advertises: edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}' . managementNetwork Name of the network to use for management (SSH/Ansible). Default: ctlplane networks Network definitions for the OpenStackDataPlaneNodeSet . ansible Ansible configuration options. For more information, see ansible properties . extraMounts The files to mount into an Ansible Execution Pod. userData UserData configuration for the OpenStackDataPlaneNodeSet . networkData NetworkData configuration for the OpenStackDataPlaneNodeSet . 7.5.2. nodes Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet . Overrides the common attributes defined in the nodeTemplate . Table 7.2. nodes properties Field Description ansible Ansible configuration options. For more information, see ansible properties . edpm_frr_bgp_peers 100.64.0.5 100.65.0.5 edpm_ovn_bgp_agent_local_ovn_peer_ips 100.64.0.5 100.65.0.5 extraMounts The files to mount into an Ansible Execution Pod. hostName The node name. managementNetwork Name of the network to use for management (SSH/Ansible). networkData NetworkData configuration for the node. networks Instance networks. userData Node-specific user data. 7.5.3. ansible Defines the group of Ansible configuration options. Table 7.3. ansible properties Field Description ansibleUser The user associated with the secret you created in Creating the data plane secrets . Default: rhel-user ansibleHost SSH host for the Ansible connection. ansiblePort SSH port for the Ansible connection. ansibleVars The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each edpm-ansible role.
For a complete list of Ansible variables by role, see the edpm-ansible documentation . Note The ansibleVars parameters that you can configure for an OpenStackDataPlaneNodeSet CR are determined by the services defined for the OpenStackDataPlaneNodeSet . The OpenStackDataPlaneService CRs call the Ansible playbooks from the edpm-ansible playbook collection , which include the roles that are executed as part of the data plane service. ansibleVarsFrom A list of sources to populate Ansible variables from. Values defined by an AnsibleVars with a duplicate key take precedence. For more information, see ansibleVarsFrom properties . 7.5.4. ansibleVarsFrom Defines the list of sources to populate Ansible variables from. Table 7.4. ansibleVarsFrom properties Field Description prefix An optional identifier to prepend to each key in the ConfigMap . Must be a C_IDENTIFIER. configMapRef The ConfigMap CR to select the ansibleVars from. secretRef The Secret CR to select the ansibleVars from. 7.6. Deploying the data plane for dynamic routing You use the OpenStackDataPlaneDeployment CRD to configure the services on the data plane nodes and deploy the data plane for dynamic routing in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically execute the Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR. Create an OpenStackDataPlaneDeployment (CR) that deploys each of your OpenStackDataPlaneNodeSet CRs. Procedure Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR: 1 The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment. Add the OpenStackDataPlaneNodeSet CRs that you have created for the Compute and Networker nodes: Save the openstack_data_plane_deploy.yaml deployment file. Deploy the data plane: You can view the Ansible logs while the deployment executes: If the oc logs command returns an error similar to the following error, increase the --max-log-requests value: Verify that the data plane is deployed: For information about the meaning of the returned status, see Data plane conditions and states . If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment . Map the Compute nodes to the Compute cell that they are connected to: If you did not create additional cells, this command maps the Compute nodes to cell1 . Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane: 7.7. Data plane conditions and states Each data plane resource has a series of conditions within their status subresource that indicates the overall state of the resource, including its deployment progress. 
For an OpenStackDataPlaneNodeSet , until an OpenStackDataPlaneDeployment has been started and finished successfully, the Ready condition is False . When the deployment succeeds, the Ready condition is set to True . A subsequent deployment sets the Ready condition to False until the deployment succeeds, when the Ready condition is set to True . Table 7.5. OpenStackDataPlaneNodeSet CR conditions Condition Description Ready "True": The OpenStackDataPlaneNodeSet CR is successfully deployed. "False": The deployment is not yet requested or has failed, or there are other failed conditions. SetupReady "True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed. DeploymentReady "True": The NodeSet has been successfully deployed. InputReady "True": The required inputs are available and ready. NodeSetDNSDataReady "True": DNSData resources are ready. NodeSetIPReservationReady "True": The IPSet resources are ready. NodeSetBaremetalProvisionReady "True": Bare-metal nodes are provisioned and ready. Table 7.6. OpenStackDataPlaneNodeSet status fields Status field Description Deployed "True": The OpenStackDataPlaneNodeSet CR is successfully deployed. "False": The deployment is not yet requested or has failed, or there are other failed conditions. DNSClusterAddresses CtlplaneSearchDomain Table 7.7. OpenStackDataPlaneDeployment CR conditions Condition Description Ready "True": The data plane is successfully deployed. "False": The data plane deployment failed, or there are other failed conditions. DeploymentReady "True": The data plane is successfully deployed. InputReady "True": The required inputs are available and ready. <NodeSet> Deployment Ready "True": The deployment has succeeded for the named NodeSet , indicating all services for the NodeSet have succeeded. <NodeSet> <Service> Deployment Ready "True": The deployment has succeeded for the named NodeSet and Service . Each <NodeSet> <Service> Deployment Ready specific condition is set to "True" as that service completes successfully for the named NodeSet . Once all services are complete for a NodeSet , the <NodeSet> Deployment Ready condition is set to "True". The service conditions indicate which services have completed their deployment, or which services failed and for which NodeSets . Table 7.8. OpenStackDataPlaneDeployment status fields Status field Description Deployed "True": The data plane is successfully deployed. All Services for all NodeSets have succeeded. "False": The deployment is not yet requested or has failed, or there are other failed conditions. Table 7.9. OpenStackDataPlaneService CR conditions Condition Description Ready "True": The service has been created and is ready for use. "False": The service has failed to be created. 7.8. Troubleshooting data plane creation and deployment To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set. 7.8.1. Checking the job condition message for a service Each data plane deployment in the environment has associated services. 
Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly. Procedure Determine the name and status of all deployments: The following example output shows two deployments currently in progress: Determine the name and status of all services and their job condition: The following example output shows all services and their job condition for all current deployments: For information on the job condition messages, see Job condition messages . Filter for the name and service for a specific deployment: Replace <deployment_name> with the name of the deployment to use to filter the services list. The following example filters the list to only show services and their job condition for the data-plane-deploy deployment: 7.8.1.1. Job condition messages AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried: Job not started : The job has not started. Job not found : The job could not be found. Job is running : The job is currently running. Job complete : The job execution is complete. Job error occurred <error_message> : The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message. To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service> . For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm . 7.8.2. Checking the logs for a node set You can access the logs for a node set to check for deployment issues. Procedure Retrieve pods with the OpenStackAnsibleEE label: SSH into the pod you want to check: Pod that is running: Pod that is not running: List the directories in the /runner/artifacts mount: View the stdout for the required artifact:
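In addition to reading the per-service job logs described above, you can often find the failure faster by inspecting the status conditions of a node set or deployment directly. The following commands are a minimal sketch that relies only on standard oc output filtering; the resource names openstack-networker-nodes and data-plane-deploy are the example names used earlier in this chapter, so substitute your own names and namespace:

# Print the type, status, and message of every condition reported by a node set
oc get openstackdataplanenodeset openstack-networker-nodes -n openstack -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'

# Block until the deployment reports Ready, or fail after 30 minutes
oc wait openstackdataplanedeployment data-plane-deploy -n openstack --for condition=Ready --timeout=30m

The first command surfaces the same condition types listed in the tables above, so a False entry points you at the service or setup step to investigate next.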
[ "ssh-keygen -f <key_file_name> -N \"\" -t rsa -b 4096", "oc create secret generic dataplane-ansible-ssh-private-key-secret --save-config --dry-run=client --from-file=ssh-privatekey=<key_file_name> --from-file=ssh-publickey=<key_file_name>.pub [--from-file=authorized_keys=<key_file_name>.pub] -n openstack -o yaml | oc apply -f -", "ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''", "oc create secret generic nova-migration-ssh-key --save-config --from-file=ssh-privatekey=nova-migration-ssh-key --from-file=ssh-publickey=nova-migration-ssh-key.pub -n openstack -o yaml | oc apply -f -", "apiVersion: v1 kind: Secret metadata: name: subscription-manager namespace: openstack data: username: <base64_username> password: <base64_password>", "echo -n <string> | base64", "oc create -f secret_subscription.yaml -n openstack", "oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{\"registry.redhat.io\": {\"<username>\": \"<password>\"}}'", "apiVersion: v1 kind: Secret metadata: name: libvirt-secret namespace: openstack type: Opaque data: LibvirtPassword: <base64_password>", "echo -n <password> | base64", "oc apply -f secret_libvirt.yaml -n openstack", "oc describe secret dataplane-ansible-ssh-private-key-secret oc describe secret nova-migration-ssh-key oc describe secret subscription-manager oc describe secret redhat-registry oc describe secret libvirt-secret", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-compute-nodes 1 namespace: openstack spec: env: 2 - name: ANSIBLE_FORCE_COLOR value: \"True\"", "spec: networkAttachments: - ctlplane", "preProvisioned: true", "nodeTemplate: ansibleSSHPrivateKeySecret: <secret-key>", "nodeTemplate: extraMounts: - extraVolType: Logs volumes: - name: ansible-logs persistentVolumeClaim: claimName: <pvc_name> mounts: - name: ansible-logs mountPath: \"/runner/artifacts\"", "nodeTemplate: managementNetwork: ctlplane", "nodeTemplate: ansible: ansibleUser: cloud-admin 1 ansiblePort: 22 ansibleVarsFrom: - prefix: subscription_manager_ secretRef: name: subscription-manager - prefix: registry_ secretRef: name: redhat-registry ansibleVars: 2 edpm_bootstrap_command: | subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }} subscription-manager release --set=9.4 subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms edpm_bootstrap_release_version_package: []", "nodeTemplate: ansible: ansibleVars: edpm_network_config_os_net_config_mappings: edpm-compute-0: nic1: 52:54:04:60:55:22 1 neutron_physical_bridge_name: br-ex neutron_public_interface_name: eth0 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ 
min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} - type: interface name: nic3 use_dhcp: false addresses: - ip_netmask: {{ lookup('vars', 'bgpnet0_ip') }}/30 - type: interface name: nic4 use_dhcp: false addresses: - ip_netmask: {{ lookup('vars', 'bgpnet1_ip') }}/30 - type: interface name: lo addresses: - ip_netmask: {{ lookup('vars', 'bgpmainnet_ip') }}/32", "edpm_frr_bgp_ipv4_src_network: bgpmainnet edpm_frr_bgp_neighbor_password: f00barZ edpm_frr_bgp_uplinks: - nic3 - nic4 edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}'", "nodes: edpm-compute-0: 1 hostName: edpm-compute-0 networks: 2 - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.100 3 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.100 - name: storage subnetName: subnet1 fixedIP: 172.18.0.100 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.100 - name: BgpNet0 subnetName: subnet0 fixedIP: 100.64.0.2 - name: BgpNet1 subnetName: subnet0 fixedIP: 100.65.0.2 - name: BgpMainNet subnetName: subnet0 fixedIP: 172.30.0.2 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: 4 fqdn_internal_api: edpm-compute-0.example.com edpm-compute-1: hostName: edpm-compute-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.101 - name: storage subnetName: subnet1 fixedIP: 172.18.0.101 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.101 - name: BgpNet0 subnetName: subnet0 fixedIP: 100.64.1.2 - name: BgpNet1 subnetName: subnet0 fixedIP: 100.65.1.2 - name: BgpMainNet subnetName: subnet0 fixedIP: 172.30.1.2 ansible: ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-1.example.com", "services: - download-cache - bootstrap - configure-network - validate-network - frr - install-os - configure-os - ssh-known-hosts - run-os - reboot-os - install-certs - ovn - neutron-metadata - ovn-bgp-agent - libvirt - nova", "oc create --save-config -f openstack_compute_node_set.yaml -n openstack", "oc wait openstackdataplanenodeset openstack-compute-nodes --for condition=SetupReady --timeout=10m", "oc get secret | grep openstack-compute-nodes dataplanenodeset-openstack-compute-nodes Opaque 1 3m50s", "oc get openstackdataplaneservice -n openstack NAME AGE download-cache 46m bootstrap 46m configure-network 46m validate-network 46m frr 46m install-os 46m", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-networker-nodes 1 namespace: openstack spec: env: 2 - name: ANSIBLE_FORCE_COLOR value: \"True\"", "spec: networkAttachments: - ctlplane", "preProvisioned: true", "nodeTemplate: ansibleSSHPrivateKeySecret: <secret-key>", "nodeTemplate: extraMounts: - extraVolType: Logs volumes: - name: ansible-logs persistentVolumeClaim: claimName: <pvc_name> mounts: - name: ansible-logs mountPath: \"/runner/artifacts\"", "nodeTemplate: managementNetwork: ctlplane", "nodeTemplate: ansible: ansibleUser: cloud-admin 1 ansiblePort: 22 ansibleVarsFrom: - prefix: subscription_manager_ secretRef: 
name: subscription-manager - prefix: registry_ secretRef: name: redhat-registry ansibleVars: 2 edpm_bootstrap_command: | subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }} subscription-manager release --set=9.4 subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms edpm_bootstrap_release_version_package: []", "nodeTemplate: ansible: ansibleVars: edpm_network_config_os_net_config_mappings: edpm-networker-0: nic1: 52:54:04:60:55:22 1 neutron_physical_bridge_name: br-ex neutron_public_interface_name: eth0 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} - type: interface name: nic3 use_dhcp: false addresses: - ip_netmask: {{ lookup('vars', 'bgpnet0_ip') }}/30 - type: interface name: nic4 use_dhcp: false addresses: - ip_netmask: {{ lookup('vars', 'bgpnet1_ip') }}/30 - type: interface name: lo addresses: - ip_netmask: {{ lookup('vars', 'bgpmainnet_ip') }}/32", "edpm_frr_bgp_ipv4_src_network: bgpmainnet edpm_frr_bgp_neighbor_password: f00barZ edpm_frr_bgp_uplinks: - nic3 - nic4 edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}'", "nodes: edpm-networker-0: 1 hostName: edpm-networker-0 networks: 2 - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.100 3 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.100 - name: storage subnetName: subnet1 fixedIP: 172.18.0.100 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.100 - name: BgpNet0 subnetName: subnet0 fixedIP: 100.64.0.2 - name: BgpNet1 subnetName: subnet0 fixedIP: 100.65.0.2 - name: BgpMainNet subnetName: subnet0 fixedIP: 172.30.0.2 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: 4 fqdn_internal_api: edpm-networker-0.example.com edpm-networker-1: hostName: edpm-networker-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.101 - name: storage subnetName: subnet1 fixedIP: 172.18.0.101 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.101 - name: BgpNet0 subnetName: subnet0 fixedIP: 100.64.0.2 - name: BgpNet1 subnetName: subnet0 fixedIP: 100.65.0.2 - name: BgpMainNet subnetName: subnet0 fixedIP: 172.30.0.2 ansible: 
ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-networker-1.example.com", "services: - download-cache - bootstrap - configure-network - validate-network - frr - install-os - configure-os - run-os - reboot-os - install-certs - ovn - neutron-metadata - ovn-bgp-agent", "oc create --save-config -f openstack_networker_node_set.yaml -n openstack", "oc wait openstackdataplanenodeset openstack-networker-nodes --for condition=SetupReady --timeout=10m", "oc get secret | grep openstack-networker-nodes dataplanenodeset-openstack-networker-nodes Opaque 1 3m50s", "oc get openstackdataplaneservice -n openstack NAME AGE download-cache 46m bootstrap 46m configure-network 46m validate-network 46m frr 46m install-os 46m", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-compute-nodes namespace: openstack spec: env: - name: ANSIBLE_FORCE_COLOR value: \"True\" networkAttachments: - ctlplane preProvisioned: true nodeTemplate: ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret extraMounts: - extraVolType: Logs volumes: - name: ansible-logs persistentVolumeClaim: claimName: <pvc_name> mounts: - name: ansible-logs mountPath: \"/runner/artifacts\" managementNetwork: ctlplane ansible: ansibleUser: cloud-admin ansiblePort: 22 ansibleVarsFrom: - prefix: subscription_manager_ secretRef: name: subscription-manager - prefix: registry_ secretRef: name: redhat-registry ansibleVars: edpm_bootstrap_command: | subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }} subscription-manager release --set=9.4 subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms edpm_bootstrap_release_version_package: [] edpm_network_config_os_net_config_mappings: edpm-compute-0: nic1: 52:54:04:60:55:22 neutron_physical_bridge_name: br-ex neutron_public_interface_name: eth0 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} nodes: edpm-compute-0: hostName: edpm-compute-0 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.100 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.100 - name: storage subnetName: subnet1 fixedIP: 172.18.0.100 - name: tenant 
subnetName: subnet1 fixedIP: 172.19.0.100 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-0.example.com edpm-compute-1: hostName: edpm-compute-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.101 - name: storage subnetName: subnet1 fixedIP: 172.18.0.101 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.101 ansible: ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-1.example.com", "oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"watchAllNamespaces\": true }}'", "oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"virtualMediaViaExternalNetwork\": true }}'", "apiVersion: v1 kind: Secret metadata: name: edpm-compute-0-bmc-secret namespace: openstack type: Opaque data: username: <base64_username> password: <base64_password>", "echo -n <string> | base64", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: edpm-compute-0 namespace: openstack labels: app: openstack workload: compute spec: bmc: address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d 1 credentialsName: edpm-compute-0-bmc-secret 2 bootMACAddress: 00:c7:e4:a7:e7:f3 bootMode: UEFI online: false", "oc create -f bmh_nodes.yaml", "oc get bmh NAME STATE CONSUMER ONLINE ERROR AGE edpm-compute-0 Available openstack-edpm true 2d21h edpm-compute-1 Available openstack-edpm true 2d21h", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-compute-nodes 1 namespace: openstack spec: tlsEnabled: true env: 2 - name: ANSIBLE_FORCE_COLOR value: \"True\"", "spec: networkAttachments: - ctlplane", "preProvisioned: false", "baremetalSetTemplate: deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret bmhNamespace: <bmh_namespace> cloudUserName: <ansible_ssh_user> bmhLabelSelector: app: <bmh_label> ctlplaneInterface: <interface> dnsSearchDomains: - osptest.openstack.org", "nodeTemplate: ansibleSSHPrivateKeySecret: <secret-key>", "nodeTemplate: extraMounts: - extraVolType: Logs volumes: - name: ansible-logs persistentVolumeClaim: claimName: <pvc_name> mounts: - name: ansible-logs mountPath: \"/runner/artifacts\"", "nodeTemplate: managementNetwork: ctlplane", "nodeTemplate: ansible: ansibleUser: cloud-admin 1 ansiblePort: 22 ansibleVarsFrom: - prefix: subscription_manager_ secretRef: name: subscription-manager - secretRef: name: redhat-registry ansibleVars: 2 edpm_bootstrap_command: | subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }} subscription-manager release --set=9.4 subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms edpm_bootstrap_release_version_package: []", "nodeTemplate: ansible: ansibleVars: edpm_network_config_os_net_config_mappings: edpm-compute-0: nic1: 52:54:04:60:55:22 1 edpm-compute-1: nic1: 52:54:04:60:55:22 neutron_physical_bridge_name: br-ex neutron_public_interface_name: eth0 edpm_network_config_update: false 2 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} 
{% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} - type: interface name: nic3 use_dhcp: false addresses: - ip_netmask: {{ lookup('vars', 'bgpnet0_ip') }}/30 - type: interface name: nic4 use_dhcp: false addresses: - ip_netmask: {{ lookup('vars', 'bgpnet1_ip') }}/30 - type: interface name: lo addresses: - ip_netmask: {{ lookup('vars', 'bgpmainnet_ip') }}/32", "edpm_frr_bgp_ipv4_src_network: bgpmainnet edpm_frr_bgp_neighbor_password: f00barZ edpm_frr_bgp_uplinks: - nic3 - nic4 edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}'", "nodes: edpm-compute-0: 1 hostName: edpm-compute-0 networks: 2 - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.100 3 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.100 - name: storage subnetName: subnet1 fixedIP: 172.18.0.100 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.100 - name: BgpNet0 subnetName: subnet0 fixedIP: 100.64.0.2 - name: BgpNet1 subnetName: subnet0 fixedIP: 100.65.0.2 - name: BgpMainNet subnetName: subnet0 fixedIP: 172.30.0.2 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: 4 fqdn_internal_api: edpm-compute-0.example.com bmhLabelSelector: 5 nodeName: edpm-compute-0 edpm-compute-1: hostName: edpm-compute-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.101 - name: storage subnetName: subnet1 fixedIP: 172.18.0.101 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.101 - name: BgpNet0 subnetName: subnet0 fixedIP: 100.64.1.2 - name: BgpNet1 subnetName: subnet0 fixedIP: 100.65.1.2 - name: BgpMainNet subnetName: subnet0 fixedIP: 172.30.1.2 ansible: ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-1.example.com bmhLabelSelector: nodeName: edpm-compute-1", "services: - download-cache - bootstrap - configure-network - validate-network - frr - install-os - configure-os - ssh-known-hosts - run-os - reboot-os - install-certs - ovn - neutron-metadata - ovn-bgp-agent - libvirt - nova", "oc create --save-config -f openstack_unprovisioned_compute_node_set.yaml -n openstack", "oc wait openstackdataplanenodeset openstack-compute-nodes --for condition=SetupReady --timeout=10m", "oc get secret -n openstack | grep openstack-compute-nodes dataplanenodeset-openstack-compute-nodes Opaque 1 3m50s", "oc get bmh NAME STATE CONSUMER ONLINE ERROR AGE edpm-compute-0 provisioned openstack-compute-nodes true 3d21h", "oc get openstackdataplaneservice -n openstack NAME AGE download-cache 8m40s bootstrap 8m40s 
configure-network 8m40s validate-network 8m40s frr 8m40s install-os 8m40s", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-networker-nodes 1 namespace: openstack spec: tlsEnabled: true env: 2 - name: ANSIBLE_FORCE_COLOR value: \"True\"", "spec: networkAttachments: - ctlplane", "preProvisioned: false", "baremetalSetTemplate: deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret bmhNamespace: <bmh_namespace> cloudUserName: <ansible_ssh_user> bmhLabelSelector: app: <bmh_label> ctlplaneInterface: <interface> dnsSearchDomains: - osptest.openstack.org", "nodeTemplate: ansibleSSHPrivateKeySecret: <secret-key>", "nodeTemplate: extraMounts: - extraVolType: Logs volumes: - name: ansible-logs persistentVolumeClaim: claimName: <pvc_name> mounts: - name: ansible-logs mountPath: \"/runner/artifacts\"", "nodeTemplate: managementNetwork: ctlplane", "nodeTemplate: ansible: ansibleUser: cloud-admin 1 ansiblePort: 22 ansibleVarsFrom: - prefix: subscription_manager_ secretRef: name: subscription-manager - secretRef: name: redhat-registry ansibleVars: 2 edpm_bootstrap_command: | subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }} subscription-manager release --set=9.4 subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms edpm_bootstrap_release_version_package: []", "nodeTemplate: ansible: ansibleVars: edpm_network_config_os_net_config_mappings: edpm-compute-0: nic1: 52:54:04:60:55:22 1 edpm-compute-1: nic1: 52:54:04:60:55:22 neutron_physical_bridge_name: br-ex neutron_public_interface_name: eth0 edpm_network_config_update: false 2 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} - type: interface name: nic3 use_dhcp: false addresses: - ip_netmask: {{ lookup('vars', 'bgpnet0_ip') }}/30 - type: interface name: nic4 use_dhcp: false addresses: - ip_netmask: {{ lookup('vars', 'bgpnet1_ip') }}/30 - type: interface name: lo addresses: - ip_netmask: {{ lookup('vars', 'bgpmainnet_ip') }}/32", "edpm_frr_bgp_ipv4_src_network: bgpmainnet edpm_frr_bgp_neighbor_password: f00barZ edpm_frr_bgp_uplinks: - nic3 - nic4 edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}'", "nodes: edpm-networker-0: 1 
hostName: edpm-networker-0 networks: 2 - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.100 3 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.100 - name: storage subnetName: subnet1 fixedIP: 172.18.0.100 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.100 - name: BgpNet0 subnetName: subnet0 fixedIP: 100.64.0.2 - name: BgpNet1 subnetName: subnet0 fixedIP: 100.65.0.2 - name: BgpMainNet subnetName: subnet0 fixedIP: 172.30.0.2 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: 4 fqdn_internal_api: edpm-networker-0.example.com bmhLabelSelector: 5 nodeName: edpm-networker-0 edpm-networker-1: hostName: edpm-networker-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.101 - name: storage subnetName: subnet1 fixedIP: 172.18.0.101 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.101 - name: BgpNet0 subnetName: subnet0 fixedIP: 100.64.0.2 - name: BgpNet1 subnetName: subnet0 fixedIP: 100.65.0.2 - name: BgpMainNet subnetName: subnet0 fixedIP: 172.30.0.2 ansible: ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-networker-1.example.com bmhLabelSelector: nodeName: edpm-networker-1", "services: - download-cache - bootstrap - configure-network - validate-network - frr - install-os - configure-os - run-os - reboot-os - install-certs - ovn - neutron-metadata - ovn-bgp-agent", "oc create --save-config -f openstack_unprovisioned_networker_node_set.yaml -n openstack", "oc wait openstackdataplanenodeset openstack-networker-nodes --for condition=SetupReady --timeout=10m", "oc get secret -n openstack | grep openstack-networker-nodes dataplanenodeset-openstack-networker-nodes Opaque 1 3m50s", "oc get bmh NAME STATE CONSUMER ONLINE ERROR AGE edpm-compute-0 provisioned openstack-networker-nodes true 3d21h", "oc get openstackdataplaneservice -n openstack NAME AGE download-cache 9m17s bootstrap 9m17s configure-network 9m17s validate-network 9m17s frr 9m17s install-os 9m17s", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-compute-nodes namespace: openstack spec: env: - name: ANSIBLE_FORCE_COLOR value: \"True\" networkAttachments: - ctlplane preProvisioned: false baremetalSetTemplate: deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret bmhNamespace: openshift-machine-api cloudUserName: cloud-admin bmhLabelSelector: app: openstack ctlplaneInterface: enp1s0 dnsSearchDomains: - osptest.openstack.org nodeTemplate: ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret extraMounts: - extraVolType: Logs volumes: - name: ansible-logs persistentVolumeClaim: claimName: <pvc_name> mounts: - name: ansible-logs mountPath: \"/runner/artifacts\" managementNetwork: ctlplane ansible: ansibleUser: cloud-admin ansiblePort: 22 ansibleVarsFrom: - prefix: subscription_manager_ secretRef: name: subscription-manager - secretRef: name: redhat-registry ansibleVars: edpm_bootstrap_command: | subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }} subscription-manager release --set=9.4 subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms 
--enable=rhceph-7-tools-for-rhel-9-x86_64-rpms edpm_bootstrap_release_version_package: [] edpm_network_config_os_net_config_mappings: edpm-compute-0: nic1: 52:54:04:60:55:22 edpm-compute-1: nic1: 52:54:04:60:55:22 neutron_physical_bridge_name: br-ex neutron_public_interface_name: eth0 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} nodes: edpm-compute-0: hostName: edpm-compute-0 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.100 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-0.example.com edpm-compute-1: hostName: edpm-compute-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 ansible: ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-1.example.com", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: data-plane-deploy 1 namespace: openstack", "spec: nodeSets: - openstack-compute-nodes - openstack-networker-nodes", "oc create -f openstack_data_plane_deploy.yaml -n openstack", "oc get pod -l app=openstackansibleee -w oc logs -l app=openstackansibleee -f --max-log-requests 10", "error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit", "oc get openstackdataplanedeployment -n openstack NAME STATUS MESSAGE data-plane-deploy True Setup Complete oc get openstackdataplanenodeset -n openstack NAME STATUS MESSAGE openstack-compute-nodes True NodeSet Ready openstack-networker-nodes True NodeSet Ready", "oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose", "oc rsh -n openstack openstackclient openstack hypervisor list", "oc get openstackdataplanedeployment", "oc get openstackdataplanedeployment NAME NODESETS STATUS MESSAGE data-plane-deploy [\"openstack-compute-nodes\"] False Deployment in progress data-plane-deploy [\"openstack-networker-nodes\"] False Deployment in progress", "oc get openstackansibleee", "oc get openstackansibleee NAME NETWORKATTACHMENTS STATUS MESSAGE bootstrap-openstack-edpm [\"ctlplane\"] True Job complete download-cache-openstack-edpm [\"ctlplane\"] False Job is running repo-setup-openstack-edpm [\"ctlplane\"] True Job complete 
validate-network-another-osdpd [\"ctlplane\"] False Job is running", "oc get openstackansibleee -l openstackdataplanedeployment=<deployment_name>", "oc get openstackansibleee -l openstackdataplanedeployment=data-plane-deploy NAME NETWORKATTACHMENTS STATUS MESSAGE bootstrap-openstack-edpm [\"ctlplane\"] True Job complete download-cache-openstack-edpm [\"ctlplane\"] False Job is running repo-setup-openstack-edpm [\"ctlplane\"] True Job complete", "oc get pods -l app=openstackansibleee configure-network-edpm-compute-j6r4l 0/1 Completed 0 3m36s validate-network-edpm-compute-6g7n9 0/1 Pending 0 0s validate-network-edpm-compute-6g7n9 0/1 ContainerCreating 0 11s validate-network-edpm-compute-6g7n9 1/1 Running 0 13s", "oc rsh validate-network-edpm-compute-6g7n9", "oc debug configure-network-edpm-compute-j6r4l", "ls /runner/artifacts configure-network-edpm-compute validate-network-edpm-compute", "cat /runner/artifacts/configure-network-edpm-compute/stdout" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_dynamic_routing_environment/assembly_creating-the-data-plane
probe::netdev.open
probe::netdev.open Name probe::netdev.open - Called when the device is opened Synopsis netdev.open Values dev_name The device that is going to be opened
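For example, you can exercise this probe from the command line with a one-line script that prints the device name and the process that opened it. This is a minimal sketch, and it assumes the systemtap package and the matching kernel debuginfo are installed and that the command is run as root:

# Trace every network device open and report which process triggered it
stap -e 'probe netdev.open { printf("%s opened by %s\n", dev_name, execname()) }'

Bringing an interface up, for example with ip link set eth0 up, then causes the probe to fire.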
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-netdev-open
Chapter 5. Creating system images by using RHEL image builder web console interface
Chapter 5. Creating system images by using RHEL image builder web console interface RHEL image builder is a tool for creating custom system images. To control RHEL image builder and create your custom system images, you can use the web console interface. 5.1. Accessing the RHEL image builder dashboard in the RHEL web console With the cockpit-composer plugin for the RHEL web console, you can manage image builder blueprints and composes using a graphical interface. Prerequisites You must have root access to the system. You installed RHEL image builder. You installed the cockpit-composer package. Procedure On the host, open https://localhost:9090/ in a web browser. Log in to the web console as the root user. To display the RHEL image builder controls, click the Image Builder button in the upper-left corner of the window. The RHEL image builder dashboard opens, listing existing blueprints, if any. Additional resources Managing systems using the RHEL 8 web console 5.2. Creating a blueprint in the web console interface Creating a blueprint is a necessary step before you build your customized RHEL system image. All the customizations available are optional. You can create a customized blueprint by using the following options: Using the CLI. See Supported image customizations . Using the web console. Follow the steps: Note These blueprint customizations are available for Red Hat Enterprise Linux 9.2 or later versions and Red Hat Enterprise Linux 8.8 or later versions. Prerequisites You have opened the RHEL image builder app from the web console in a browser. See Accessing RHEL image builder GUI in the RHEL web console . Procedure Click Create Blueprint in the upper-right corner. A dialog wizard with fields for the blueprint name and description opens. On the Details page: Enter the name of the blueprint and, optionally, its description. Click Next . Optional: On the Packages page: On the Available packages search, enter the package name. Click the > button to move it to the Chosen packages field. Repeat the steps to search and include as many packages as you want. Click Next . Note These customizations are all optional unless otherwise specified. On the Kernel page, enter a kernel name and the command-line arguments. On the File system page, you can select Use automatic partitioning or Manually configure partitions for your image file system. For manually configuring the partitions, complete the following steps: Click the Manually configure partitions button. The Configure partitions section opens, showing the configuration based on Red Hat standards and security guides. From the dropdown menu, provide details to configure the partitions: For the Mount point field, select one of the following mount point type options: / - the root mount point /app /boot /data /home /opt /srv /usr /usr/local /var You can also add an additional path to the Mount point , such as /tmp . For example: /var as a prefix and /tmp as an additional path results in /var/tmp . Note Depending on the Mount point type you choose, the file system type changes to xfs . For the Minimum size partition field of the file system, enter the needed minimum partition size. In the Minimum size dropdown menu, you can use common size units such as GiB , MiB , or KiB . The default unit is GiB . Note Minimum size means that RHEL image builder can still increase the partition sizes, in case they are too small to create a working image. To add more partitions, click the Add partition button.
If you see the error message "Duplicate partitions: Only one partition at each mount point can be created.", you can: Click the Remove button to remove the duplicated partition. Choose a new mount point for the partition you want to create. After you finish the partitioning configuration, click Next . On the Services page, you can enable or disable services: Enter the service names you want to enable or disable, separating them by a comma, by space, or by pressing the Enter key. Enter the Enabled services . Enter the Disabled services . Click Next . On the Firewall page, set up your firewall settings: Enter the Ports and the firewall services you want to enable or disable. Click the Add zone button to manage your firewall rules for each zone independently. Click Next . On the Users page, add users by following the steps: Click Add user . Enter a Username , a Password , and an SSH key . You can also mark the user as a privileged user by clicking the Server administrator checkbox. Click Next . On the Groups page, add groups by completing the following steps: Click the Add groups button. Enter a Group name and a Group ID . You can add more groups. Click Next . On the SSH keys page, add a key: Click the Add key button. Enter the SSH key. Enter a User . Click Next . On the Timezone page, set your time zone settings: On the Timezone field, enter the time zone you want to add to your system image. For example, add the following time zone format: "US/Eastern". If you do not set a time zone, the system uses Coordinated Universal Time (UTC) as default. Enter the NTP servers . Click Next . On the Locale page, complete the following steps: On the Keyboard search field, enter the keyboard layout you want to add to your system image. For example: "us". On the Languages search field, enter the language you want to add to your system image. For example: ["en_US.UTF-8"]. Click Next . On the Others page, complete the following steps: On the Hostname field, enter the hostname you want to add to your system image. If you do not add a hostname, the operating system determines the hostname. Mandatory only for the Simplified Installer image: On the Installation Devices field, enter a valid node for your system image. For example: /dev/sda1 . Click Next . Mandatory only when building images for FDO: On the FIDO device onboarding page, complete the following steps: On the Manufacturing server URL field, enter the URL of the manufacturing server. On the DIUN public key insecure field, enter the insecure public key. On the DIUN public key hash field, enter the public key hash. On the DIUN public key root certs field, enter the public key root certs. Click Next . On the OpenSCAP page, complete the following steps: On the Datastream field, enter the datastream remediation instructions you want to add to your system image. On the Profile ID field, enter the profile_id security profile you want to add to your system image. Click Next . Mandatory only when building images that use Ignition: On the Ignition page, complete the following steps: On the Firstboot URL field, enter the firstboot URL you want to add to your system image. On the Embedded Data field, drag or upload your file. Click Next . On the Review page, review the details about the blueprint. Click Create . The RHEL image builder view opens, listing existing blueprints. 5.3. Importing a blueprint in the RHEL image builder web console interface You can import and use an already existing blueprint. The system automatically resolves all the dependencies.
Prerequisites You have opened the RHEL image builder app from the web console in a browser. You have a blueprint that you want to import to use in the RHEL image builder web console interface. Procedure On the RHEL image builder dashboard, click Import blueprint . The Import blueprint wizard opens. From the Upload field, either drag or upload an existing blueprint. This blueprint can be in either TOML or JSON format. Click Import . The dashboard lists the blueprint you imported. Verification When you click the blueprint you imported, you have access to a dashboard with all the customizations for the blueprint that you imported. To verify the packages that have been selected for the imported blueprint, navigate to the Packages tab. To list all the package dependencies, click All . The list is searchable and can be ordered. Next steps Optional: To modify any customization: From the Customizations dashboard, click the customization you want to change. Optionally, you can click Edit blueprint to navigate to all the available customization options. Additional resources Creating a system image by using RHEL image builder in the web console interface 5.4. Exporting a blueprint from the RHEL image builder web console interface You can export a blueprint to use the customizations in another system. You can export the blueprint in the TOML or in the JSON format. Both formats work on the CLI and also in the API interface. Prerequisites You have opened the RHEL image builder app from the web console in a browser. You have a blueprint that you want to export. Procedure On the image builder dashboard, select the blueprint you want to export. Click Export blueprint . The Export blueprint wizard opens. Click the Export button to download the blueprint as a file or click the Copy button to copy the blueprint to the clipboard. Verification Open the exported blueprint in a text editor to inspect and review it. 5.5. Creating a system image by using RHEL image builder in the web console interface You can create a customized RHEL system image from a blueprint by completing the following steps. Prerequisites You opened the RHEL image builder app from the web console in a browser. You created a blueprint. Procedure In the RHEL image builder dashboard, click the blueprint tab. On the blueprint table, find the blueprint you want to use to build an image. On the right side of the chosen blueprint, click Create Image . The Create image dialog wizard opens. On the Image output page, complete the following steps: From the Select a blueprint list, select the blueprint you want. From the Image output type list, select the image output type you want. Depending on the image type you select, you need to add further details. Click Next . On the Review page, review the details about the image creation and click Create image . The image build starts and takes up to 20 minutes to complete. Verification After the image finishes building, you can: Download the image. On the RHEL image builder dashboard, click the Node options (β«Ά) menu and select Download image . Download the logs of the image to inspect the elements and verify whether any issues are found. On the RHEL image builder dashboard, click the Node options (β«Ά) menu and select Download logs .
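If you prefer to drive or verify the same workflow from a terminal, the composer-cli tool that ships with RHEL image builder provides equivalent operations. The following commands are a sketch only; example-blueprint and <compose_uuid> are placeholders, and the image types you can request depend on your installation:

# List the available blueprints and start a compose from one of them
composer-cli blueprints list
composer-cli compose start example-blueprint qcow2

# Check progress, then download the image and its logs after the compose finishes
composer-cli compose status
composer-cli compose image <compose_uuid>
composer-cli compose logs <compose_uuid>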
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/creating-system-images-with-composer-web-console-interface_composing-a-customized-rhel-system-image
Chapter 6. Fixed issues
Chapter 6. Fixed issues The following sections list the issues fixed in AMQ Streams 1.8.x. Red Hat recommends that you upgrade to the latest patch release. For details of the issues fixed in Kafka 2.8.0, refer to the Kafka 2.8.0 Release Notes . 6.1. Fixed issues for AMQ Streams 1.8.4 The AMQ Streams 1.8.4 patch release is now available. The AMQ Streams product images have been upgraded to version 1.8.4. For additional details about the issues resolved in AMQ Streams 1.8.4, see AMQ Streams 1.8.x Resolved Issues . Log4j2 vulnerability The 1.8.4 release fixes a remote code execution vulnerability for AMQ Streams components that use log4j2. The vulnerability could allow remote code execution on the server if the system logs a string value from an unauthorized source. This affects log4j versions between 2.0 and 2.14.1. For more information, see CVE-2021-44228 . 6.2. Fixed issues for AMQ Streams 1.8.0 Issue Number Description ENTMQST-1529 FileStreamSourceConnector stops when using a large file. ENTMQST-2359 Kafka Bridge does not handle assignment and subscription. ENTMQST-2453 The kafka-exporter pod restarts for no reason. ENTMQST-2459 Running Kafka Exporter leads to high CPU usage. ENTMQST-2511 Fine tune the health checks to stop Kafka Exporter restarting during rolling updates. ENTMQST-2777 Custom Bridge labels are not set when the service template is not specified. ENTMQST-2974 Changing the log level for Kafka Connect connectors only works temporarily. Table 6.1. Fixed common vulnerabilities and exposures (CVEs) Issue Number Description ENTMQST-1934 CVE-2020-9488 log4j: improper validation of certificate with host mismatch in SMTP appender [amq-st-1]. ENTMQST-2613 CVE-2020-13949 libthrift: potential DoS when processing untrusted payloads [amq-st-1]. ENTMQST-2617 CVE-2021-21290 netty: Information disclosure via the local system temporary directory [amq-st-1]. ENTMQST-2647 CVE-2021-21295 netty: possible request smuggling in HTTP/2 due missing validation [amq-st-1]. ENTMQST-2663 CVE-2021-27568 json-smart: uncaught exception may lead to crash or information disclosure [amq-st-1]. ENTMQST-2711 CVE-2021-21409 netty: Request smuggling via content-length header [amq-st-1]. ENTMQST-2821 CVE-2021-28168 jersey-common: jersey: Local information disclosure via system temporary directory [amq-st-1]. ENTMQST-2867 CVE-2021-29425 commons-io: apache-commons-io: Limited path traversal in Apache Commons IO 2.2 to 2.6 [amq-st-1]. ENTMQST-2908 CVE-2021-28165 jetty-server: jetty: Resource exhaustion when receiving an invalid large TLS frame [amq-st-1]. ENTMQST-2909 CVE-2021-28164 jetty-server: jetty: Ambiguous paths can access WEB-INF [amq-st-1]. ENTMQST-2910 CVE-2021-28163 jetty-server: jetty: Symlink directory exposes webapp directory contents [amq-st-1]. ENTMQST-2980 CVE-2021-28169 jetty-server: jetty: requests to the ConcatServlet and WelcomeFilter are able to access protected resources within the WEB-INF directory [amq-st-1]. ENTMQST-3023 CVE-2021-34428 jetty-server: jetty: SessionListener can prevent a session from being invalidated breaking logout [amq-st-1].
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_openshift/fixed-issues-str
23.6. Storage Devices
23.6. Storage Devices You can install Red Hat Enterprise Linux on a large variety of storage devices. For System z, select Specialized Storage Devices Figure 23.4. Storage devices Basic Storage Devices This option does not apply to System z. Specialized Storage Devices Select Specialized Storage Devices to install Red Hat Enterprise Linux on the following storage devices: Direct access storage devices (DASDs) Multipath devices such as FCP-attachable SCSI LUN with multiple paths Storage area networks (SANs) such as FCP-attachable SCSI LUNs with a single path Use the Specialized Storage Devices option to configure Internet Small Computer System Interface (iSCSI) connections. You cannot use the FCoE (Fiber Channel over Ethernet) option on System z; this option is grayed out. Note Monitoring of LVM and software RAID devices by the mdeventd daemon is not performed during installation. 23.6.1. The Storage Devices Selection Screen The storage devices selection screen displays all storage devices to which anaconda has access. Devices are grouped under the following tabs: Basic Devices Basic storage devices directly connected to the local system, such as hard disk drives and solid-state drives. On System z, this contains activated DASDs. Firmware RAID Storage devices attached to a firmware RAID controller. This does not apply to System z. Multipath Devices Storage devices accessible through more than one path, such as through multiple SCSI controllers or Fiber Channel ports on the same system. Important The installer only detects multipath storage devices with serial numbers that are 16 or 32 characters in length. Other SAN Devices Any other devices available on a storage area network (SAN) such as FCP LUNs attached over one single path. Figure 23.5. Select storage devices - Basic Devices Figure 23.6. Select storage devices - Multipath Devices Figure 23.7. Select storage devices - Other SAN Devices The storage devices selection screen also contains a Search tab that allows you to filter storage devices either by their World Wide Identifier (WWID) or by the port, target, or logical unit number (LUN) at which they are accessed. Figure 23.8. The Storage Devices Search Tab The tab contains a drop-down menu to select searching by port, target, WWID, or LUN (with corresponding text boxes for these values). Searching by WWID or LUN requires additional values in the corresponding text box. Each tab presents a list of devices detected by anaconda , with information about the device to help you to identify it. A small drop-down menu marked with an icon is located to the right of the column headings. This menu allows you to select the types of data presented on each device. For example, the menu on the Multipath Devices tab allows you to specify any of WWID , Capacity , Vendor , Interconnect , and Paths to include among the details presented for each device. Reducing or expanding the amount of information presented might help you to identify particular devices. Figure 23.9. Selecting Columns Each device is presented on a separate row, with a checkbox to its left. Click the checkbox to make a device available during the installation process, or click the radio button at the left of the column headings to select or deselect all the devices listed in a particular screen. Later in the installation process, you can choose to install Red Hat Enterprise Linux onto any of the devices selected here, and can choose to automatically mount any of the other devices selected here as part of the installed system. 
Note that the devices that you select here are not automatically erased by the installation process. Selecting a device on this screen does not, in itself, place data stored on the device at risk. Note also that any devices that you do not select here to form part of the installed system can be added to the system after installation by modifying the /etc/fstab file. when you have selected the storage devices to make available during installation, click and proceed to Section 23.7, "Setting the Hostname" 23.6.1.1. DASD low-level formatting Any DASDs used must be low-level formatted. The installer detects this and lists the DASDs that need formatting. If any of the DASDs specified interactively in linuxrc or in a parameter or configuration file are not yet low-level formatted, the following confirmation dialog appears: Figure 23.10. Unformatted DASD Devices Found To automatically allow low-level formatting of unformatted online DASDs specify the kickstart command zerombr . Refer to Chapter 32, Kickstart Installations for more details. 23.6.1.2. Advanced Storage Options From this screen you can configure an iSCSI (SCSI over TCP/IP) target or FCP LUNs. Refer to Appendix B, iSCSI Disks for an introduction to iSCSI. Figure 23.11. Advanced Storage Options 23.6.1.2.1. Configure iSCSI parameters To use iSCSI storage devices for the installation, anaconda must be able to discover them as iSCSI targets and be able to create an iSCSI session to access them. Each of these steps might require a username and password for CHAP (Challenge Handshake Authentication Protocol) authentication. Additionally, you can configure an iSCSI target to authenticate the iSCSI initiator on the system to which the target is attached ( reverse CHAP ), both for discovery and for the session. Used together, CHAP and reverse CHAP are called mutual CHAP or two-way CHAP . Mutual CHAP provides the greatest level of security for iSCSI connections, particularly if the username and password are different for CHAP authentication and reverse CHAP authentication. Repeat the iSCSI discovery and iSCSI login steps as many times as necessary to add all required iSCSI storage. However, you cannot change the name of the iSCSI initiator after you attempt discovery for the first time. To change the iSCSI initiator name, you must restart the installation. Procedure 23.1. iSCSI discovery Use the iSCSI Discovery Details dialog to provide anaconda with the information that it needs to discover the iSCSI target. Figure 23.12. The iSCSI Discovery Details dialog Enter the IP address of the iSCSI target in the Target IP Address field. Provide a name in the iSCSI Initiator Name field for the iSCSI initiator in iSCSI qualified name (IQN) format. A valid IQN contains: the string iqn. (note the period) a date code that specifies the year and month in which your organization's Internet domain or subdomain name was registered, represented as four digits for the year, a dash, and two digits for the month, followed by a period. For example, represent September 2010 as 2010-09. your organization's Internet domain or subdomain name, presented in reverse order with the top-level domain first. For example, represent the subdomain storage.example.com as com.example.storage a colon followed by a string that uniquely identifies this particular iSCSI initiator within your domain or subdomain. For example, :diskarrays-sn-a8675309 . 
A complete IQN therefore resembles: iqn.2010-09.storage.example.com:diskarrays-sn-a8675309 , and anaconda pre-populates the iSCSI Initiator Name field with a name in this format to help you with the structure. For more information on IQNs, refer to 3.2.6. iSCSI Names in RFC 3720 - Internet Small Computer Systems Interface (iSCSI) available from http://tools.ietf.org/html/rfc3720#section-3.2.6 and 1. iSCSI Names and Addresses in RFC 3721 - Internet Small Computer Systems Interface (iSCSI) Naming and Discovery available from http://tools.ietf.org/html/rfc3721#section-1 . Use the drop-down menu to specify the type of authentication to use for iSCSI discovery: Figure 23.13. iSCSI discovery authentication no credentials CHAP pair CHAP pair and a reverse pair If you selected CHAP pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields. Figure 23.14. CHAP pair If you selected CHAP pair and a reverse pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password field and the username and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Figure 23.15. CHAP pair and a reverse pair Click Start Discovery . Anaconda attempts to discover an iSCSI target based on the information that you provided. If discovery succeeds, the iSCSI Discovered Nodes dialog presents you with a list of all the iSCSI nodes discovered on the target. Each node is presented with a checkbox beside it. Click the checkboxes to select the nodes to use for installation. Figure 23.16. The iSCSI Discovered Nodes dialog Click Login to initiate an iSCSI session. Procedure 23.2. Starting an iSCSI session Use the iSCSI Nodes Login dialog to provide anaconda with the information that it needs to log into the nodes on the iSCSI target and start an iSCSI session. Figure 23.17. The iSCSI Nodes Login dialog Use the drop-down menu to specify the type of authentication to use for the iSCSI session: Figure 23.18. iSCSI session authentication no credentials CHAP pair CHAP pair and a reverse pair Use the credentials from the discovery step If your environment uses the same type of authentication and same username and password for iSCSI discovery and for the iSCSI session, select Use the credentials from the discovery step to reuse these credentials. If you selected CHAP pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields. Figure 23.19. CHAP pair If you selected CHAP pair and a reverse pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields and the username and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Figure 23.20. CHAP pair and a reverse pair Click Login . Anaconda attempts to log into the nodes on the iSCSI target based on the information that you provided. The iSCSI Login Results dialog presents you with the results. Figure 23.21. The iSCSI Login Results dialog Click OK to continue. 23.6.1.2.2. FCP Devices FCP devices enable IBM System z to use SCSI devices rather than, or in addition to, DASD devices. FCP devices provide a switched fabric topology that enables System z systems to use SCSI LUNs as disk devices in addition to traditional DASD devices. 
IBM System z requires that any FCP device be entered manually (either in the installation program interactively, or specified as unique parameter entries in the parameter or CMS configuration file) for the installation program to activate FCP LUNs. The values entered here are unique to each site in which they are set up. Notes Interactive creation of an FCP device is only possible in graphical mode. It is not possible to interactively configure an FCP device in a text-only install. Each value entered should be verified as correct, as any mistakes made may cause the system not to operate properly. Use only lower-case letters in hex values. For more information on these values, refer to the hardware documentation or check with the system administrator who set up the network for this system. To configure a Fiber Channel Protocol SCSI device, select Add ZFCP LUN and click Add Drive . In the Add FCP device dialog, fill in the details for the 16-bit device number, 64-bit World Wide Port Number (WWPN) and 64-bit FCP LUN. Click the Add button to connect to the FCP device using this information. Figure 23.22. Add FCP Device The newly added device should then be present and usable in the storage device selection screen on the Multipath Devices tab, if you have activated more than one path to the same LUN, or on Other SAN Devices , if you have activated only one path to the LUN. Important The installer requires the definition of a DASD. For a SCSI-only installation, enter none as the parameter interactively during phase 1 of an interactive installation, or add DASD=none in the parameter or CMS configuration file. This satisfies the requirement for a defined DASD parameter, while resulting in a SCSI-only environment.
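For non-interactive installations, the same FCP and DASD settings can be supplied as shell-style key=value entries in the parameter or CMS configuration file. The following lines are a minimal sketch only; the device bus ID, WWPN, and FCP LUN values shown are illustrative assumptions and must be replaced with the values for your site.
# CMS configuration file entries (illustrative values):
FCP_1="0.0.fc00 0x5105074308c212e9 0x401040a000000000"   # device_bus_ID WWPN FCP_LUN
DASD=none                                                 # SCSI-only installation, as described above
Unformatted online DASDs in a kickstart installation are handled by the zerombr kickstart command mentioned earlier in this section.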
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/storage_devices-s390
18.3. Remote Management over TLS and SSL
18.3. Remote Management over TLS and SSL You can manage virtual machines using the TLS and SSL protocols. TLS and SSL provide greater scalability but are more complicated than SSH (refer to Section 18.2, "Remote Management with SSH" ). TLS and SSL are the same technologies used by web browsers for secure connections. The libvirt management connection opens a TCP port for incoming connections, which is securely encrypted and authenticated based on X.509 certificates. The following procedures provide instructions on creating and deploying authentication certificates for TLS and SSL management. Procedure 18.1. Creating a certificate authority (CA) key for TLS management Before you begin, confirm that gnutls-utils is installed. If not, install it: Generate a private key, using the following command: After the key is generated, create a signature file so the key can be self-signed. To do this, create a file with signature details and name it ca.info . This file should contain the following: Generate the self-signed key with the following command: After the file is generated, the ca.info file can be deleted using the rm command. The file that results from the generation process is named cacert.pem . This file is the public key (certificate). The loaded file cakey.pem is the private key. For security purposes, this file should be kept private, and not reside in a shared space. Install the cacert.pem CA certificate file on all clients and servers in the /etc/pki/CA/cacert.pem directory to let them know that the certificate issued by your CA can be trusted. To view the contents of this file, run: This is all that is required to set up your CA. Keep the CA's private key safe, as you will need it in order to issue certificates for your clients and servers. Procedure 18.2. Issuing a server certificate This procedure demonstrates how to issue a certificate with the X.509 Common Name (CN) field set to the host name of the server. The CN must match the host name that clients will use to connect to the server. In this example, clients will be connecting to the server using the URI: qemu:// mycommonname /system , so the CN field should be identical, for this example "mycommonname". Create a private key for the server. Generate a signature for the CA's private key by first creating a template file called server.info . Make sure that the CN is set to be the same as the server's host name: Create the certificate: This results in two files being generated: serverkey.pem - The server's private key servercert.pem - The server's public key Make sure to keep the location of the private key secret. To view the contents of the file, use the following command: When opening this file, the CN= parameter should be the same as the CN that you set earlier. For example, mycommonname . Install the two files in the following locations: serverkey.pem - the server's private key. Place this file in the following location: /etc/pki/libvirt/private/serverkey.pem servercert.pem - the server's certificate. Install it in the following location on the server: /etc/pki/libvirt/servercert.pem Procedure 18.3. Issuing a client certificate For every client (that is to say any program linked with libvirt, such as virt-manager ), you need to issue a certificate with the X.509 Distinguished Name (DN) field set to a suitable name. This needs to be decided on a corporate level.
For example purposes, the following information will be used: Create a private key: Generate a signature for the CA's private key by first creating a template file called client.info . The file should contain the following (fields should be customized to reflect your region/location): Sign the certificate with the following command: Install the certificates on the client machine:
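After the client certificate and key are installed, you can verify the connection end to end. The following is a minimal sketch, assuming the server host name mycommonname from the earlier example; the libvirtd listener settings shown are a typical RHEL 7 configuration and are not steps taken from this procedure, so adjust them to your environment.
# On the server: ensure the TLS listener is enabled, then restart libvirtd.
#   In /etc/libvirt/libvirtd.conf:   listen_tls = 1
#   In /etc/sysconfig/libvirtd:      LIBVIRTD_ARGS="--listen"
systemctl restart libvirtd
# On the client: connect over TLS using the certificates installed under /etc/pki/libvirt.
virsh -c qemu+tls://mycommonname/system list --all
If the certificates are installed correctly, virsh lists the guests on the remote host; a certificate verification error at this point usually indicates a CN mismatch or a missing cacert.pem on the client.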
[ "yum install gnutls-utils", "certtool --generate-privkey > cakey.pem", "cn = Name of your organization ca cert_signing_key", "certtool --generate-self-signed --load-privkey cakey.pem --template ca.info --outfile cacert.pem", "certtool -i --infile cacert.pem", "certtool --generate-privkey > serverkey.pem", "organization = Name of your organization cn = mycommonname tls_www_server encryption_key signing_key", "certtool --generate-certificate --load-privkey serverkey.pem --load-ca-certificate cacert.pem --load-ca-privkey cakey.pem \\ --template server.info --outfile servercert.pem", "certtool -i --infile servercert.pem", "C=USA,ST=North Carolina,L=Raleigh,O=Red Hat,CN=name_of_client", "certtool --generate-privkey > clientkey.pem", "country = USA state = North Carolina locality = Raleigh organization = Red Hat cn = client1 tls_www_client encryption_key signing_key", "certtool --generate-certificate --load-privkey clientkey.pem --load-ca-certificate cacert.pem \\ --load-ca-privkey cakey.pem --template client.info --outfile clientcert.pem", "cp clientkey.pem /etc/pki/libvirt/private/clientkey.pem cp clientcert.pem /etc/pki/libvirt/clientcert.pem" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Remote_management_of_guests-Remote_management_over_TLS_and_SSL
Chapter 1. About the Red Hat OpenStack Platform framework for upgrades
Chapter 1. About the Red Hat OpenStack Platform framework for upgrades The Red Hat OpenStack Platform (RHOSP) framework for upgrades is a workflow to upgrade your RHOSP environment from one long life version to the next long life version. This workflow is an in-place solution and the upgrade occurs within your existing environment. 1.1. High-level changes in Red Hat OpenStack Platform 17.1 The following high-level changes occur during the upgrade to Red Hat OpenStack Platform (RHOSP) 17.1: The RHOSP upgrade and the operating system upgrade are separated into two distinct phases. You upgrade RHOSP first, then you upgrade the operating system. You can upgrade a portion of your Compute nodes to RHEL 9.2 while the rest of your Compute nodes remain on RHEL 8.4. This is called a Multi-RHEL environment. With an upgrade to Red Hat Ceph Storage 5, cephadm now manages Red Hat Ceph Storage. Previous versions of Red Hat Ceph Storage were managed by ceph-ansible . You can upgrade your Red Hat Ceph Storage nodes from version 5 to version 6 after the upgrade to RHOSP 17.1 is complete. Otherwise, Red Hat Ceph Storage nodes can remain on version 5 with RHOSP 17.1 until the end of the Red Hat Ceph Storage 5 lifecycle. To upgrade from Red Hat Ceph Storage version 5 to version 6, begin with one of the following procedures for your environment: Director-deployed Red Hat Ceph Storage environments: Updating the cephadm client External Red Hat Ceph Storage cluster environments: Updating the Red Hat Ceph Storage container image The RHOSP overcloud uses Open Virtual Network (OVN) as the default ML2 mechanism driver in versions 16.2 and 17.1. If your RHOSP 16.2 deployment uses the OVS mechanism driver, you must upgrade to 17.1 with the OVS mechanism driver. Do not attempt to change the mechanism driver during the upgrade. After the upgrade, you can migrate from the OVS to the OVN mechanism driver. See Migrating to the OVN mechanism driver . In ML2/OVN deployments, you can enable egress minimum and maximum bandwidth policies for hardware offloaded ports. For more information, see Configuring the Networking service for QoS policies in Configuring Red Hat OpenStack Platform networking . The undercloud and overcloud both run on Red Hat Enterprise Linux 9. 1.2. Changes in Red Hat Enterprise Linux 9 Red Hat OpenStack Platform (RHOSP) 17.1 uses Red Hat Enterprise Linux (RHEL) 9.2 as the base operating system. As a part of the upgrade process, you will upgrade the base operating system of nodes to RHEL 9.2. Before beginning the upgrade, review the following information to familiarize yourself with RHEL 9: If your system contains packages with RSA/SHA-1 signatures, you must remove them or contact the vendor to get packages with RSA/SHA-256 signatures before you upgrade to RHOSP 17.1. For more information, see SHA-1 deprecation in Red Hat Enterprise Linux 9 . For more information about the latest changes in RHEL 9, see the Red Hat Enterprise Linux 9.2 Release Notes . For more information about the key differences between Red Hat Enterprise Linux 8 and 9, see Considerations in adopting RHEL 9 . For general information about Red Hat Enterprise Linux 9, see Product Documentation for Red Hat Enterprise Linux 9 . For more information about upgrading from RHEL 8 to RHEL 9, see Upgrading from RHEL 8 to RHEL 9 . 1.3. Upgrade framework for long life versions You can use the Red Hat OpenStack Platform (RHOSP) upgrade framework to perform an in-place upgrade path through multiple versions of the overcloud.
The goal is to provide you with an opportunity to remain on certain OpenStack versions that are considered long life versions and upgrade when the long life version is available. The Red Hat OpenStack Platform upgrade process also upgrades the version of Red Hat Enterprise Linux (RHEL) on your nodes. This guide provides an upgrade framework through the following versions: Current Version Target Version Red Hat OpenStack Platform 16.2.4 and later Red Hat OpenStack Platform 17.1 latest For detailed support dates and information on the lifecycle support for Red Hat OpenStack Platform, see Red Hat OpenStack Platform Life Cycle . Upgrade paths for long life releases Familiarize yourself with the possible update and upgrade paths before you begin an upgrade. If you are using an environment that is earlier than RHOSP 16.2.4, before you upgrade from major version to major version, you must first update your existing environment to the latest minor release. For example, if your current deployment is Red Hat OpenStack Platform (RHOSP) 16.2.1 on Red Hat Enterprise Linux (RHEL) 8.4, you must perform a minor update to the latest RHOSP 16.2 version before you upgrade to RHOSP 17.1. Note You can view your current RHOSP and RHEL versions in the /etc/rhosp-release and /etc/redhat-release files. Table 1.1. Updates version path Current version Target version RHOSP 16.2.x on RHEL 8.4 RHOSP 16.2 latest on RHEL 8.4 latest RHOSP 17.0.x on RHEL 9.0 RHOSP 17.0 latest on RHEL 9.0 latest RHOSP 17.0.x on RHEL 9.0 RHOSP 17.1 latest on RHEL 9.2 latest RHOSP 17.1.x on RHEL 9.2 RHOSP 17.1 latest on RHEL 9.2 latest For more information, see Performing a minor update of Red Hat OpenStack Platform . Table 1.2. Upgrades version path Current version Target version RHOSP 16.2 on RHEL 8.4 RHOSP 17.1 latest on RHEL 9.2 latest Red Hat provides two options for upgrading your environment to the long life release: In-place upgrade Perform an upgrade of the services in your existing environment. This guide primarily focuses on this option. Parallel migration Create a new RHOSP 17.1 environment and migrate your workloads from your current environment to the new environment. For more information about RHOSP parallel migration, contact Red Hat Global Professional Services. 1.4. Upgrade duration and impact The durations in the following table were recorded in a test environment that consisted of an overcloud with 200 nodes, and 9 Ceph Storage hosts with 17 object storage daemons (OSDs) each. The durations in the table might not apply to all production environments. For example, if your hardware has low specifications or an extended boot period, allow for more time with these durations. Durations also depend on network I/O to container and package content, and disk I/O on the host. To accurately estimate the upgrade duration for each task, perform these procedures in a test environment with hardware that is similar to your production environment. Table 1.3. Duration and impact of In-place upgrade Duration Notes Undercloud upgrade 30 minutes No disruption to the overcloud. Overcloud adoption and preparation 10 minutes for bare metal adoption 30 minutes for upgrade prepare Duration scales based on the size of the overcloud. No disruption to the overcloud. 
Red Hat Ceph Storage upgrade Ceph upgrade ansible run: 90 minutes total, 10 minutes per node Ceph ansible run for cephadm adoption: 30 minutes total, 3 minutes per node Post ceph upgrade and adoption overcloud upgrade prepare: 20 minutes Duration scales based on the number of Storage hosts and OSDs. Storage performance is degraded. Overcloud OpenStack upgrade 120 minutes Duration scales based on the size of the overcloud. During this process, agents are restarted and API transactions might be lost. Disable client access to the OpenStack API during this stage. Undercloud system upgrade 40 minutes Includes multiple reboots. Reboot times are hardware dependent. Includes SELinux relabeling. Hosts with large numbers of files take significantly longer. No disruption to the overcloud. Overcloud non-Compute host system upgrade 30 minutes for upgrade prepare 40 minutes per host system upgrade Includes multiple reboots. Reboot times are hardware dependent. Includes SELinux relabeling. Hosts with large numbers of files take significantly longer. Performance is degraded. Overcloud Compute host upgrade 40 minutes per batch of hosts 30 minutes for upgrade prepare You run upgrade prepare on select batches of Compute nodes. The duration depends on the number of Compute nodes in each batch. There is no outage. Includes multiple reboots. Reboot times are hardware dependent. Includes SELinux relabeling. Hosts with large numbers of files take significantly longer. To prevent workload outages during the reboot, you can migrate the workloads to another host beforehand. 1.5. Planning and preparation for an in-place upgrade Before you conduct an in-place upgrade of your OpenStack Platform environment, create a plan for the upgrade and accommodate any potential obstacles that might block a successful upgrade. 1.5.1. Familiarize yourself with Red Hat OpenStack Platform 17.1 Before you perform an upgrade, familiarize yourself with Red Hat OpenStack Platform 17.1 to help you understand the resulting environment and any potential version-to-version changes that might affect your upgrade. To familiarize yourself with Red Hat OpenStack Platform 17.1, follow these suggestions: Read the release notes for all versions across the upgrade path and identify any potential aspects that require planning: Components that contain new features Known issues Open the release notes for each version using these links: Red Hat OpenStack Platform 16.2 , which is your source version Red Hat OpenStack Platform 17.1 which is your target version Read the Installing and managing Red Hat OpenStack Platform with director guide for version 17.1 and familiarize yourself with any new requirements and processes in this guide. Install a proof-of-concept Red Hat OpenStack Platform 17.1 undercloud and overcloud. Develop hands-on experience of the target OpenStack Platform version and investigate potential differences between the target version and your current version. 1.5.2. Minor version update requirement To upgrade from Red Hat OpenStack Platform (RHOSP) 16.2 to 17.1, your environment must be running RHOSP version 16.2.4 or later. If you are using a version of RHOSP that is earlier than 16.2.4, update the environment to the latest minor version of your current release. For example, update your Red Hat OpenStack Platform 16.2.3 environment to the latest 16.2 version before upgrading to Red Hat OpenStack Platform 17.1. 
For instructions on performing a minor version update for Red Hat OpenStack Platform 16.2, see Keeping Red Hat OpenStack Platform Updated . 1.5.3. Leapp upgrade usage in Red Hat OpenStack Platform The long-life Red Hat OpenStack Platform upgrade requires a base operating system upgrade from Red Hat Enterprise Linux (RHEL) 8.4 to RHEL 9.2. The upgrade process uses the Leapp utility to perform the upgrade to RHEL 9.2. However, some aspects of the Leapp upgrade are customized to ensure that you are upgrading specifically from RHEL 8.4 to RHEL 9.2. To upgrade your operating system to RHEL 9.2, see Performing the undercloud system upgrade . Limitations For information on potential limitations that might affect your upgrade, see the following sections from the Upgrading from RHEL 8 to RHEL 9 guide: Planning an upgrade Known issues If any known limitations affect your environment, seek advice from the Red Hat Technical Support Team . Troubleshooting For information about troubleshooting potential Leapp issues, see Troubleshooting in Upgrading from RHEL 8 to RHEL 9 . 1.5.4. Storage driver certification Before you upgrade, confirm deployed storage drivers are certified for use with Red Hat OpenStack Platform 17.1. For information on software certified for use with Red Hat OpenStack Platform 17.1, see Software certified for Red Hat OpenStack Platform 17 . 1.5.5. Supported upgrade scenarios Before proceeding with the upgrade, check that your overcloud is supported. Note If you are uncertain whether a particular scenario not mentioned in these lists is supported, seek advice from the Red Hat Technical Support Team . Supported scenarios The following in-place upgrade scenarios are tested and supported: Standard environments with default role types: Controller, Compute, and Ceph Storage OSD Split-Controller composable roles Ceph Storage composable roles Hyper-Converged Infrastructure: Compute and Ceph Storage OSD services on the same node Environments with Network Functions Virtualization (NFV) technologies: Single-root input/output virtualization (SR-IOV) and Data Plane Development Kit (DPDK) Environments with Instance HA enabled Edge and Distributed Compute Node (DCN) scenarios Note During an upgrade procedure, nova live migrations are supported. However, evacuations initiated by Instance HA are not supported. When you upgrade a Compute node, the node is shut down cleanly and any workload running on the node is not evacuated by Instance HA automatically. Instead, you must perform live migration manually. Unsupported scenarios The following in-place upgrade scenarios are not supported: Upgrades with a single Controller node 1.5.6. Red Hat Virtualization upgrade process If you are running your control plane on Red Hat Virtualization, there is no effect on the Red Hat OpenStack Platform (RHOSP) upgrade process. The RHOSP upgrade is the same regardless of whether or not an environment is running on Red Hat Virtualization. 1.5.7. Known issues that might block an upgrade Review the following known issues that might affect a successful upgrade. BZ#2224085 - Leapp is stuck in Interim System when - -debug is specified If you upgrade your operating system from RHEL 7.x to RHEL 8.x, or from RHEL 8.x to RHEL 9.x, do not run a Leapp upgrade with the --debug option. The system remains in the early console in setup code state and does not reboot automatically. To avoid this issue, the UpgradeLeappDebug parameter is set to false by default. Do not change this value in your templates. 
BZ#2203785 - Collectd sensubility stops working after overcloud node was rebooted. After rebooting an overcloud node, a permission issue causes collectd-sensubility to stop working. As a result, collectd-sensubility stops reporting container health. During an upgrade from Red Hat OpenStack Platform (RHOSP) 16.2 to RHOSP 17.1, overcloud nodes are rebooted as part of the Leapp upgrade. To ensure that collectd-sensubility continues to work, run the following command: BZ#2180542 - After upgrade, manila ceph-nfs fails to start with error: ceph-nfs start on leaf1-controller-0 returned 'error' because 'failed' The Pacemaker-controlled ceph-nfs resource requires a runtime directory to store some process data. The directory is created when you install or upgrade RHOSP. Currently, a reboot of the Controller nodes removes the directory, and the ceph-nfs service does not recover when the Controller nodes are rebooted. If all Controller nodes are rebooted, the ceph-nfs service fails permanently. You can apply the following workaround: If you reboot a Controller node, log in to the Controller node and create a /var/run/ceph directory: USD mkdir -p /var/run/ceph Repeat this step on all Controller nodes that have been rebooted. If the ceph-nfs-pacemaker service has been marked as failed, after creating the directory, run the following command from any of the Controller nodes: USD pcs resource cleanup BZ#2210873 - assimilate_{{ tripleo_cephadm_cluster }}.conf required if --crush-hierarchy is used If the CephPools parameter is defined with a set of pool overrides, you must add rule_name: replicated_rule to the definition to avoid pool creation failures during an upgrade from RHOSP 16.2 to 17.1. BZ#2245602 - Upgrade (OSP16.2 ->OSP17.1) controller-0 does not perform leapp upgrade due to packages missing ovn2.15 openvswitch2.15 If you upgrade from Red Hat OpenStack Platform (RHOSP) 13 to 16.1 or 16.2, or from RHOSP 16.2 to 17.1, do not include the system_upgrade.yaml file in the --answers-file answer-upgrade.yaml file. If the system_upgrade.yaml file is included in that file, the environments/lifecycle/upgrade-prepare.yaml file overwrites the parameters in the system_upgrade.yaml file. To avoid this issue, append the system_upgrade.yaml file to the openstack overcloud upgrade prepare command. For example: With this workaround, the parameters that are configured in the system_upgrade.yaml file overwrite the default parameters in the environments/lifecycle/upgrade-prepare.yaml file. BZ#2246409 - (OSP16.2->17.1) Cinder volume NFS mounts on compute nodes are preventing leapp upgrade During an upgrade from RHOSP 16.2 to 17.1, the operating system upgrade from RHEL 8.4 to RHEL 9.2 fails if Cinder volume NFS mounts are present on Compute nodes. Contact your Red Hat support representative for a workaround. BZ#2277756 - rolling update fails unless mon_mds_skip_sanity=true is set During an upgrade from Red Hat Ceph Storage 4 to 5, a known issue prevents Ceph Monitor nodes from being upgraded. After the first Ceph Monitor node is upgraded to version 5, the other Ceph Monitor nodes stop running and report the following message: Before you upgrade your Red Hat Ceph Storage nodes, apply the workaround in the Red Hat Knowledgebase solution RHCS during upgrade RHCS 4 RHCS 5 ceph-mon is failing with "FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)" . After the upgrade is complete, the Red Hat Ceph Storage cluster is adopted by cephadm , which does not require this workaround. 
BZ#2259891 - During upgrade 16.2-17.1 with not internet on UC overcloud_upgrade_prepare.sh fails pulling registry.access.redhat.com/ubi8/pause In environments where the undercloud is not connected to the internet, an upgrade from Red Hat OpenStack Platform 16.2 to 17.1 fails because the infra_image value is not defined. The overcloud_upgrade_prepare.sh script tries to pull registry.access.redhat.com/ubi8/pause , which causes an error. To avoid this issue, manually add a pause container to your Satellite server: Import a pause container to your Satellite server, for example, k8s.gcr.io/pause:3.5 or registry.access.redhat.com/ubi8/pause . In the /usr/share/containers/containers.conf file, specify the pause container in your local Satellite URL. For example: Replace <LOCAL_SATELLITE_URL/pause:3.5> with your local Satellite URL and the pause container that you imported. Confirm that you can start a pod: BZ#2264543 - (FFU 16.2->17) leapp upgrade of ceph nodes is failing encrypted partition detected When you upgrade from Red Hat OpenStack Platform (RHOSP) 16.2 to RHOSP 17.1, the Leapp upgrade of the Red Hat Ceph Storage nodes fails because of an encrypted ceph-osd . Before you run the Leapp upgrade on your Red Hat Ceph Storage nodes, apply the workaround in the Red Hat Knowledgebase solution (FFU 16.2->17) leapp upgrade of ceph nodes is failing encrypted partition detected . BZ#2275097 - bridge_name is not translated to br-ex in osp 17.1 (ffu from 16.2) The bridge_name variable is no longer valid for nic-config templates in RHOSP 17.1. After an upgrade from RHOSP 16.2 to 17.1, if you run a stack update and the nic-config templates still include the bridge_name variable, an outage occurs. Before you upgrade to RHOSP 17.1, you need to rename the bridge_name variable. For more information, see the Red Hat Knowledgebase solution bridge_name is still present in templates during and post FFU causing further updates failure . BZ#2269009 - After cephadm adoption, haproxy fails to start when alertmanager is deployed If you deployed Alertmanager in a director-deployed Red Hat Ceph Storage environment, the upgrade from Red Hat Ceph Storage version 4 to version 5 fails. The failure occurs because HAProxy does not restart after you run the following command to configure cephadm on the Red Hat Ceph Storage nodes: After you run the command, the Red Hat Ceph Storage cluster status is HEALTH_WARN . For a workaround for this issue, see the Red Hat Knowledgebase solution HAProxy does not restart during RHOSP upgrade when RHCS is director-deployed and Alertmanager is enabled . BZ#2278835 - RHCS 6 - BLUESTORE_NO_PER_POOL_OMAP OSD(s) reporting legacy (not per-pool) BlueStore omap usage stats You might see a health warning message similar to the following after upgrading from Red Hat Ceph Storage 5 to 6: You can clear this health warning message by following the instructions in the Red Hat Knowledgebase solution link: RHCS 6 - BLUESTORE_NO_PER_POOL_OMAP OSD(s) reporting legacy (not per-pool) BlueStore omap usage stats . BZ#2225011 - (OSP 16.2 17.1) Undercloud upgrade fails on 'migrate existing introspection data' with lost connection to mysql. If the undercloud upgrade fails, you must restart the mySQL service before you run the undercloud upgrade again. For more information about restarting the mySQL service, see the Red Hat Knowledgebase solution Update from 16.2 to 17.1 failed on migrate existing introspection data in the undercloud . 
BZ#2269564 - (FFU) Upgrades takes longer time when cloud consist of 350+ nodes The time you will need to upgrade from Red Hat OpenStack Platform 16.2 to 17.1 increases with the number of nodes in a single role. To reduce the amount of time it takes to complete the upgrade, you can split your nodes into multiple roles. For more information, see the Red Hat Knowledgebase article How to split roles during upgrade from RHOSP 16.2 to RHOSP 17.1 . BZ#1947415 - (RHOSP16.1): Unable to delete DEFAULT volume type Starting with RHOSP 16.1.7, deleting the DEFAULT volume type is allowed. However, the DEFAULT volume type is hard coded in the cinder.conf file, and therefore it must exist during a fast forward upgrade. If you deleted the DEFAULT volume type, do not perform an upgrade from RHOSP 16.2 to RHOSP 17.1 until after you perform the workaround described in the Red Hat Knowledgebase solution Performing online database updates failed . BZ#2305981 - OSP16.2 to OSP17.1 upgrade breaks GRUB and makes it try to boot RHEL7 When you upgrade from RHOSP 16.2 to 17.1, during the system upgrade, a known issue causes GRUB to contain RHEL 7 entries instead of RHEL 8 entries. As a result, the hosts cannot reboot. This issue affects environments that previously ran RHOSP 13.0 or earlier. Workaround: See the Red Hat Knowledgebase solution Openstack 16 to 17 FFU - During LEAPP upgrade UEFI systems do not boot due to invalid /boot/grub2/grub.cfg . BZ#2278831 - No disk space check causing unbootable node during leapp upgrade The Leapp version that upgrades Red Hat Enterprise Linux 8.4 to 9.2 does not verify whether all partitions have enough disk space. Before you perform the Red Hat OpenStack Platform system upgrade, you must manually check that all partitions have at least 3 GB of disk space. Failure to do so can cause the node to reboot and enter into an emergency shell. BZ#2259795 - Incorrect validation of Podman version If you perform an upgrade of your RHOSP environment to 17.1.x, the pre-upgrade package_version validation fails because the validation cannot find a matching podman version. Workaround: To skip the package_version validation, use the --skiplist package-version option when you run the pre-upgrade validation: BZ#2295414 -Horizon dashboard internal server error Static file compression does not run automatically after an upgrade from Red Hat OpenStack Platform (RHOSP) 16.2 to 17.1. As a result, the missing static files cause the Red Hat OpenStack Platform (RHOSP) dashboard (horizon) to fail. To run the compression manually after the upgrade, see Compressing Red Hat OpenStack Platform dashboard files . BZ#2295407 - Ceph gets the eus repos enabled If you are using director-deployed Red Hat Ceph Storage 5 nodes, during an upgrade from RHOSP 16.2 to 17.1, the EUS repositories that are specified in the UpgradeInitCommand parameter override the repositories in the Red Hat Ceph Storage role. Workaround : To use the repositories listed in your Red Hat Ceph Storage nodes, add the following parameters: In the upgrades-environment.yaml file, add the CephStorageUpgradeInitCommand : In the system_upgrade.yaml file, add the CephStorageUpgradeLeappCommandOptions and CephStorageLeappInitCommand parameters: BZ#2264174 (OSP17.1) Undercloud upgrade hangs during upgrade 16.2 to 17.1 (DB sync) In RHOSP 17.1, you can use the net_config_override variable in the undercloud.conf file to identify an alternate network configuration file. 
In that alternate file, you must include the IP addresses that are used for the Virtual IPs (VIPs). If the IP addresses are not present in that file, when you run openstack undercloud install , the DB sync hangs. For example: BZ#2320138 (FFU) 16.2->17.1 upgrade fails if extra openvswitch packages are installed An upgrade from Red Hat OpenStack Platform (RHOSP) 16.2 to 17.1 fails if additional, non-core openvswitch packages are installed. Workaround: See the Red Hat Knowledgebase solution FFU is failing on Ansible task special treatment for OpenvSwitch . BZ#2224967 RHCS 6.1 Ceph Dashboard Graphs show "No Data" for the Object Gateway After completing an upgrade from 16.2 to 17.1 and Red Hat Ceph Storage 7, no data (N/A) is displayed in the Overall performance section of the OSD dashboard. Workaround: See the Red Hat Knowledgebase solution Ceph dashboard is not showing performance metrics graphs on RHCS 6 . BZ#2346107 grafana.update.checker trying to reach Internet in a disconnected environment After an upgrade from Red Hat Ceph Storage 6 to 7, if you have a disconnected Red Hat OpenStack environment, Grafana attempts to access the internet to download updates. As a result, Grafana times out. Workaround: See BZ#2346107 . FFU upgrade from 16.2.6 to 17.1.4 - openstack leapp upgrade of compute with NVIDIA GPU card failed If you attempt to perform a Leapp OS upgrade with NVIDIA drivers, the system upgrade fails with the following error in /var/log/leapp/leapp-report.txt : Workaround: Remove the NVIDIA driver. For example: Remove the loaded module kernels: Upgrade the Compute node: After the server reboot, re-install the NVIDIA drivers for the appropriate operating system (RHEL 9.2). If necessary, re-create the mdev devices. 1.5.8. Backup and restore Before you upgrade your Red Hat OpenStack Platform (RHOSP) 16.2 environment, back up the undercloud and overcloud control plane by using one of the following options: Back up your nodes before you perform an upgrade. For more information about backing up nodes before you upgrade, see Red Hat OpenStack Platform 16.2 Backing up and restoring the undercloud and control plane nodes . Back up the undercloud node after you perform the undercloud upgrade and before you perform the overcloud upgrade. For more information about backing up the undercloud, see Creating a backup of the undercloud node in the Red Hat OpenStack Platform 17.1 Backing up and restoring the undercloud and control plane nodes . Use a third-party backup and recovery tool that suits your environment. For more information about certified backup and recovery tools, see the Red Hat Ecosystem catalog . 1.5.9. Proxy configuration If you use a proxy with your Red Hat OpenStack Platform 16.2 environment, the proxy configuration in the /etc/environment file will persist past the operating system upgrade and the Red Hat OpenStack Platform 17.1 upgrade. For more information about proxy configuration for Red Hat OpenStack Platform 16.2, see Considerations when running the undercloud with a proxy in Installing and managing Red Hat OpenStack Platform with director . For more information about proxy configuration for Red Hat OpenStack Platform 17.1, see Considerations when running the undercloud with a proxy in Installing and managing Red Hat OpenStack Platform with director . 1.5.10. 
Planning for a Compute node upgrade After you upgrade your Compute nodes from Red Hat OpenStack Platform (RHOSP) 16.2 to RHOSP 17.1, you can choose one of the following options to upgrade the Compute host operating system: Keep a portion of your Compute nodes on Red Hat Enterprise Linux (RHEL) 8.4, and upgrade the rest to RHEL 9.2. This is referred to as a Multi-RHEL environment. Upgrade all Compute nodes to RHEL 9.2, and complete the upgrade of the environment. Keep all Compute nodes on RHEL 8.4. The lifecycle of RHEL 8.4 applies. Benefits of a Multi-RHEL environment You must upgrade all of your Compute nodes to RHEL 9.2 to take advantage of any hardware-related features that are only supported in RHOSP 17.1, such as vTPM and Secure Boot. However, you might require that some or all of your Compute nodes remain on RHEL 8.4. For example, if you certified an application for RHEL 8, you can keep your Compute nodes running on RHEL 8.4 to support the application without blocking the entire upgrade. The option to upgrade part of your Compute nodes to RHEL 9.2 gives you more control over your upgrade process. You can prioritize upgrading the RHOSP environment within a planned maintenance window and defer the operating system upgrade to another time. Less downtime is required, which minimizes the impact to end users. Note If you plan to upgrade from RHOSP 17.1 to RHOSP 18.0, you must upgrade all hosts to RHEL 9.2. If you continue to run RHEL 8.4 on your hosts beyond the Extended Life Cycle Support phase, you must obtain a TUS subscription. Limitations of a Multi-RHEL environment The following limitations apply in a Multi-RHEL environment: Compute nodes running RHEL 8 cannot consume NVMe-over-TCP Cinder volumes. You cannot use different paths for socket files on RHOSP 16.2 and 17.1 for collectd monitoring. You cannot mix ML2/OVN and ML2/OVS mechanism drivers. For example, if your RHOSP 16.2 deployment included ML2/OVN, your Multi-RHEL environment must use ML2/OVN. FIPS is not supported in a Multi-RHEL environment. FIPs deployment is a Day 1 operation. FIPS is not supported in RHOSP 16.2. As a result, when you upgrade from RHOSP 16.2 to 17.1, the 17.1 environment does not include FIPS. Instance HA is not supported in a Multi-RHEL environment. Edge topologies are currently not supported. Important All HCI nodes in supported Hyperconverged Infrastructure environments must use the same version of Red Hat Enterprise Linux as the version used by the Red Hat OpenStack Platform controllers. If you wish to use multiple Red Hat Enterprise versions in a hybrid state on HCI nodes in the same Hyperconverged Infrastructure environment, contact the Red Hat Customer Experience and Engagement team to discuss a support exception. Upgrading Compute nodes Use one of the following options to upgrade your Compute nodes: To perform a Multi-RHEL upgrade of your Compute nodes, see Upgrading Compute nodes to a Multi-RHEL environment . To upgrade all Compute nodes to RHEL 9.2, see Upgrading Compute nodes to RHEL 9.2 . If you are keeping all of your Compute nodes on RHEL 8.4, no additional configuration is required. 1.6. Repositories This section contains the repositories for the undercloud and overcloud. Refer to this section when you need to enable repositories in certain situations: Enabling repositories when registering to the Red Hat Customer Portal. Enabling and synchronizing repositories to your Red Hat Satellite Server. Enabling repositories when registering to your Red Hat Satellite Server. 1.6.1. 
Undercloud repositories You run Red Hat OpenStack Platform (RHOSP) 17.1 on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 8.4 Compute nodes are also supported in a Multi-RHEL environment when upgrading from RHOSP 16.2. Note If you synchronize repositories with Red Hat Satellite, you can enable specific versions of the Red Hat Enterprise Linux repositories. However, the repository remains the same despite the version you choose. For example, you can enable the 9.2 version of the BaseOS repository, but the repository name is still rhel-9-for-x86_64-baseos-eus-rpms despite the specific version you choose. Warning Any repositories except the ones specified here are not supported. Unless recommended, do not enable any other products or repositories except the ones listed in the following tables or else you might encounter package dependency issues. Do not enable Extra Packages for Enterprise Linux (EPEL). Core repositories The following table lists core repositories for installing the undercloud. Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-eus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-highavailability-eus-rpms High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability. Red Hat OpenStack Platform for RHEL 9 (RPMs) openstack-17.1-for-rhel-9-x86_64-rpms Core Red Hat OpenStack Platform repository, which contains packages for Red Hat OpenStack Platform director. Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Red Hat Enterprise Linux 8.4 for x86_64 - BaseOS (RPMs) Telecommunications Update Service (TUS) rhel-8-for-x86_64-baseos-tus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 8.4 for x86_64 - AppStream (RPMs) rhel-8-for-x86_64-appstream-tus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat OpenStack Platform for RHEL 8 x86_64 (RPMs) openstack-17.1-for-rhel-8-x86_64-rpms Core Red Hat OpenStack Platform repository. 1.6.2. Overcloud repositories You run Red Hat OpenStack Platform (RHOSP) 17.1 on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 8.4 Compute nodes are also supported in a Multi-RHEL environment when upgrading from RHOSP 16.2. Note If you synchronize repositories with Red Hat Satellite, you can enable specific versions of the Red Hat Enterprise Linux repositories. However, the repository remains the same despite the version you choose. For example, you can enable the 9.2 version of the BaseOS repository, but the repository name is still rhel-9-for-x86_64-baseos-eus-rpms despite the specific version you choose. Warning Any repositories except the ones specified here are not supported. Unless recommended, do not enable any other products or repositories except the ones listed in the following tables or else you might encounter package dependency issues. Do not enable Extra Packages for Enterprise Linux (EPEL). Controller node repositories The following table lists core repositories for Controller nodes in the overcloud. 
Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-eus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-highavailability-eus-rpms High availability tools for Red Hat Enterprise Linux. Red Hat OpenStack Platform for RHEL 9 x86_64 (RPMs) openstack-17.1-for-rhel-9-x86_64-rpms Core Red Hat OpenStack Platform repository. Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Red Hat Ceph Storage Tools 6 for RHEL 9 x86_64 (RPMs) rhceph-6-tools-for-rhel-9-x86_64-rpms Tools for Red Hat Ceph Storage 6 for Red Hat Enterprise Linux 9. Red Hat Enterprise Linux 8.4 for x86_64 - BaseOS (RPMs) Telecommunications Update Service (TUS) rhel-8-for-x86_64-baseos-tus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 8.4 for x86_64 - AppStream (RPMs) rhel-8-for-x86_64-appstream-tus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs) Telecommunications Update Service (TUS) rhel-8-for-x86_64-highavailability-tus-rpms High availability tools for Red Hat Enterprise Linux. Red Hat OpenStack Platform for RHEL 8 x86_64 (RPMs) openstack-17.1-for-rhel-8-x86_64-rpms Core Red Hat OpenStack Platform repository. Compute and ComputeHCI node repositories The following table lists core repositories for Compute and ComputeHCI nodes in the overcloud. Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-eus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-highavailability-eus-rpms High availability tools for Red Hat Enterprise Linux. Red Hat OpenStack Platform for RHEL 9 x86_64 (RPMs) openstack-17.1-for-rhel-9-x86_64-rpms Core Red Hat OpenStack Platform repository. Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Red Hat Ceph Storage Tools 6 for RHEL 9 x86_64 (RPMs) rhceph-6-tools-for-rhel-9-x86_64-rpms Tools for Red Hat Ceph Storage 6 for Red Hat Enterprise Linux 9. Red Hat Enterprise Linux 8.4 for x86_64 - BaseOS (RPMs) Telecommunications Update Service (TUS) rhel-8-for-x86_64-baseos-tus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 8.4 for x86_64 - AppStream (RPMs) rhel-8-for-x86_64-appstream-tus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat OpenStack Platform for RHEL 8 x86_64 (RPMs) openstack-17.1-for-rhel-8-x86_64-rpms Core Red Hat OpenStack Platform repository. Ceph Storage node repositories The following table lists Ceph Storage related repositories for the overcloud. Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) rhel-9-for-x86_64-baseos-rpms Base operating system repository for x86_64 systems. 
Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat OpenStack Platform Deployment Tools for RHEL 9 x86_64 (RPMs) openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms Packages to help director configure Ceph Storage nodes. This repository is included with standalone Ceph Storage subscriptions. If you use a combined OpenStack Platform and Ceph Storage subscription, use the openstack-17.1-for-rhel-9-x86_64-rpms repository. Red Hat OpenStack Platform for RHEL 9 x86_64 (RPMs) openstack-17.1-for-rhel-9-x86_64-rpms Packages to help director configure Ceph Storage nodes. This repository is included with combined Red Hat OpenStack Platform and Red Hat Ceph Storage subscriptions. If you use a standalone Red Hat Ceph Storage subscription, use the openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms repository. Red Hat Ceph Storage Tools 6 for RHEL 9 x86_64 (RPMs) rhceph-6-tools-for-rhel-9-x86_64-rpms Provides tools for nodes to communicate with the Ceph Storage cluster. Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. If you are using OVS on Ceph Storage nodes, add this repository to the network interface configuration (NIC) templates. 1.6.3. Red Hat Satellite Server 6 considerations If you use Red Hat Satellite Server 6 to host RPMs and container images for your Red Hat OpenStack Platform (RHOSP) environment and you plan to use Satellite 6 to deliver content during the RHOSP 17.1 upgrade, the following must be true: Your Satellite Server hosts RHOSP 16.2 RPMs and container images. Note If the Red Hat Ceph Storage container image is hosted on a Satellite Server, then you must download a copy of the image to the undercloud before starting the Red Hat Ceph Storage upgrade. To copy this image, see Downloading Red Hat Ceph Storage containers to the undercloud from Satellite . You have registered all nodes in your RHOSP 16.2 environment to your Satellite Server. For example, you used an activation key linked to a RHOSP 16.2 content view to register nodes to RHOSP 16.2 content. Note If you are using an isolated environment where the undercloud does not have access to the internet, a known issue causes an upgrade from Red Hat OpenStack Platform 16.2 to 17.1 to fail. For a workaround, see the known issue for BZ2259891 in Known issues that might block an upgrade . Recommendations for RHOSP upgrades Enable and synchronize the necessary RPM repositories for both the RHOSP 16.2 undercloud and overcloud. This includes the necessary Red Hat Enterprise Linux (RHEL) 9.2 repositories. Create custom products on your Satellite Server to host container images for RHOSP 17.1. Create and promote a content view for RHOSP 17.1 upgrade and include the following content in the content view: RHEL 8 repositories: Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs) Red Hat Fast Datapath for RHEL 8 (RPMs) RHEL 9 repositories: Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Red Hat Satellite Client 6 for RHEL 9 x86_64 (RPMs) All undercloud and overcloud RPM repositories, including RHEL 9.2 repositories. To avoid issues enabling the RHEL repositories, ensure that you include the correct version of the RHEL repositories, which is 9.2. RHOSP 17.1 container images. 
Associate an activation key with the RHOSP 17.1 content view that you have created for the RHOSP 17.1 upgrade. Check that no node has the katello-host-tools-fact-plugin package installed. The Leapp upgrade does not upgrade this package. Leaving this package on a RHEL 9.2 system causes subscription-manager to report errors. You can configure Satellite Server to host RHOSP 17.1 container images. To upgrade from RHOSP 16.2 to 17.1, you need the following container images: Container images that are hosted on the rhosp-rhel8 namespace: rhosp-rhel8/openstack-collectd rhosp-rhel8/openstack-nova-libvirt Container images that are hosted on the rhosp-rhel9 namespace. For information about configuring the rhosp-rhel9 namespace container images, see Preparing a Satellite server for container images in Installing and managing Red Hat OpenStack Platform with director . If you use a Red Hat Ceph Storage subscription and have configured director to use the overcloud-minimal image for Red Hat Ceph Storage nodes, on your Satellite Server you must create a content view and add the following RHEL 9.2 repositories to it: Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) For more information, see Importing Content and Managing Content Views in the Red Hat Satellite Managing Content guide.
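Returning to the katello-host-tools-fact-plugin check mentioned above, a quick per-node verification can save troubleshooting later. The following is a minimal sketch to run on each undercloud and overcloud node; automating it across nodes (for example, with an Ansible inventory) is left to your environment:

# Report whether the package is installed; rpm -q exits non-zero when it is absent
rpm -q katello-host-tools-fact-plugin
# If the package is present, remove it before running the Leapp upgrade
sudo dnf remove -y katello-host-tools-fact-plugin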
[ "sudo podman exec -it collectd setfacl -R -m u:collectd:rwx /run/podman", "openstack overcloud upgrade prepare --answers-file answer-upgrade.yaml / -r roles-data.yaml / -n networking-data.yaml / -e system_upgrade.yaml / -e upgrade_environment.yaml /", "\"FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)\"", "infra_image=\"<LOCAL_SATELLITE_URL/pause:3.5>\"", "podman pod create", "openstack overcloud external-upgrade run --skip-tags ceph_ansible_remote_tmp --stack <stack> --tags cephadm_adopt 2>&1", "[WRN] BLUESTORE_NO_PER_POOL_OMAP", "validation run -i inventory.yaml --group pre-upgrade --skiplist package-version", "parameter_defaults: UpgradeInitCommand: | sudo subscription-manager repos --disable=* CephStorageUpgradeInitCommand: | sudo subscription-manager repos --disable=* if USD( grep -q 9.2 /etc/os-release ) then sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms --enable=openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms sudo subscription-manager release --set=9.2 else sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-tus-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-highavailability-tus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms sudo subscription-manager release --set=8.4 fi if USD(sudo podman ps | grep -q ceph ) then sudo dnf -y install cephadm fi", "LeappRepoInitCommand: | subscription-manager repos --disable=* CephStorageUpgradeLeappCommandOptions: \"--enablerepo=rhel-9-for-x86_64-baseos-rpms --enablerepo=rhel-9-for-x86_64-appstream-rpms --enablerepo=openstack-17.1-for-rhel-9-x86_64-rpms --enablerepo=fast-datapath-for-rhel-9-x86_64-rpms CephStorageLeappInitCommand: | subscription-manager repos --disable=* subscription-manager release --unset subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms leapp answer --add --section check_vdo.confirm=True leapp answer --add --section check_vdo.no_vdo_devices=True", "network_config: name: br-ctlplane type: ovs_bridge use_dhcp: false dns_servers: 172.16.0.1 10.0.0.1 domain: lab.example.com ovs_extra: \"br-set-external-id br-ctlplane bridge-id br-ctlplane\" addresses: ip_netmask: 192.168.24.1/24 ip_netmask: 192.168.24.2/32 <---------------------------| ip_netmask: 192.168.24.3/32 <---------------------------| members: type: interface name: eth0", "Summary: Leapp has detected that the NVIDIA proprietary driver has been loaded, which also means the nouveau driver is blacklisted. If you upgrade now, you will end up without a graphical session, as the newer kernel won't be able to load the NVIDIA driver module and nouveau will still be blacklisted. 
Please uninstall the NVIDIA graphics driver before upgrading to make sure you have a graphical session after upgrading.", "sudo dnf remove -y NVIDIA-vGPU-rhel-8.4-525.105.14.x86_64", "rmmod nvidia_vgpu_vfio rmmod nvidia", "openstack overcloud upgrade run --tag system_upgrade --limit <compute-0>", "rhel-8-for-x86_64-appstream-tus-rpms", "rhel-8-for-x86_64-baseos-tus-rpms", "rhel-8-for-x86_64-highavailability-tus-rpms", "fast-datapath-for-rhel-8-x86_64-rpms", "rhel-9-for-x86_64-appstream-eus-rpms", "rhel-9-for-x86_64-baseos-eus-rpms", "satellite-client-6-for-rhel-9-x86_64-rpms", "rhel-9-for-x86_64-appstream-eus-rpms", "rhel-9-for-x86_64-baseos-eus-rpms" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/framework_for_upgrades_16.2_to_17.1/assembly_about-the-red-hat-openstack-platform-framework-for-upgrades_about-upgrades
Chapter 2. Features
Chapter 2. Features Streams for Apache Kafka 2.7 introduces the features described in this section. Streams for Apache Kafka 2.7 on RHEL is based on Apache Kafka 3.7.0. Note To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project. 2.1. Kafka 3.7.0 support Streams for Apache Kafka now supports and uses Apache Kafka version 3.7.0. Only Kafka distributions built by Red Hat are supported. For upgrade instructions, see the instructions for Streams for Apache Kafka and Kafka upgrades in the following guides: Using Streams for Apache Kafka on RHEL in KRaft mode Using Streams for Apache Kafka on RHEL with ZooKeeper Refer to the Kafka 3.7.0 Release Notes for additional information. Kafka 3.6.x is supported only for the purpose of upgrading to Streams for Apache Kafka 2.7. We recommend that you perform a rolling update to use the new binaries. Note Kafka 3.7.0 provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol. 2.2. KRaft: Support for migrating from ZooKeeper-based to KRaft-based Kafka clusters KRaft mode in Streams for Apache Kafka is a technology preview, with some limitations, but this release introduces a number of new features that support KRaft. To support using KRaft, a new guide is available: Using Streams for Apache Kafka on RHEL in KRaft mode. If you are using ZooKeeper for metadata management in your Kafka cluster, you can now migrate to using Kafka in KRaft mode. During the migration, you do the following: Install a quorum of controller nodes, which replaces ZooKeeper for management of your cluster. Enable KRaft migration in the controller configuration by setting the zookeeper.metadata.migration.enable flag to true. Enable KRaft migration in the brokers by setting the zookeeper.metadata.migration.enable flag to true. Switch the brokers to using KRaft by adding a broker KRaft role and node ID. Switch the controllers out of migration mode by removing the zookeeper.metadata.migration.enable property. See Migrating to KRaft mode. A minimal controller configuration sketch for the migration is included at the end of this chapter. 2.3. KRaft: Kafka upgrades for the KRaft-based clusters KRaft to KRaft upgrades are now supported. You update the installation files, then configure and restart all Kafka nodes. You then upgrade the KRaft-based Kafka cluster to a newer supported KRaft metadata version. Updating the KRaft metadata version ./bin/kafka-features.sh --bootstrap-server <broker_host>:<port> upgrade --metadata 3.7 See Upgrading KRaft-based Kafka clusters. 2.4. RHEL 7 no longer supported RHEL 7 is no longer supported. The decision was made due to known incompatibility issues.
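The migration flags described in Section 2.2 are ordinary Kafka configuration properties. The following is a minimal, incomplete sketch of what a KRaft controller's properties file might contain during the migration window; the node ID, host names, and listener name are hypothetical, and a real migration requires additional settings (for example, security settings for connecting to the brokers), so treat this only as an illustration of where the zookeeper.metadata.migration.enable flag is set:

# Controller role and identity (example values)
process.roles=controller
node.id=3000
controller.quorum.voters=3000@controller0.example.com:9093
listeners=CONTROLLER://controller0.example.com:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT

# Enable the ZooKeeper-to-KRaft migration and point at the existing ZooKeeper ensemble
zookeeper.metadata.migration.enable=true
zookeeper.connect=zookeeper0.example.com:2181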
[ "./bin/kafka-features.sh --bootstrap-server <broker_host>:<port> upgrade --metadata 3.7" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/release_notes_for_streams_for_apache_kafka_2.7_on_rhel/features-str
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/snip-conscious-language_integrating
Chapter 16. Backups and Migration
Chapter 16. Backups and Migration 16.1. Backing Up and Restoring the Red Hat Virtualization Manager 16.1.1. Backing up Red Hat Virtualization Manager - Overview Use the engine-backup tool to take regular backups of the Red Hat Virtualization Manager. The tool backs up the engine database and configuration files into a single file and can be run without interrupting the ovirt-engine service. 16.1.2. Syntax for the engine-backup Command The engine-backup command works in one of two basic modes: These two modes are further extended by a set of parameters that allow you to specify the scope of the backup and different credentials for the engine database. Run engine-backup --help for a full list of parameters and their function. Basic Options --mode Specifies whether the command will perform a backup operation or a restore operation. Two options are available - backup , and restore . This is a required parameter. --file Specifies the path and name of a file into which backups are to be taken in backup mode, and the path and name of a file from which to read backup data in restore mode. This is a required parameter in both backup mode and restore mode. --log Specifies the path and name of a file into which logs of the backup or restore operation are to be written. This parameter is required in both backup mode and restore mode. --scope Specifies the scope of the backup or restore operation. There are four options: all , which backs up or restores all databases and configuration data; files , which backs up or restores only files on the system; db , which backs up or restores only the Manager database; and dwhdb , which backs up or restores only the Data Warehouse database. The default scope is all . The --scope parameter can be specified multiple times in the same engine-backup command. Manager Database Options The following options are only available when using the engine-backup command in restore mode. The option syntax below applies to restoring the Manager database. The same options exist for restoring the Data Warehouse database. See engine-backup --help for the Data Warehouse option syntax. --provision-db Creates a PostgreSQL database for the Manager database backup to be restored to. This is a required parameter when restoring a backup on a remote host or fresh installation that does not have a PostgreSQL database already configured. --change-db-credentials Allows you to specify alternate credentials for restoring the Manager database using credentials other than those stored in the backup itself. See engine-backup --help for the additional parameters required by this parameter. --restore-permissions or --no-restore-permissions Restores (or does not restore) the permissions of database users. One of these parameters is required when restoring a backup. Note If a backup contains grants for extra database users, restoring the backup with the --restore-permissions and --provision-db (or --provision-dwh-db ) options will create the extra users with random passwords. You must change these passwords manually if the extra users require access to the restored system. See https://access.redhat.com/articles/2686731 . 16.1.3. Creating a Backup with the engine-backup Command The Red Hat Virtualization Manager can be backed up using the engine-backup command while the Manager is active. 
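Because engine-backup can run while the ovirt-engine service is active, it is well suited to scheduled backups. The following is a minimal sketch of a root crontab entry; the /var/backups/engine directory is a hypothetical location that you must create beforehand, and the /usr/bin/engine-backup path assumes the default package layout:

# Take a full backup every night at 02:00; % must be escaped as \% inside a crontab entry
0 2 * * * /usr/bin/engine-backup --mode=backup --scope=all --file=/var/backups/engine/engine-backup-$(date +\%F).tar.bz2 --log=/var/backups/engine/engine-backup-$(date +\%F).log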
Append one of the following options to --scope to specify which backup to perform: all : A full backup of all databases and configuration files on the Manager files : A backup of only the files on the system db : A backup of only the Manager database dwhdb : A backup of only the Data Warehouse database Important To restore a database to a fresh installation of Red Hat Virtualization Manager, a database backup alone is not sufficient; the Manager also requires access to the configuration files. Any backup that specifies a scope other than the default, all , must be accompanied by the files scope, or a filesystem backup. Example Usage of the engine-backup Command Log on to the machine running the Red Hat Virtualization Manager. Create a backup: Example 16.1. Creating a Full Backup Example 16.2. Creating a Manager Database Backup Replace the db option with dwhdb to back up the Data Warehouse database. A tar file containing a backup is created using the path and file name provided. The tar files containing the backups can now be used to restore the environment. 16.1.4. Restoring a Backup with the engine-backup Command Restoring a backup using the engine-backup command involves more steps than creating a backup does, depending on the restoration destination. For example, the engine-backup command can be used to restore backups to fresh installations of Red Hat Virtualization, on top of existing installations of Red Hat Virtualization, and using local or remote databases. Important Backups can only be restored to environments of the same major release as that of the backup. For example, a backup of a Red Hat Virtualization version 4.2 environment can only be restored to another Red Hat Virtualization version 4.2 environment. To view the version of Red Hat Virtualization contained in a backup file, unpack the backup file and read the value in the version file located in the root directory of the unpacked files. 16.1.5. Restoring a Backup to a Fresh Installation The engine-backup command can be used to restore a backup to a fresh installation of the Red Hat Virtualization Manager. The following procedure must be performed on a machine on which the base operating system has been installed and the required packages for the Red Hat Virtualization Manager have been installed, but the engine-setup command has not yet been run. This procedure assumes that the backup file or files can be accessed from the machine on which the backup is to be restored. Restoring a Backup to a Fresh Installation Log on to the Manager machine. If you are restoring the engine database to a remote host, you will need to log on to and perform the relevant actions on that host. Likewise, if also restoring the Data Warehouse to a remote host, you will need to log on to and perform the relevant actions on that host. Restore a complete backup or a database-only backup. Restore a complete backup: If Data Warehouse is also being restored as part of the complete backup, provision the additional database: Restore a database-only backup by restoring the configuration files and database backup: The example above restores a backup of the Manager database. The example above restores a backup of the Data Warehouse database. If successful, the following output displays: Run the following command and follow the prompts to configure the restored Manager: The Red Hat Virtualization Manager has been restored to the version preserved in the backup. 
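If you are unsure which version a given backup file contains, you can inspect it before restoring, as described in the Important note above. A minimal sketch, using hypothetical file and directory names:

# Unpack the backup into a scratch directory and read the version file from its root
mkdir /tmp/backup-inspect
tar -xf file_name -C /tmp/backup-inspect
cat /tmp/backup-inspect/version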
To change the fully qualified domain name of the new Red Hat Virtualization system, see Section 22.1.1, "The oVirt Engine Rename Tool" . 16.1.6. Restoring a Backup to Overwrite an Existing Installation The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up. This is useful when you have taken a backup of an environment, performed changes on that environment, and then want to undo the changes by restoring the environment from the backup. Changes made to the environment since the backup was taken, such as adding or removing a host, will not appear in the restored environment. You must redo these changes. Procedure Log in to the Manager machine. Remove the configuration files and clean the database associated with the Manager: The engine-cleanup command only cleans the Manager database; it does not drop the database or delete the user that owns that database. Restore a full backup or a database-only backup. You do not need to create a new database or specify the database credentials because the user and database already exist. Restore a full backup: Restore a database-only backup by restoring the configuration files and the database backup: Note To restore only the Manager database (for example, if the Data Warehouse database is located on another machine), you can omit the --scope=dwhdb parameter. If successful, the following output displays: Reconfigure the Manager: 16.1.7. Restoring a Backup with Different Credentials The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up, but the credentials of the database in the backup are different to those of the database on the machine on which the backup is to be restored. This is useful when you have taken a backup of an installation and want to restore the installation from the backup to a different system. Important When restoring a backup to overwrite an existing installation, you must run the engine-cleanup command to clean up the existing installation before using the engine-backup command. The engine-cleanup command only cleans the engine database, and does not drop the database or delete the user that owns that database. So you do not need to create a new database or specify the database credentials. However, if the credentials for the owner of the engine database are not known, you must change them before you can restore the backup. Restoring a Backup with Different Credentials Log in to the Red Hat Virtualization Manager machine. Run the following command and follow the prompts to remove the Manager's configuration files and to clean the Manager's database: Change the password for the owner of the engine database if the credentials of that user are not known: Enter the postgresql command line: Change the password of the user that owns the engine database: Repeat this for the user that owns the ovirt_engine_history database if necessary. Restore a complete backup or a database-only backup with the --change-db-credentials parameter to pass the credentials of the new database. The database_location for a database local to the Manager is localhost . Note The following examples use a --*password option for each database without specifying a password, which prompts for a password for each database. Alternatively, you can use --*passfile= password_file options for each database to securely pass the passwords to the engine-backup tool without the need for interactive prompts. 
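If you prefer the non-interactive passfile approach, the sketch below shows one way to prepare a password file with restrictive permissions. The --db-passfile option name is an assumption inferred from the --*passfile pattern described in the note above; confirm the exact option names and the expected file format with engine-backup --help before relying on this:

# Create an empty password file that only root can read (hypothetical path)
install -m 600 /dev/null /root/engine-db-passfile
# Add the engine database password to the file in the format documented by engine-backup --help,
# then reference it instead of the interactive prompt, for example:
# engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-passfile=/root/engine-db-passfile --no-restore-permissions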
Restore a complete backup: If Data Warehouse is also being restored as part of the complete backup, include the revised credentials for the additional database: Restore a database-only backup by restoring the configuration files and the database backup: The example above restores a backup of the Manager database. The example above restores a backup of the Data Warehouse database. If successful, the following output displays: Run the following command and follow the prompts to reconfigure the firewall and ensure the ovirt-engine service is correctly configured: 16.1.8. Backing up and Restoring a Self-Hosted Engine You can back up a self-hosted engine and restore it in a new self-hosted environment. Use this procedure for tasks such as migrating the environment to a new self-hosted engine storage domain with a different storage type. When you specify a backup file during deployment, the backup is restored on a new Manager virtual machine, with a new self-hosted engine storage domain. The old Manager is removed, and the old self-hosted engine storage domain is renamed and can be manually removed after you confirm that the new environment is working correctly. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment. The backup and restore operation involves the following key actions: Back up the original Manager using the engine-backup tool. Deploy a new self-hosted engine and restore the backup. Enable the Manager repositories on the new Manager virtual machine. Reinstall the self-hosted engine nodes to update their configuration. Remove the old self-hosted engine storage domain. This procedure assumes that you have access and can make changes to the original Manager. Prerequisites A fully qualified domain name prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS. The new Manager must have the same fully qualified domain name as the original Manager. The original Manager must be updated to the latest minor version. The Manager version in the backup file must match the version of the new Manager. See Updating the Red Hat Virtualization Manager in the Upgrade Guide . There must be at least one regular host in the environment. This host (and any other regular hosts) will remain active to host the SPM role and any running virtual machines. If a regular host is not already the SPM, move the SPM role before creating the backup by selecting a regular host and clicking Management Select as SPM . If no regular hosts are available, there are two ways to add one: Remove the self-hosted engine configuration from a node (but do not remove the node from the environment). See Section 15.7, "Removing a Host from a Self-Hosted Engine Environment" . Add a new regular host. See Section 10.5.1, "Adding Standard Hosts to the Red Hat Virtualization Manager" . 16.1.8.1. Backing up the Original Manager Back up the original Manager using the engine-backup command, and copy the backup file to a separate location so that it can be accessed at any point during the process. For more information about engine-backup --mode=backup options, see Backing Up and Restoring the Red Hat Virtualization Manager in the Administration Guide . 
Procedure Log in to one of the self-hosted engine nodes and move the environment to global maintenance mode: Log in to the original Manager and stop the ovirt-engine service: Note Though stopping the original Manager from running is not obligatory, it is recommended as it ensures no changes are made to the environment after the backup is created. Additionally, it prevents the original Manager and the new Manager from simultaneously managing existing resources. Run the engine-backup command, specifying the name of the backup file to create, and the name of the log file to create to store the backup log: Copy the files to an external server. In the following example, storage.example.com is the fully qualified domain name of a network storage server that will store the backup until it is needed, and /backup/ is any designated folder or path. If you do not require the Manager machine for other purposes, unregister it from Red Hat Subscription Manager: Log in to one of the self-hosted engine nodes and shut down the original Manager virtual machine: After backing up the Manager, deploy a new self-hosted engine and restore the backup on the new virtual machine. 16.1.8.2. Restoring the Backup on a New Self-Hosted Engine Run the hosted-engine script on a new host, and use the --restore-from-file= path/to/file_name option to restore the Manager backup during the deployment. Important If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator's ACL, the deployment may fail with a STORAGE_DOMAIN_UNREACHABLE error. To prevent this, you must update your iSCSI configuration before beginning the self-hosted engine deployment: If you are redeploying on an existing host, you must update the host's iSCSI initiator settings in /etc/iscsi/initiatorname.iscsi . The initiator IQN must be the same as was previously mapped on the iSCSI target, or updated to a new IQN, if applicable. If you are deploying on a fresh host, you must update the iSCSI target configuration to accept connections from that host. Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target). Procedure Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path. Log in to the new host. If you are deploying on Red Hat Virtualization Host, the self-hosted engine deployment tool is available by default. If you are deploying on Red Hat Enterprise Linux, you must install the package: Red Hat recommends using the screen window manager to run the script to avoid losing the session in case of network or terminal disruption. Install and run screen : In the event of session timeout or connection disruption, run screen -d -r to recover the deployment session. Run the hosted-engine script, specifying the path to the backup file: To escape the script at any time, use CTRL + D to abort deployment. Select Yes to begin the deployment. Configure the network. The script detects possible NICs to use as a management bridge for the environment. If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance. Specify the FQDN for the Manager virtual machine. Enter the root password for the Manager. Enter an SSH public key that will allow you to log in to the Manager as the root user, and specify whether to enable SSH access for the root user. 
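Before you proceed with the new deployment, it can be worth confirming that the copy on the external server is intact. A minimal sketch, reusing the hypothetical storage.example.com host and /backup/ path from the example above:

# Checksum the local backup file on the Manager machine
sha256sum file_name
# Checksum the remote copy and compare the two values manually
ssh storage.example.com "sha256sum /backup/file_name"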
Enter the virtual machine's CPU and memory configuration. Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you. Enter the virtual machine's networking details. If you specify Static , enter the IP address of the Manager. Important The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine's IP must be in the same subnet range (10.1.1.1-254/24). Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine's /etc/hosts file. You must ensure that the host names are resolvable. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications: Enter a password for the admin@internal user to access the Administration Portal. The script creates the virtual machine. This can take some time if the RHV-M Appliance needs to be installed. Select the type of storage to use: For NFS, enter the version, full address and path to the storage, and any mount options. Warning Do not use the old self-hosted engine storage domain's mount point for the new storage domain, as you risk losing virtual machine data. For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group. Note To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options. For Gluster storage, enter the full address and path to the storage, and any mount options. Warning Do not use the old self-hosted engine storage domain's mount point for the new storage domain, as you risk losing virtual machine data. Important Only replica 3 Gluster storage is supported. Ensure you have the following configuration: In the /etc/glusterfs/glusterd.vol file on all three Gluster servers, set rpc-auth-allow-insecure to on . Configure the volume as follows: For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide . Enter the Manager disk size. The script continues until the deployment is complete. The deployment process changes the Manager's SSH keys. To allow client machines to access the new Manager without SSH errors, remove the original Manager's entry from the .ssh/known_hosts file on any client machines that accessed the original Manager. When the deployment is complete, log in to the new Manager virtual machine and enable the required repositories. 16.1.8.3. Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. 
Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: The Manager and its resources are now running in the new self-hosted environment. The self-hosted engine nodes must be reinstalled in the Manager to update their self-hosted engine configuration. Standard hosts are not affected. Perform the following procedure for each self-hosted engine node. 16.1.8.4. Reinstalling Hosts Reinstall Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host. Prerequisites If migration is enabled at cluster level, virtual machines are automatically migrated to another host in the cluster; as a result, it is recommended that host reinstalls are performed at a time when the host's usage is relatively low. Ensure that the cluster has sufficient memory reserve in order for its hosts to perform maintenance. If a cluster lacks sufficient memory, the virtual machine migration operation will hang and then fail. You can reduce the memory usage of this operation by shutting down some or all virtual machines before moving the host to maintenance. Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time, as one host must remain available to perform Storage Pool Manager (SPM) tasks. Procedure Click Compute Hosts and select the host. Click Management Maintenance . Click Installation Reinstall to open the Install Host window. Click the Hosted Engine tab and select DEPLOY from the drop-down list. Click OK to reinstall the host. Once successfully reinstalled, the host displays a status of Up . Any virtual machines that were migrated off the host can now be migrated back to it. Important After a Red Hat Virtualization Host is successfully registered to the Red Hat Virtualization Manager and then reinstalled, it may erroneously appear in the Administration Portal with the status of Install Failed . Click Management Activate , and the host will change to an Up status and be ready for use. After reinstalling the self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes: During the restoration, the old self-hosted engine storage domain was renamed, but was not removed from the new environment in case the restoration was faulty. After confirming that the environment is running normally, you can remove the old self-hosted engine storage domain. 16.1.8.5. Removing a Storage Domain You have a storage domain in your data center that you want to remove from the virtualized environment. Procedure Click Storage Domains . Move the storage domain to maintenance mode and detach it: Click the storage domain's name to open the details view. Click the Data Center tab. Click Maintenance , then click OK . Click Detach , then click OK . Click Remove . Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain. Click OK . 
The storage domain is permanently removed from the environment. 16.1.9. Recovering a Self-Hosted Engine from an Existing Backup If a self-hosted engine is unavailable due to problems that cannot be repaired, you can restore it in a new self-hosted environment using a backup taken before the problem began, if one is available. When you specify a backup file during deployment, the backup is restored on a new Manager virtual machine, with a new self-hosted engine storage domain. The old Manager is removed, and the old self-hosted engine storage domain is renamed and can be manually removed after you confirm that the new environment is working correctly. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment. Restoring a self-hosted engine involves the following key actions: Deploy a new self-hosted engine and restore the backup. Enable the Manager repositories on the new Manager virtual machine. Reinstall the self-hosted engine nodes to update their configuration. Remove the old self-hosted engine storage domain. This procedure assumes that you do not have access to the original Manager, and that the new host can access the backup file. Prerequisites A fully qualified domain name prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS. The new Manager must have the same fully qualified domain name as the original Manager. 16.1.9.1. Restoring the Backup on a New Self-Hosted Engine Run the hosted-engine script on a new host, and use the --restore-from-file= path/to/file_name option to restore the Manager backup during the deployment. Important If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator's ACL, the deployment may fail with a STORAGE_DOMAIN_UNREACHABLE error. To prevent this, you must update your iSCSI configuration before beginning the self-hosted engine deployment: If you are redeploying on an existing host, you must update the host's iSCSI initiator settings in /etc/iscsi/initiatorname.iscsi . The initiator IQN must be the same as was previously mapped on the iSCSI target, or updated to a new IQN, if applicable. If you are deploying on a fresh host, you must update the iSCSI target configuration to accept connections from that host. Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target). Procedure Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path. Log in to the new host. If you are deploying on Red Hat Virtualization Host, the self-hosted engine deployment tool is available by default. If you are deploying on Red Hat Enterprise Linux, you must install the package: Red Hat recommends using the screen window manager to run the script to avoid losing the session in case of network or terminal disruption. Install and run screen : In the event of session timeout or connection disruption, run screen -d -r to recover the deployment session. Run the hosted-engine script, specifying the path to the backup file: To escape the script at any time, use CTRL + D to abort deployment. Select Yes to begin the deployment. Configure the network. The script detects possible NICs to use as a management bridge for the environment. 
If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance. Specify the FQDN for the Manager virtual machine. Enter the root password for the Manager. Enter an SSH public key that will allow you to log in to the Manager as the root user, and specify whether to enable SSH access for the root user. Enter the virtual machine's CPU and memory configuration. Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you. Enter the virtual machine's networking details. If you specify Static , enter the IP address of the Manager. Important The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine's IP must be in the same subnet range (10.1.1.1-254/24). Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine's /etc/hosts file. You must ensure that the host names are resolvable. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications: Enter a password for the admin@internal user to access the Administration Portal. The script creates the virtual machine. This can take some time if the RHV-M Appliance needs to be installed. Select the type of storage to use: For NFS, enter the version, full address and path to the storage, and any mount options. Warning Do not use the old self-hosted engine storage domain's mount point for the new storage domain, as you risk losing virtual machine data. For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group. Note To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options. For Gluster storage, enter the full address and path to the storage, and any mount options. Warning Do not use the old self-hosted engine storage domain's mount point for the new storage domain, as you risk losing virtual machine data. Important Only replica 3 Gluster storage is supported. Ensure you have the following configuration: In the /etc/glusterfs/glusterd.vol file on all three Gluster servers, set rpc-auth-allow-insecure to on . Configure the volume as follows: For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide . Enter the Manager disk size. The script continues until the deployment is complete. The deployment process changes the Manager's SSH keys. To allow client machines to access the new Manager without SSH errors, remove the original Manager's entry from the .ssh/known_hosts file on any client machines that accessed the original Manager. 
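A minimal sketch of that known_hosts cleanup, assuming a hypothetical Manager FQDN; run it on each client machine that previously connected to the original Manager:

# Remove the stale SSH host key entry for the Manager's FQDN from this client's known_hosts file
ssh-keygen -R manager.example.com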
When the deployment is complete, log in to the new Manager virtual machine and enable the required repositories. 16.1.9.2. Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: The Manager and its resources are now running in the new self-hosted environment. The self-hosted engine nodes must be reinstalled in the Manager to update their self-hosted engine configuration. Standard hosts are not affected. Perform the following procedure for each self-hosted engine node. 16.1.9.3. Reinstalling Hosts Reinstall Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host. Prerequisites If migration is enabled at cluster level, virtual machines are automatically migrated to another host in the cluster; as a result, it is recommended that host reinstalls are performed at a time when the host's usage is relatively low. Ensure that the cluster has sufficient memory reserve in order for its hosts to perform maintenance. If a cluster lacks sufficient memory, the virtual machine migration operation will hang and then fail. You can reduce the memory usage of this operation by shutting down some or all virtual machines before moving the host to maintenance. Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time, as one host must remain available to perform Storage Pool Manager (SPM) tasks. Procedure Click Compute Hosts and select the host. Click Management Maintenance . Click Installation Reinstall to open the Install Host window. Click the Hosted Engine tab and select DEPLOY from the drop-down list. Click OK to reinstall the host. Once successfully reinstalled, the host displays a status of Up . Any virtual machines that were migrated off the host can now be migrated back to it. Important After a Red Hat Virtualization Host is successfully registered to the Red Hat Virtualization Manager and then reinstalled, it may erroneously appear in the Administration Portal with the status of Install Failed . Click Management Activate , and the host will change to an Up status and be ready for use. After reinstalling the self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes: During the restoration, the old self-hosted engine storage domain was renamed, but was not removed from the new environment in case the restoration was faulty. After confirming that the environment is running normally, you can remove the old self-hosted engine storage domain. 16.1.9.4. Removing a Storage Domain You have a storage domain in your data center that you want to remove from the virtualized environment. Procedure Click Storage Domains . 
Move the storage domain to maintenance mode and detach it: Click the storage domain's name to open the details view. Click the Data Center tab. Click Maintenance , then click OK . Click Detach , then click OK . Click Remove . Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain. Click OK . The storage domain is permanently removed from the environment. 16.1.10. Overwriting a Self-Hosted Engine from an Existing Backup If a self-hosted engine is accessible, but is experiencing an issue such as database corruption, or a configuration error that is difficult to roll back, you can restore the environment to a state using a backup taken before the problem began, if one is available. Restoring a self-hosted engine's state involves the following steps: Place the environment in global maintenance mode. Restore the backup on the Manager virtual machine. Disable global maintenance mode. For more information about engine-backup --mode=restore options, see Section 16.1, "Backing Up and Restoring the Red Hat Virtualization Manager" . 16.1.10.1. Enabling Global Maintenance Mode You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Manager virtual machine. Procedure Log in to one of the self-hosted engine nodes and enable global maintenance mode: Confirm that the environment is in maintenance mode before proceeding: You should see a message indicating that the cluster is in maintenance mode. 16.1.10.2. Restoring a Backup to Overwrite an Existing Installation The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up. This is useful when you have taken a backup of an environment, performed changes on that environment, and then want to undo the changes by restoring the environment from the backup. Changes made to the environment since the backup was taken, such as adding or removing a host, will not appear in the restored environment. You must redo these changes. Procedure Log in to the Manager machine. Remove the configuration files and clean the database associated with the Manager: The engine-cleanup command only cleans the Manager database; it does not drop the database or delete the user that owns that database. Restore a full backup or a database-only backup. You do not need to create a new database or specify the database credentials because the user and database already exist. Restore a full backup: Restore a database-only backup by restoring the configuration files and the database backup: Note To restore only the Manager database (for example, if the Data Warehouse database is located on another machine), you can omit the --scope=dwhdb parameter. If successful, the following output displays: Reconfigure the Manager: 16.1.10.3. Disabling Global Maintenance Mode Procedure Log in to the Manager virtual machine and shut it down. Log in to one of the self-hosted engine nodes and disable global maintenance mode: When you exit global maintenance mode, ovirt-ha-agent starts the Manager virtual machine, and then the Manager automatically starts. It can take up to ten minutes for the Manager to start. Confirm that the environment is running: The listed information includes Engine Status . The value for Engine status should be: Note When the virtual machine is still booting and the Manager hasn't started yet, the Engine status is: If this happens, wait a few minutes and try again. 
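Rather than re-running the status command by hand, you can poll until the engine reports good health. A minimal sketch, run on one of the self-hosted engine nodes; the 30-second interval is arbitrary:

# Wait until the engine status reported by hosted-engine contains "health": "good", then print the full status
until hosted-engine --vm-status | grep -q '"health": "good"'; do
    sleep 30
done
hosted-engine --vm-status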
When the environment is running again, you can start any virtual machines that were stopped, and check that the resources in the environment are behaving as expected.
[ "engine-backup --mode=backup", "engine-backup --mode=restore", "engine-backup --scope=all --mode=backup --file= file_name --log= log_file_name", "engine-backup --scope=files --scope=db --mode=backup --file= file_name --log= log_file_name", "engine-backup --mode=restore --file= file_name --log= log_file_name --provision-db --restore-permissions", "engine-backup --mode=restore --file= file_name --log= log_file_name --provision-db --provision-dwh-db --restore-permissions", "engine-backup --mode=restore --scope=files --scope=db --file= file_name --log= log_file_name --provision-db --restore-permissions", "engine-backup --mode=restore --scope=files --scope=dwhdb --file= file_name --log= log_file_name --provision-dwh-db --restore-permissions", "You should now run engine-setup. Done.", "engine-setup", "engine-cleanup", "engine-backup --mode=restore --file= file_name --log= log_file_name --restore-permissions", "engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file= file_name --log= log_file_name --restore-permissions", "You should now run engine-setup. Done.", "engine-setup", "engine-cleanup", "su - postgres -c 'scl enable rh-postgresql10 -- psql'", "postgres=# alter role user_name encrypted password ' new_password ';", "engine-backup --mode=restore --file= file_name --log= log_file_name --change-db-credentials --db-host= database_location --db-name= database_name --db-user=engine --db-password --no-restore-permissions", "engine-backup --mode=restore --file= file_name --log= log_file_name --change-db-credentials --db-host= database_location --db-name= database_name --db-user=engine --db-password --change-dwh-db-credentials --dwh-db-host= database_location --dwh-db-name= database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions", "engine-backup --mode=restore --scope=files --scope=db --file= file_name --log= log_file_name --change-db-credentials --db-host= database_location --db-name= database_name --db-user=engine --db-password --no-restore-permissions", "engine-backup --mode=restore --scope=files --scope=dwhdb --file= file_name --log= log_file_name --change-dwh-db-credentials --dwh-db-host= database_location --dwh-db-name= database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions", "You should now run engine-setup. 
Done.", "engine-setup", "hosted-engine --set-maintenance --mode=global", "systemctl stop ovirt-engine systemctl disable ovirt-engine", "engine-backup --mode=backup --file= file_name --log= log_file_name", "scp -p file_name log_file_name storage.example.com:/backup/", "subscription-manager unregister", "hosted-engine --vm-shutdown", "scp -p file_name host.example.com:/backup/", "yum install ovirt-hosted-engine-setup", "yum install screen screen", "hosted-engine --deploy --restore-from-file=backup/ file_name", "option rpc-auth-allow-insecure on", "gluster volume set _volume_ cluster.quorum-type auto gluster volume set _volume_ network.ping-timeout 10 gluster volume set _volume_ auth.allow \\* gluster volume set _volume_ group virt gluster volume set _volume_ storage.owner-uid 36 gluster volume set _volume_ storage.owner-gid 36 gluster volume set _volume_ server.allow-insecure on", "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager list --consumed", "yum repolist", "subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=rhel-7-server-rhv-4-manager-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms", "hosted-engine --vm-status", "scp -p file_name host.example.com:/backup/", "yum install ovirt-hosted-engine-setup", "yum install screen screen", "hosted-engine --deploy --restore-from-file=backup/ file_name", "option rpc-auth-allow-insecure on", "gluster volume set _volume_ cluster.quorum-type auto gluster volume set _volume_ network.ping-timeout 10 gluster volume set _volume_ auth.allow \\* gluster volume set _volume_ group virt gluster volume set _volume_ storage.owner-uid 36 gluster volume set _volume_ storage.owner-gid 36 gluster volume set _volume_ server.allow-insecure on", "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager list --consumed", "yum repolist", "subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=rhel-7-server-rhv-4-manager-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms", "hosted-engine --vm-status", "hosted-engine --set-maintenance --mode=global", "hosted-engine --vm-status", "engine-cleanup", "engine-backup --mode=restore --file= file_name --log= log_file_name --restore-permissions", "engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file= file_name --log= log_file_name --restore-permissions", "You should now run engine-setup. Done.", "engine-setup", "hosted-engine --set-maintenance --mode=none", "hosted-engine --vm-status", "{\"health\": \"good\", \"vm\": \"up\", \"detail\": \"Up\"}", "{\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Powering up\"}" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-backups_and_migration
16.5. Disabling and Re-enabling Service Entries
16.5. Disabling and Re-enabling Service Entries Active services can be accessed by other services, hosts, and users within the domain. There can be situations when it is necessary to remove a host or a service from activity. However, deleting a service or a host permanently removes the entry and all of its associated configuration. 16.5.1. Disabling Service Entries Disabling a service prevents domain users from accessing it without permanently removing it from the domain. You can do this with the ipa service-disable command. For a service, specify the principal for the service. For example: Important Disabling a host entry not only disables that host. It disables every configured service on that host as well. 16.5.2. Re-enabling Services Disabling a service essentially kills its current, active keytabs. Removing the keytabs effectively removes the service from the IdM domain without otherwise touching its configuration entry. To re-enable a service, simply use the ipa-getkeytab command. The -s option specifies the IdM server from which to request the keytab, -p gives the principal name, and -k gives the file to which to save the keytab. For example, requesting a new HTTP keytab:
[ "[jsmith@ipaserver ~]USD kinit admin [jsmith@ipaserver ~]USD ipa service-disable HTTP/server.example.com", "ipa-getkeytab -s ipaserver.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/service-disable
E.4. IOMMU Strategies and Use Cases
E.4. IOMMU Strategies and Use Cases There are many ways to handle IOMMU groups that contain more devices than intended. For a plug-in card, the first option would be to determine whether installing the card into a different slot produces the intended grouping. On a typical Intel chipset, PCIe root ports are provided via both the processor and the Platform Controller Hub (PCH). The capabilities of these root ports can be very different. Red Hat Enterprise Linux 7 has support for exposing the isolation of numerous PCH root ports, even though many of them do not have native PCIe ACS support. Therefore, these root ports are good targets for creating smaller IOMMU groups. With Intel(R) Xeon(R) class processors (E5 series and above) and "High End Desktop Processors", the processor-based PCIe root ports typically provide native support for PCIe ACS; however, the lower-end client processors, such as the Core i3, i5, and i7 and the Xeon E3 processors, do not. For these systems, the PCH root ports generally provide the most flexible isolation configurations. Another option is to work with the hardware vendors to determine whether isolation is present and quirk the kernel to recognize this isolation. This is generally a matter of determining whether internal peer-to-peer between functions is possible, or, in the case of downstream ports, also determining whether redirection is possible. The Red Hat Enterprise Linux 7 kernel includes numerous quirks for such devices, and Red Hat Customer Support can help you work with hardware vendors to determine if ACS-equivalent isolation is available and how best to incorporate similar quirks into the kernel to expose this isolation. For hardware vendors, note that multifunction endpoints that do not support peer-to-peer can expose this isolation by using a single static ACS table in configuration space that exposes no capabilities. Adding such a capability to the hardware will allow the kernel to automatically detect the functions as isolated and eliminate this issue for all users of your hardware. In cases where the above suggestions are not available, a common reaction is that the kernel should provide an option to disable these isolation checks for certain devices or certain types of devices, specified by the user. Often the argument is made that previous technologies did not enforce isolation to this extent and everything "worked fine". Unfortunately, bypassing these isolation features leads to an unsupportable environment. Not knowing that isolation exists means not knowing whether the devices are actually isolated, and it is best to find out before disaster strikes. Gaps in the isolation capabilities of devices may be extremely hard to trigger and even more difficult to trace back to device isolation as the cause. VFIO's job is first and foremost to protect the host kernel from user-owned devices, and IOMMU groups are the mechanism used by VFIO to ensure that isolation. In summary, by being built on top of IOMMU groups, VFIO is able to provide a greater degree of security and isolation between devices than was possible using legacy KVM device assignment. This isolation is now enforced at the Linux kernel level, allowing the kernel to protect itself and prevent dangerous configurations for the user. Additionally, hardware vendors should be encouraged to support PCIe ACS, not only in multifunction endpoint devices, but also in chipsets and interconnect devices.
For existing devices lacking this support, Red Hat may be able to work with hardware vendors to determine whether isolation is available and add Linux kernel support to expose this isolation.
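To see how devices are currently grouped, for example before and after moving a card to a different slot, you can inspect sysfs directly. The following shell snippet is a minimal illustrative sketch, not taken from this guide; it lists every IOMMU group and the PCI devices it contains, and assumes the lspci utility is installed.
# List each IOMMU group and the PCI devices assigned to it.
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for device in "${group}"/devices/*; do
        # -nns prints the slot address, class, and vendor:device IDs for each device
        lspci -nns "${device##*/}"
    done
done
If a device you intend to assign shares a group with devices that must stay on the host, that is exactly the situation the strategies above are meant to resolve.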
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/iommu-strategies
Chapter 6. Installing, updating, and uninstalling the password synchronization service
Chapter 6. Installing, updating, and uninstalling the password synchronization service To synchronize passwords between Active Directory and Red Hat Directory Server, you must use the password synchronization service. This chapter contains information about how the password synchronization service functions, as well as how to install, update, and remove it. 6.1. Understanding how the password synchronization service works When you set up password synchronization with Active Directory, Directory Server retrieves all attributes of user objects except the password. Active Directory stores only encrypted passwords, but Directory Server uses different encryption. As a result, Active Directory users' passwords must be encrypted by Directory Server. To enable password synchronization between Active Directory and Directory Server, the Red Hat Directory Password Sync service hooks into the Windows password-changing routine of a DC. If a user or administrator sets or updates a password, the service retrieves the password in plain text before it is encrypted and stored in Active Directory. This process enables Red Hat Directory Password Sync to send the plain text password to Directory Server. To protect the password, the service supports only LDAPS connections to Directory Server. When Directory Server stores the password in the user's entry, the password is automatically encrypted with the password storage scheme configured in Directory Server. Important In an Active Directory domain, all writable DCs can process password actions. Therefore, you must install Red Hat Directory Password Sync on every writable DC in the Active Directory domain. 6.2. Downloading the password synchronization service installer Before you can install the Red Hat Directory Password Sync service, download the installer from the Customer Portal. Prerequisites A valid Red Hat Directory Server subscription An account on the Red Hat Customer Portal Procedure Log in to the Red Hat Customer Portal . Click Downloads at the top of the page. Select Red Hat Directory Server from the product list. Select 11 in the Version field. Download the PassSync Installer . Copy the installer to every writable Active Directory domain controller (DC). 6.3. Installing the password synchronization service This section describes how to install Red Hat Directory Password Sync on Windows domain controllers (DCs). For further detail, see Section 6.1, "Understanding how the password synchronization service works" . Prerequisites The latest version of the PassSync Installer downloaded to the Windows Active Directory domain controller (DC). For details, see Section 6.2, "Downloading the password synchronization service installer" . A prepared Directory Server host as described in Setting up Synchronization Between Active Directory and Directory Server in the Red Hat Directory Server Administration Guide. Procedure Log in to the Active Directory domain controller with a user that has permissions to install software on the DC. Double-click the RedHat-PassSync-ds11.*-x86_64.msi file to install it. The Red Hat Directory Password Sync Setup appears. Click . Fill in the fields according to your Directory Server environment. For example: Enter the following information about the Directory Server host in the fields: Host Name : Sets the name of the Directory Server host. Alternatively, you can set the field to the IPv4 or IPv6 address of the Directory Server host. Port Number : Sets the LDAPS port number.
User Name : Sets the distinguished name (DN) of the synchronization user account. Password : Sets the password of the synchronization user. Cert Token : Sets the password of the server certificate copied from the Directory Server host. Search Base : Sets the DN of the Directory Server entry that contains the synchronized user accounts. Click to start the installation. Click Finish . Reboot the Windows DC. Note Without rebooting the DC, the PasswordHook.dll library is not enabled and password synchronization will fail. Set up synchronization between Active Directory and Directory Server as described in the Setting up Synchronization Between Active Directory and Directory Server section in the Red Hat Directory Server Administration Guide. Until the synchronization is fully configured, password synchronization will fail. Repeat this procedure on every writable Windows DC. 6.4. Updating the password synchronization service This section describes how to update an existing Red Hat Directory Password Sync installation on a Windows domain controller (DC). Prerequisites Red Hat Directory Password Sync is running on your Windows DCs. The latest version of the PassSync Installer downloaded to the Windows Active Directory domain controller (DC). For details, see Section 6.2, "Downloading the password synchronization service installer" . Procedure Log in to the Active Directory domain controller with a user that has permissions to install software on the DC. Double-click the RedHat-PassSync-ds11.*-x86_64.msi file. Click to begin installing. Click the Modify button. The setup displays the configuration set during the installation. Click to keep the existing settings. Click to start the installation. Click Finish . Reboot the Windows DC. Note Without rebooting the DC, the PasswordHook.dll library is not enabled and password synchronization will fail. Repeat this procedure on every writable Windows DC. 6.5. Uninstalling the password synchronization service This section contains information about uninstalling the Red Hat Directory Password Sync service from a Windows domain controller (DC). Prerequisites Red Hat Directory Password Sync is running on the Windows DC. Procedure Log in to the Active Directory domain controller with a user that has permissions to remove software from the DC. Open the Control Panel . Click Programs , and then Programs and Features . Select the Red Hat Directory Password Sync entry, and click the Uninstall button. Click Yes to confirm.
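Before running the installer, it can help to confirm that the values you plan to enter (host name, LDAPS port, synchronization user DN, and search base) actually work against Directory Server. The following is a hedged sketch using the OpenLDAP client tools from a Linux host; the DN, search base, hostname, and certificate path are placeholders for illustration, not values mandated by the PassSync installer.
# Verify the LDAPS connection and the search base with the planned synchronization user.
LDAPTLS_CACERT=/path/to/ds-ca.crt ldapsearch -x \
  -H ldaps://ds.example.com:636 \
  -D "cn=sync-user,cn=config" \
  -W \
  -b "ou=People,dc=example,dc=com" \
  -s sub "(objectClass=person)" dn
If this search fails, correct the certificate, bind DN, or search base before installing the service on the DCs.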
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/installation_guide/assembly_installing-updating-and-uninstalling-the-password-synchronization-service_installation-guide
Chapter 4. Deprecated features
Chapter 4. Deprecated features This section describes features that are supported, but have been deprecated from Red Hat Service Interconnect. Protocols The http and http2 protocols are deprecated and will be removed in a future release when a feature that provides similar observability becomes available. Red Hat recommends using the tcp protocol unless http or http2 observability is required.
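For example, when exposing a service on the service network you can state the protocol explicitly. The following command is an illustrative sketch only; it assumes the skupper CLI and a deployment named backend, so adjust the resource name and port for your site.
# Prefer the tcp protocol unless http or http2 observability is required.
skupper expose deployment/backend --port 8080 --protocol tcp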
null
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/release_notes/deprecated_features
Chapter 13. Notifications overview
Chapter 13. Notifications overview Quay.io supports adding notifications to a repository for various events that occur in the repository's lifecycle. 13.1. Notification actions Notifications are added to the Events and Notifications section of the Repository Settings page. They are also added to the Notifications window, which can be found by clicking the bell icon in the navigation pane of Quay.io. Quay.io notifications can be set up to be sent to a User , Team , or the entire organization . Notifications can be delivered by one of the following methods. E-mail notifications E-mails are sent to specified addresses that describe the specified event. E-mail addresses must be verified on a per-repository basis. Webhook POST notifications An HTTP POST call is made to the specified URL with the event's data. For more information about event data, see "Repository events description". When the URL is HTTPS, the call has an SSL client certificate set from Quay.io. Verification of this certificate proves that the call originated from Quay.io. Responses with a status code in the 2xx range are considered successful. Responses with any other status code are considered failures and result in a retry of the webhook notification. Flowdock notifications Posts a message to Flowdock. Hipchat notifications Posts a message to HipChat. Slack notifications Posts a message to Slack. 13.2. Creating notifications by using the UI Use the following procedure to add notifications. Prerequisites You have created a repository. You have administrative privileges for the repository. Procedure Navigate to a repository on Quay.io. In the navigation pane, click Settings . In the Events and Notifications category, click Create Notification to add a new notification for a repository event. The Create notification popup box appears. On the Create notification popup box, click the When this event occurs box to select an event. You can select a notification for the following types of events: Push to Repository Image build failed Image build queued Image build started Image build success Image build cancelled Image expiry trigger After you have selected the event type, select the notification method. The following methods are supported: Quay Notification E-mail Notification Webhook POST Flowdock Team Notification HipChat Room Notification Slack Notification Depending on the method that you choose, you must include additional information. For example, if you select E-mail , you are required to include an e-mail address and an optional notification title. After selecting an event and notification method, click Create Notification . 13.2.1. Creating an image expiration notification Image expiration event triggers can be configured to notify users through email, Slack, webhooks, and so on, and can be configured at the repository level. Triggers can be set for images expiring in any number of days, and can work in conjunction with the auto-pruning feature. Image expiration notifications can be set by using the Red Hat Quay v2 UI or by using the createRepoNotification API endpoint. Prerequisites FEATURE_GARBAGE_COLLECTION: true is set in your config.yaml file. Optional. FEATURE_AUTO_PRUNE: true is set in your config.yaml file. Procedure On the Red Hat Quay v2 UI, click Repositories . Select the name of a repository. Click Settings Events and notifications . Click Create notification . The Create notification popup box appears. Click the Select event... box, then click Image expiry trigger .
In the When the image is due to expiry in days box, enter the number of days before the image's expiration when you want to receive an alert. For example, use 1 for 1 day. In the Select method... box, click one of the following: E-mail Webhook POST Flowdock Team Notification HipChat Room Notification Slack Notification Depending on which method you chose, include the necessary data. For example, if you chose Webhook POST , include the Webhook URL . Optional. Provide a POST JSON body template . Optional. Provide a Title for your notification. Click Submit . You are returned to the Events and notifications page, and the notification now appears. Optional. You can set the NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES variable in your config.yaml file. with this field set, if there are any expiring images notifications will be sent automatically. By default, this is set to 300 , or 5 hours, however it can be adjusted as warranted. NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 300 1 1 By default, this field is set to 300 , or 5 hours. Verification Click the menu kebab Test Notification . The following message is returned: Test Notification Queued A test version of this notification has been queued and should appear shortly Depending on which method you chose, check your e-mail, webhook address, Slack channel, and so on. The information sent should look similar to the following example: { "repository": "sample_org/busybox", "namespace": "sample_org", "name": "busybox", "docker_url": "quay-server.example.com/sample_org/busybox", "homepage": "http://quay-server.example.com/repository/sample_org/busybox", "tags": [ "latest", "v1" ], "expiring_in": "1 days" } 13.3. Creating notifications by using the API Use the following procedure to add notifications. Prerequisites You have created a repository. You have administrative privileges for the repository. You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following POST /api/v1/repository/{repository}/notification command to create a notification on your repository: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "event": "<event>", "method": "<method>", "config": { "<config_key>": "<config_value>" }, "eventConfig": { "<eventConfig_key>": "<eventConfig_value>" } }' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/ This command does not return output in the CLI. 
Instead, you can enter the following GET /api/v1/repository/{repository}/notification/{uuid} command to obtain information about the repository notification: {"uuid": "240662ea-597b-499d-98bb-2b57e73408d6", "title": null, "event": "repo_push", "method": "quay_notification", "config": {"target": {"name": "quayadmin", "kind": "user", "is_robot": false, "avatar": {"name": "quayadmin", "hash": "b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc", "color": "#17becf", "kind": "user"}}}, "event_config": {}, "number_of_failures": 0} You can test your repository notification by entering the following POST /api/v1/repository/{repository}/notification/{uuid}/test command: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test Example output {} You can reset repository notification failures to 0 by entering the following POST /api/v1/repository/{repository}/notification/{uuid} command: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid> Enter the following DELETE /api/v1/repository/{repository}/notification/{uuid} command to delete a repository notification: USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid> This command does not return output in the CLI. Instead, you can enter the following GET /api/v1/repository/{repository}/notification/ command to retrieve a list of all notifications: USD curl -X GET -H "Authorization: Bearer <bearer_token>" -H "Accept: application/json" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification Example output {"notifications": []} 13.4. Repository events description The following sections detail repository events. Repository Push A successful push of one or more images was made to the repository: Dockerfile Build Queued The following example is a response from a Dockerfile Build that has been queued into the Build system. Note Responses can differ based on the use of optional attributes. Dockerfile Build started The following example is a response from a Dockerfile Build that has been queued into the Build system. Note Responses can differ based on the use of optional attributes. Dockerfile Build successfully completed The following example is a response from a Dockerfile Build that has been successfully completed by the Build system. Note This event occurs simultaneously with a Repository Push event for the built image or images. Dockerfile Build failed The following example is a response from a Dockerfile Build that has failed. Dockerfile Build cancelled The following example is a response from a Dockerfile Build that has been cancelled.
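The list and test endpoints shown above can be combined into a small script. The following sketch assumes the jq utility is installed and that at least one notification already exists on the repository; the angle-bracket placeholders match those used in the examples above.
# Look up the UUID of the first notification on the repository, then queue a test delivery for it.
UUID=$(curl -s -X GET \
  -H "Authorization: Bearer <bearer_token>" \
  -H "Accept: application/json" \
  https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification \
  | jq -r '.notifications[0].uuid')

curl -X POST \
  -H "Authorization: Bearer <bearer_token>" \
  "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/${UUID}/test"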
[ "NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 300 1", "Test Notification Queued A test version of this notification has been queued and should appear shortly", "{ \"repository\": \"sample_org/busybox\", \"namespace\": \"sample_org\", \"name\": \"busybox\", \"docker_url\": \"quay-server.example.com/sample_org/busybox\", \"homepage\": \"http://quay-server.example.com/repository/sample_org/busybox\", \"tags\": [ \"latest\", \"v1\" ], \"expiring_in\": \"1 days\" }", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"event\": \"<event>\", \"method\": \"<method>\", \"config\": { \"<config_key>\": \"<config_value>\" }, \"eventConfig\": { \"<eventConfig_key>\": \"<eventConfig_value>\" } }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/", "{\"uuid\": \"240662ea-597b-499d-98bb-2b57e73408d6\", \"title\": null, \"event\": \"repo_push\", \"method\": \"quay_notification\", \"config\": {\"target\": {\"name\": \"quayadmin\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}}}, \"event_config\": {}, \"number_of_failures\": 0}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test", "{}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification", "{\"notifications\": []}", "{ \"name\": \"repository\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"updated_tags\": [ \"latest\" ] }", "{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"repo\": \"test\", \"trigger_metadata\": { \"default_branch\": \"master\", \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional }, \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" } } }, \"is_manual\": false, \"manual_user\": null, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\" }", "{ \"build_id\": \"a8cc247a-a662-4fee-8dcb-7d7e822b71ba\", \"trigger_kind\": 
\"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"50bc599\", \"trigger_metadata\": { //Optional \"commit\": \"50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"date\": \"2019-03-06T14:10:14+11:00\", \"message\": \"test build\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/a8cc247a-a662-4fee-8dcb-7d7e822b71ba\" }", "{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"b7f7d2b\", \"image_id\": \"sha256:0339f178f26ae24930e9ad32751d6839015109eabdf1c25b3b0f2abf8934f6cb\", \"trigger_metadata\": { \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\", \"manifest_digests\": [ \"quay.io/dgangaia/test@sha256:2a7af5265344cc3704d5d47c4604b1efcbd227a7a6a6ff73d6e4e08a27fd7d99\", \"quay.io/dgangaia/test@sha256:569e7db1a867069835e8e97d50c96eccafde65f08ea3e0d5debaf16e2545d9d1\" ] }", "{ \"build_id\": \"5346a21d-3434-4764-85be-5be1296f293c\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"docker_url\": \"quay.io/dgangaia/test\", \"error_message\": \"Could not find or parse Dockerfile: unknown instruction: GIT\", \"namespace\": \"dgangaia\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"6ae9a86\", \"trigger_metadata\": { //Optional \"commit\": \"6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": 
\"https://github.com/dgangaia/test/commit/6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"date\": \"2019-03-06T14:18:16+11:00\", \"message\": \"failed build test\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/5346a21d-3434-4764-85be-5be1296f293c\" }", "{ \"build_id\": \"cbd534c5-f1c0-4816-b4e3-55446b851e70\", \"trigger_kind\": \"github\", \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"cbce83c\", \"trigger_metadata\": { \"commit\": \"cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { \"url\": \"https://github.com/dgangaia/test/commit/cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"date\": \"2019-03-06T14:27:53+11:00\", \"message\": \"testing cancel build\", \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" }, \"author\": { \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/cbd534c5-f1c0-4816-b4e3-55446b851e70\" }" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/about_quay_io/repository-notifications
20.6. Memory Backing
20.6. Memory Backing Memory backing allows the hypervisor to properly manage large pages within the guest virtual machine. The optional <memoryBacking> element may have a <hugepages> element set within it. This tells the hypervisor that the guest virtual machine should have its memory allocated using hugepages instead of the normal native page size. <domain> ... <memoryBacking> <hugepages/> </memoryBacking> ... </domain> Figure 20.8. Memory backing
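The <hugepages/> setting only takes effect if the host has a hugepage pool available to the hypervisor. The following host-side commands are a minimal sketch, not part of the domain XML; they show one way to reserve 2 MiB hugepages and confirm the pool, and the pool size of 1024 pages is only an example that should be sized to at least the guest memory.
# Reserve 1024 x 2 MiB hugepages (2 GiB) on the host and verify the pool.
echo 1024 > /proc/sys/vm/nr_hugepages
grep Huge /proc/meminfo
# Mount hugetlbfs if it is not already mounted, for example at /dev/hugepages.
mount -t hugetlbfs hugetlbfs /dev/hugepages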
[ "<domain> <memoryBacking> <hugepages/> </memoryBacking> </domain>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-mem-back
16.6. Additional Resources
16.6. Additional Resources See the /usr/share/doc/squid- <version> /squid.conf.documented file for a list of all configuration parameters you can set in the /etc/squid/squid.conf file together with a detailed description.
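The documented sample file groups each parameter under a TAG: marker, so a single directive can be looked up quickly. The following command is a small illustrative example; the version glob and the cache_mem parameter are assumptions, not requirements.
# Show the documentation for the cache_mem parameter from the installed squid package.
grep -A 20 'TAG: cache_mem' /usr/share/doc/squid-*/squid.conf.documented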
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/additional-resources-squid
Chapter 8. Managing Users in automation controller
Chapter 8. Managing Users in automation controller Users associated with an organization are shown in the Access tab of the organization. Other users can be added to an organization, including a Normal User , System Auditor , or System Administrator , but they must be created first. You can sort or search the User list by Username , First Name , or Last Name . Click the headers to toggle your sorting preference. You can view user permissions and user type beside the user name on the Users page. 8.1. Creating a user Use the following procedure to create new users in automation controller and assign them a role. Procedure On the Users page, click Add . The Create User dialog opens. Enter the appropriate details about your new user. Fields marked with an asterisk (*) are required. Note If you are modifying your own password, log out and log back in again for it to take effect. You can assign three types of users: Normal User : Normal Users have read and write access limited to the resources (such as inventory, projects, and job templates) for which that user has been granted the appropriate roles and privileges. System Auditor : Auditors inherit the read-only capability for all objects within the environment. System Administrator : A System Administrator (also known as a Superuser) has full system administration privileges - with full read and write privileges over the entire installation. A System Administrator is typically responsible for managing all aspects of the installation and delegating responsibilities for day-to-day work to various users. Note A default administrator with the role of System Administrator is automatically created during the installation process and is available to all users of automation controller. One System Administrator must always exist. To delete the System Administrator account, you must first create another System Administrator account. Click Save . When the user is successfully created, the User dialog opens. Click Delete to delete the user, or you can delete users from the list of current users. For more information, see Deleting a user . The same window opens whether you click the user's name or the Edit icon beside the user. You can use this window to review and modify the user's Organizations , Teams , Roles , and other user membership details. Note If the user is not newly-created, the details screen displays the last login activity of that user. If you log in as yourself and view the details of your user profile, you can manage tokens from your user profile. For more information, see Adding a user token . 8.2. Deleting a user Before you can delete a user, you must have user permissions. When you delete a user account, the name and email of the user are permanently removed from automation controller. Procedure From the navigation panel, select Access Users . Click Users to display a list of the current users. Select the checkbox for the user that you want to remove. Click Delete . Click Delete in the confirmation warning message to permanently delete the user. 8.3. Displaying user organizations Select a specific user to display the Details page, then select the Organizations tab to display the list of organizations of which that user is a member. Note Organization membership cannot be modified from this display panel. 8.4. Displaying a user's teams From the Users > Details page, select the Teams tab to display the list of teams of which that user is a member. Note You cannot modify team membership from this display panel. For more information, see Teams .
Until you create a team and assign a user to that team, the assigned teams details for that user are displayed as empty. 8.5. Displaying a user's roles From the Users > Details page, select the Roles tab to display the set of roles assigned to this user. These offer the ability to read, change, and administer projects, inventories, job templates, and other elements. 8.5.1. Adding and removing user permissions To add permissions to a particular user: Procedure From the Users list view, click on the name of a user. On the Details page, click Add . This opens the Add user permissions wizard. Select the object for which you want to assign permissions and to which the user will have access. Click . Select the resource to assign team roles and click . Select the resource you want to assign permissions to. Different resources have different options available. Click Save . The Roles page displays the updated profile for the user with the permissions assigned for each selected resource. Note You can also add teams, or individual or multiple users, and assign them permissions at the object level. This includes templates, credentials, inventories, projects, organizations, or instance groups. This feature reduces the time for an organization to onboard many users at one time. To remove permissions: Click the icon next to the resource. This launches a confirmation dialog asking you to confirm the disassociation. 8.6. Creating tokens for a user The Tokens tab is only present for the user you created for yourself. Before you add a token for your user, you might want to Create an application if you want to associate your token with it. You can also create a Personal Access Token (PAT) without associating it with any application. Procedure Select your user from the Users list view to configure your OAuth 2 tokens. Select the Tokens tab from your user's profile. Click Add to open the Create Token window. Enter the following information: Application : Enter the name of the application with which you want to associate your token. Alternatively, you can search for the application name by clicking the icon. This opens a separate window that enables you to choose from the available options. Use the Search bar to filter by name if the list is extensive. Leave this field blank if you want to create a PAT that is not linked to any application. Optional: Description : Provide a short description for your token. Scope : Specify the level of access you want this token to have. Click Save , or click Cancel to abandon your changes. After the token is saved, the newly created token for the user is displayed. Important This is the only time the token value and associated refresh token value are ever shown.
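Users can also be created programmatically against the controller REST API rather than through the UI. The following curl call is a hedged sketch based on the AWX-style /api/v2/users/ endpoint; the hostname, credentials, and field values are placeholders, and the exact fields accepted may vary between controller versions.
# Create a normal (non-superuser, non-auditor) user by using the controller API.
curl -k -X POST https://<controller.example.com>/api/v2/users/ \
  -u admin:<admin_password> \
  -H "Content-Type: application/json" \
  -d '{
        "username": "jdoe",
        "password": "<new_user_password>",
        "first_name": "Jane",
        "last_name": "Doe",
        "is_superuser": false,
        "is_system_auditor": false
      }'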
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/assembly-controller-users
Chapter 1. High-level RHACS installation overview
Chapter 1. High-level RHACS installation overview Red Hat Advanced Cluster Security for Kubernetes (RHACS) provides security services for your self-managed Red Hat OpenShift Kubernetes systems or platforms such as OpenShift Container Platform, Amazon Elastic Kubernetes Service (Amazon EKS), Google Kubernetes Engine (Google GKE), and Microsoft Azure Kubernetes Service (Microsoft AKS). For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . 1.1. General installation guidelines To ensure the best installation experience, follow these guidelines: Understand the installation platforms and methods described in this module. Understand Red Hat Advanced Cluster Security for Kubernetes architecture . Check the default resource requirements . 1.2. Installation methods for different platforms You can perform different types of installations on different platforms. Note Not all installation methods are supported for all platforms. See the Red Hat Advanced Cluster Security for Kubernetes Support Matrix for more information. Table 1.1. Platforms and recommended installation methods Platform type Platform Recommended installation methods Installation steps Managed service platform Red Hat OpenShift Dedicated (OSD) Operator (recommended), Helm charts, or roxctl CLI [1] Installing Central services for RHACS on Red Hat OpenShift Installing Secured Cluster services for RHACS on Red Hat OpenShift Azure Red Hat OpenShift (ARO) Red Hat OpenShift Service on AWS (ROSA) Red Hat OpenShift on IBM Cloud Amazon Elastic Kubernetes Service (Amazon EKS) Helm charts (recommended), or roxctl CLI [1] Installing Central services for RHACS on other platforms Installing Secured Cluster services for RHACS on other platforms Google Kubernetes Engine (Google GKE) Microsoft Azure Kubernetes Service (Microsoft AKS) Self-managed platform Red Hat OpenShift Container Platform (OCP) Operator (recommended), Helm charts, or roxctl CLI [1] Installing Central services for RHACS on Red Hat OpenShift Installing Secured Cluster services for RHACS on Red Hat OpenShift Red Hat OpenShift Kubernetes Engine (OKE) Do not use the roxctl installation method unless you have specific requirements for following this installation method. 1.3. Installation methods for different architectures Red Hat Advanced Cluster Security for Kubernetes (RHACS) supports the following architectures. For information on supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . Additionally, the following table gives information about installation methods available for each architecture. Table 1.2. Architectures and supported installation methods for each architecture Supported architectures Supported installation methods AMD64 Operator (preferred), Helm charts, or roxctl CLI (not recommended) ppc64le (IBM Power) Operator s390x (IBM Z and IBM(R) LinuxONE) AArch64 ( ARM64 ) Operator (preferred), Helm charts, or roxctl CLI 1.4. Installation steps for RHACS on OpenShift Container Platform 1.4.1. Installing RHACS on Red Hat OpenShift by using the RHACS Operator On the Red Hat OpenShift cluster, install the RHACS Operator into the rhacs-operator project, or namespace. On the Red Hat OpenShift cluster that will contain Central, called the central cluster, use the RHACS Operator to install Central services into the stackrox project. 
One central cluster can secure multiple clusters. Log in to the RHACS web console from the central cluster, and then create an init bundle and download it. The init bundle is then installed on the cluster that you want to secure, called the secured cluster. For the secured cluster: Install the RHACS Operator into the rhacs-operator namespace. On the secured cluster, apply the init bundle that you created in RHACS by performing one of these steps: Use the OpenShift Container Platform web console to import the YAML file of the init bundle that you created. Make sure you are in the stackrox namespace. In the terminal window, run the oc create -f <init_bundle>.yaml -n <stackrox> command, specifying the path to the downloaded YAML file of the init bundle. On the secured cluster, use the RHACS Operator to install Secured Cluster services into the stackrox namespace. When creating these services, be sure to enter the address of Central in the Central Endpoint field so that the secured cluster can communicate with Central. 1.4.2. Installing RHACS on Red Hat OpenShift by using Helm charts Add the RHACS Helm charts repository. Install the central-services Helm chart on the Red Hat OpenShift cluster that will contain Central, called the central cluster. Log in to the RHACS web console on the Central cluster and create an init bundle. For each cluster that you want to secure, log in to the secured cluster and perform the following steps: Apply the init bundle you created with RHACS. To apply the init bundle on the secured cluster, perform one of these steps: Use the OpenShift Container Platform web console to import the YAML file of the init bundle that you created. Make sure you are in the stackrox namespace. In the terminal window, run the oc create -f <init_bundle>.yaml -n <stackrox> command, specifying the path to the downloaded YAML file of the init bundle. Install the secured-cluster-services Helm chart on the secured cluster, specifying the path to the init bundle that you created. 1.4.3. Installing RHACS on Red Hat OpenShift by using the roxctl CLI This installation method is also called the manifest installation method . Install the roxctl CLI. On the Red Hat OpenShift cluster that will contain Central, perform these steps: In the terminal window, run the interactive install command by using the roxctl CLI. Run the setup shell script. In the terminal window, create the Central resources by using the oc create command. Perform one of the following actions: In the RHACS web console, create and download the sensor YAML file and keys. On the secured cluster, use the roxctl sensor generate openshift command. On the secured cluster, run the sensor installation script. 1.5. Installation steps for RHACS on Kubernetes 1.5.1. Installing RHACS on Kubernetes platforms by using Helm charts Add the RHACS Helm charts repository. Install the central-services Helm chart on the cluster that will contain Central, called the Central cluster. Log in to the RHACS web console from the Central cluster and create an init bundle that you will install on the cluster that you want to secure, called the secured cluster. For each secured cluster: Apply the init bundle you created with RHACS. Log in to the secured cluster and run the kubectl create -f <init_bundle>.yaml -n <stackrox> command, specifying the path to the downloaded YAML file of the init bundle. Install the secured-cluster-services Helm chart on the secured cluster, specifying the path to the init bundle that you created earlier. 1.5.2. 
Installing RHACS on Kubernetes platforms by using the roxctl CLI This installation method is also called the manifest installation method . Install the roxctl CLI. On the Kubernetes cluster that will contain Central, perform these steps: In the terminal window, run the interactive install command by using the roxctl CLI. Run the setup shell script. In the terminal window, create the Central resources by using the kubectl create command. Perform one of the following actions: In the RHACS web console, create and download the sensor YAML file and keys. On the cluster that you want to secure, called the secured cluster, use the roxctl sensor generate openshift command. On the secured cluster, run the sensor installation script.
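As a concrete illustration of the Helm chart steps above, the commands below add the RHACS chart repository, install Central on the central cluster, and install the Secured Cluster services with an init bundle. Treat this as a sketch: the repository URL and chart names follow the standard RHACS Helm workflow, but the cluster name, Central endpoint, and any additional values are placeholders that must be adjusted for your environment.
# Add the RHACS Helm chart repository.
helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
helm repo update

# On the central cluster: install Central services into the stackrox namespace.
helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services

# On each secured cluster: install Secured Cluster services, passing the downloaded init bundle.
helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \
  -f <init_bundle>.yaml \
  --set clusterName=<cluster_name> \
  --set centralEndpoint=<central_endpoint>:443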
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/installing/high-level-rhacs-installation-overview
Chapter 281. REST Swagger Component
Chapter 281. REST Swagger Component Available as of Camel version 2.19 The rest-swagger configures rest producers from Swagger (Open API) specification document and delegates to a component implementing the RestProducerFactory interface. Currently known working components are: http http4 netty4-http restlet jetty undertow Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rest-swagger</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 281.1. URI format rest-swagger:[specificationPath#]operationId Where operationId is the ID of the operation in the Swagger specification, and specificationPath is the path to the specification. If the specificationPath is not specified it defaults to swagger.json . The lookup mechanism uses Camels ResourceHelper to load the resource, which means that you can use CLASSPATH resources ( classpath:my-specification.json ), files ( file:/some/path.json ), the web ( http://api.example.com/swagger.json ) or reference a bean ( ref:nameOfBean ) or use a method of a bean ( bean:nameOfBean.methodName ) to get the specification resource, failing that Swagger's own resource loading support. This component does not act as a HTTP client, it delegates that to another component mentioned above. The lookup mechanism searches for a single component that implements the RestProducerFactory interface and uses that. If the CLASSPATH contains more than one, then the property componentName should be set to indicate which component to delegate to. Most of the configuration is taken from the Swagger specification but the option exists to override those by specifying them on the component or on the endpoint. Typically you would just need to override the host or basePath if those differ from the specification. Note The host parameter should contain the absolute URI containing scheme, hostname and port number, for instance: https://api.example.com With componentName you specify what component is used to perform the requests, this named component needs to be present in the Camel context and implement the required RestProducerFactory interface - as do the components listed at the top. If you do not specify the componentName at either component or endpoint level, CLASSPATH is searched for a suitable delegate. There should be only one component present on the CLASSPATH that implements the RestProducerFactory interface for this to work. This component's endpoint URI is lenient which means that in addition to message headers you can specify REST operation's parameters as endpoint parameters, these will be constant for all subsequent invocations so it makes sense to use this feature only for parameters that are indeed constant for all invocations - for example API version in path such as /api/7.13/users/{id} . 281.2. Options The REST Swagger component supports 9 options, which are listed below. Name Description Default Type basePath (producer) API basePath, for example /v2. Default is unset, if set overrides the value present in Swagger specification. String componentName (producer) Name of the Camel component that will perform the requests. The compnent must be present in Camel registry and it must implement RestProducerFactory service provider interface. If not set CLASSPATH is searched for single component that implements RestProducerFactory SPI. Can be overriden in endpoint configuration. 
String consumes (producer) What payload type this component capable of consuming. Could be one type, like application/json or multiple types as application/json, application/xml; q=0.5 according to the RFC7231. This equates to the value of Accept HTTP header. If set overrides any value found in the Swagger specification. Can be overriden in endpoint configuration String host (producer) Scheme hostname and port to direct the HTTP requests to in the form of https://hostname:port . Can be configured at the endpoint, component or in the correspoding REST configuration in the Camel Context. If you give this component a name (e.g. petstore) that REST configuration is consulted first, rest-swagger , and global configuration last. If set overrides any value found in the Swagger specification, RestConfiguration. Can be overriden in endpoint configuration. String produces (producer) What payload type this component is producing. For example application/json according to the RFC7231. This equates to the value of Content-Type HTTP header. If set overrides any value present in the Swagger specification. Can be overriden in endpoint configuration. String specificationUri (producer) Path to the Swagger specification file. The scheme, host base path are taken from this specification, but these can be overriden with properties on the component or endpoint level. If not given the component tries to load swagger.json resource. Note that the host defined on the component and endpoint of this Component should contain the scheme, hostname and optionally the port in the URI syntax (i.e. https://api.example.com:8080 ). Can be overriden in endpoint configuration. swagger.json URI sslContextParameters (security) Customize TLS parameters used by the component. If not set defaults to the TLS parameters set in the Camel context SSLContextParameters useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters. false boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The REST Swagger endpoint is configured using URI syntax: with the following path and query parameters: 281.2.1. Path Parameters (2 parameters): Name Description Default Type specificationUri Path to the Swagger specification file. The scheme, host base path are taken from this specification, but these can be overriden with properties on the component or endpoint level. If not given the component tries to load swagger.json resource. Note that the host defined on the component and endpoint of this Component should contain the scheme, hostname and optionally the port in the URI syntax (i.e. https://api.example.com:8080 ). Overrides component configuration. swagger.json URI operationId Required ID of the operation from the Swagger specification. String 281.2.2. Query Parameters (6 parameters): Name Description Default Type basePath (producer) API basePath, for example /v2. Default is unset, if set overrides the value present in Swagger specification and in the component configuration. String componentName (producer) Name of the Camel component that will perform the requests. The compnent must be present in Camel registry and it must implement RestProducerFactory service provider interface. If not set CLASSPATH is searched for single component that implements RestProducerFactory SPI. Overrides component configuration. 
String consumes (producer) What payload type this component capable of consuming. Could be one type, like application/json or multiple types as application/json, application/xml; q=0.5 according to the RFC7231. This equates to the value of Accept HTTP header. If set overrides any value found in the Swagger specification and. in the component configuration String host (producer) Scheme hostname and port to direct the HTTP requests to in the form of https://hostname:port . Can be configured at the endpoint, component or in the correspoding REST configuration in the Camel Context. If you give this component a name (e.g. petstore) that REST configuration is consulted first, rest-swagger , and global configuration last. If set overrides any value found in the Swagger specification, RestConfiguration. Overrides all other configuration. String produces (producer) What payload type this component is producing. For example application/json according to the RFC7231. This equates to the value of Content-Type HTTP header. If set overrides any value present in the Swagger specification. Overrides all other configuration. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 281.3. Spring Boot Auto-Configuration The component supports 10 options, which are listed below. Name Description Default Type camel.component.rest-swagger.base-path API basePath, for example /v2. Default is unset, if set overrides the value present in Swagger specification. String camel.component.rest-swagger.component-name Name of the Camel component that will perform the requests. The compnent must be present in Camel registry and it must implement RestProducerFactory service provider interface. If not set CLASSPATH is searched for single component that implements RestProducerFactory SPI. Can be overriden in endpoint configuration. String camel.component.rest-swagger.consumes What payload type this component capable of consuming. Could be one type, like application/json or multiple types as application/json, application/xml; q=0.5 according to the RFC7231. This equates to the value of Accept HTTP header. If set overrides any value found in the Swagger specification. Can be overriden in endpoint configuration String camel.component.rest-swagger.enabled Enable rest-swagger component true Boolean camel.component.rest-swagger.host Scheme hostname and port to direct the HTTP requests to in the form of https://hostname:port . Can be configured at the endpoint, component or in the correspoding REST configuration in the Camel Context. If you give this component a name (e.g. petstore) that REST configuration is consulted first, rest-swagger , and global configuration last. If set overrides any value found in the Swagger specification, RestConfiguration. Can be overriden in endpoint configuration. String camel.component.rest-swagger.produces What payload type this component is producing. For example application/json according to the RFC7231. This equates to the value of Content-Type HTTP header. If set overrides any value present in the Swagger specification. Can be overriden in endpoint configuration. String camel.component.rest-swagger.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.rest-swagger.specification-uri Path to the Swagger specification file. 
The scheme, host and base path are taken from this specification, but these can be overridden with properties on the component or endpoint level. If not given, the component tries to load the swagger.json resource. Note that the host defined on the component and endpoint of this Component should contain the scheme, hostname and optionally the port in the URI syntax (i.e. https://api.example.com:8080 ). Can be overridden in endpoint configuration. URI camel.component.rest-swagger.ssl-context-parameters Customize TLS parameters used by the component. If not set defaults to the TLS parameters set in the Camel context. The option is a org.apache.camel.util.jsse.SSLContextParameters type. String camel.component.rest-swagger.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean 281.4. Example: PetStore Check out the example in the camel-example-rest-swagger project in the examples directory. For example, if you want to use the REST API provided by PetStore, reference the specification URI and the desired operation ID from the Swagger specification, or download the specification and store it as swagger.json in the root of the CLASSPATH so that it is used automatically. Let's use the undertow component to perform all the requests, together with Camel's excellent support for Spring Boot. Here are our dependencies defined in the Maven POM file: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-undertow-starter</artifactId> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rest-swagger-starter</artifactId> </dependency> Start by defining the Undertow component and the RestSwaggerComponent : @Bean public Component petstore(CamelContext camelContext, UndertowComponent undertow) { RestSwaggerComponent petstore = new RestSwaggerComponent(camelContext); petstore.setSpecificationUri("http://petstore.swagger.io/v2/swagger.json"); petstore.setDelegate(undertow); return petstore; } Note Camel's Spring Boot support automatically creates the UndertowComponent Spring bean, and you can configure it using application.properties (or application.yml ) using the prefix camel.component.undertow. . We are defining the petstore component here in order to have a named component in the Camel context that we can use to interact with the PetStore REST API. If this is the only rest-swagger component used, we could configure it in the same manner (using application.properties ). Now in our application we can simply use the ProducerTemplate to invoke PetStore REST methods: @Autowired ProducerTemplate template; String getPetJsonById(int petId) { return template.requestBodyAndHeader("petstore:getPetById", null, "petId", petId, String.class); }
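If the PetStore component is the only rest-swagger component in the application, the bean definition above can be replaced with plain Spring Boot configuration, using the property names from the auto-configuration table. The following application.properties snippet is a sketch of that alternative; the URIs are the same PetStore examples used above and are not required values.
# application.properties - configure the rest-swagger component without a @Bean definition.
camel.component.rest-swagger.specification-uri=http://petstore.swagger.io/v2/swagger.json
camel.component.rest-swagger.component-name=undertow
camel.component.rest-swagger.host=http://petstore.swagger.io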
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rest-swagger</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "rest-swagger:[specificationPath#]operationId", "rest-swagger:specificationUri#operationId", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-undertow-starter</artifactId> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rest-swagger-starter</artifactId> </dependency>", "@Bean public Component petstore(CamelContext camelContext, UndertowComponent undertow) { RestSwaggerComponent petstore = new RestSwaggerComponent(camelContext); petstore.setSpecificationUri(\"http://petstore.swagger.io/v2/swagger.json\"); petstore.setDelegate(undertow); return petstore; }", "@Autowired ProducerTemplate template; String getPetJsonById(int petId) { return template.requestBodyAndHeaders(\"petstore:getPetById\", null, \"petId\", petId); }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/rest-swagger-component
Chapter 8. Viewing and managing Apache Camel applications
Chapter 8. Viewing and managing Apache Camel applications In HawtIO's Camel tab, you can view and manage Apache Camel contexts, routes, and dependencies. You can view the following details: A list of all running Camel contexts Detailed information of each Camel context such as Camel version number and runtime statistics Lists of all routes in each Camel application and their runtime statistics Graphical representation of the running routes along with real-time metrics You can also interact with a Camel application by: Starting and suspending contexts Managing the lifecycle of all Camel applications and their routes, so you can restart, stop, pause, resume, etc. Live tracing and debugging of running routes Browsing and sending messages to Camel endpoints Note The Camel tab is only available when you connect to a container that uses one or more Camel routes. 8.1. Starting, suspending, or deleting a context In the Camel tab's tree view, click Camel Contexts. Check the box next to one or more contexts in the list. Click Start or Suspend. To delete a context: Stop the context. Click the ellipsis icon and then select Delete from the dropdown menu. Note When you delete a context, you remove it from the deployed application. 8.2. Viewing Camel application details In the Camel tab's tree view, click a Camel application. To view a list of application attributes and values, click Attributes . To view a graphical representation of the application attributes, click Chart and then click Edit to select the attributes that you want to see in the chart. To view inflight and blocked exchanges, click Exchanges . To view application endpoints, click Endpoints . You can filter the list by URL , Route ID , and direction . To view, enable, and disable statistics related to the Camel built-in type conversion mechanism that is used to convert message bodies and message headers to different types, click Type Converters . To view and execute JMX operations, such as adding or updating routes from XML or finding all Camel components available in the classpath, click Operations . 8.3. Viewing a list of the Camel routes and interacting with them To view a list of routes : Click the Camel tab. In the tree view, click the application's routes folder: To start, stop, or delete one or more routes : Check the box next to one or more routes in the list. Click Start or Stop . To delete a route, you must first stop it. Then click the ellipsis icon and select Delete from the dropdown menu. Note When you delete a route, you remove it from the deployed application. You can also select a specific route in the tree view and then click the upper-right menu to start, stop, or delete it. To view a graphical diagram of the routes, click Route Diagram . To view inflight and blocked exchanges, click Exchanges . To view endpoints, click Endpoints . You can filter the list by URL, Route ID, and direction. Click Type Converters to view, enable, and disable statistics related to the Camel built-in type conversion mechanism, which is used to convert message bodies and message headers to different types. To interact with a specific route : In the Camel tab's tree view, select a route. To view a list of route attributes and values, click Attributes . To view a graphical representation of the route attributes, click Chart . You can click Edit to select the attributes that you want to see in the chart. To view inflight and blocked exchanges, click Exchanges .
Click Operations to view and execute JMX operations on the route, such as dumping the route as XML or getting the route's Camel ID value. To trace messages through a route : In the Camel tab's tree view, select a route. Select Trace, and then click Start tracing . To send messages to a route : In the Camel tab's tree view, open the context's endpoints folder and then select an endpoint. Click the Send subtab. Configure the message in JSON or XML format. Click Send . Return to the route's Trace tab to view the flow of messages through the route. 8.4. Debugging a route In the Camel tab's tree view, select a route. Select Debug , and then click Start debugging . To add a breakpoint, select a node in the diagram and then click Add breakpoint . A red dot appears in the node: The node is added to the list of breakpoints: Click the down arrow to step to the node or the Resume button to resume running the route. Click the Pause button to suspend all threads for the route. Click Stop debugging when you are done. All breakpoints are cleared.
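The tracing and debugging steps above are easiest to follow against a concrete route. The following is a minimal sketch of a route in the Camel XML DSL, such as one added or updated through the Operations tab; the route ID, timer period, and endpoint URIs are illustrative assumptions, not values required by HawtIO:

<route id="demo-route">
  <!-- fire an exchange every 5 seconds -->
  <from uri="timer:demo?period=5000"/>
  <!-- log each exchange so it is visible while tracing -->
  <log message="Tick from demo-route"/>
  <!-- deliver to a mock endpoint for inspection -->
  <to uri="mock:result"/>
</route>

Selecting a route like this in the tree view and starting tracing should show each exchange moving from the timer endpoint through the log step to mock:result, and the same nodes appear in the diagram as candidate breakpoints in the Debug view.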
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/hawtio_diagnostic_console_guide/viewing-and-managing-apache-camel-applications
8.3.5. Starting New Transaction History
8.3.5. Starting New Transaction History Yum stores the transaction history in a single SQLite database file. To start a new transaction history, run the following command as root : yum history new This creates a new, empty database file in the /var/lib/yum/history/ directory. The old transaction history is kept, but it is not accessible as long as a newer database file is present in the directory.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec2-yum-transaction_history-new
Chapter 8. roxctl CLI command reference
Chapter 8. roxctl CLI command reference 8.1. roxctl Display the available commands and optional parameters for roxctl CLI. You must have an account with administrator privileges to use these commands. Usage USD roxctl [command] [flags] Table 8.1. Available commands Command Description central Commands related to the Central service. cluster Commands related to a cluster. collector Commands related to the Collector service. completion Generate shell completion scripts. declarative-config Manage declarative configuration. deployment Commands related to deployments. helm Commands related to Red Hat Advanced Cluster Security for Kubernetes (RHACS) Helm Charts. image Commands that you can run on a specific image. netpol Commands related to network policies. scanner Commands related to the Scanner service. sensor Deploy RHACS services in secured clusters. version Display the current roxctl version. 8.1.1. roxctl command options The roxctl command supports the following options: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. 8.2. roxctl central Commands related to the Central service. Usage USD roxctl central [command] [flags] Table 8.2. Available commands Command Description backup Create a backup of the Red Hat Advanced Cluster Security for Kubernetes (RHACS) database and the certificates. cert Download the certificate chain for the Central service. db Control the database operations. debug Debug the Central service. 
generate Generate the required YAML configuration files containing the orchestrator objects for the deployment of Central. init-bundles Initialize bundles for Central. login Log in to the Central instance to obtain a token. userpki Manage the user certificate authorization providers. whoami Display information about the current user and their authentication method. 8.2.1. roxctl central command options inherited from the parent command The roxctl central command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl central command. 8.2.2. roxctl central backup Create a backup of the RHACS database and certificates. Usage USD roxctl central backup [flags] Table 8.3. Options Option Description --certs-only Specify to only back up the certificates. When using an external database, this option is used to generate a backup bundle with certificates. The default value is false . --output string Specify where you want to save the backup. The behavior depends on the specified path: If the path is a file path, the backup is written to the file and overwrites it if it already exists. The directory must exist. If the path is a directory, the backup is saved in this directory under the file name that the server specifies. If this argument is omitted, the backup is saved in the current working directory under the file name that the server specifies. 
-t , --timeout duration Specify the timeout for API requests. It represents the maximum duration of a request. The default value is 1h0m0s . 8.2.3. roxctl central cert Download the certificate chain for the Central service. Usage USD roxctl central cert [flags] Table 8.4. Options Option Description --output string Specify the file name to which you want to save the PEM certificate. You can generate a standard output by using - . The default value is - . --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 8.2.4. roxctl central login Login to the Central instance to obtain a token. Usage USD roxctl central login [flags] Table 8.5. Options Option Description -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 5m0s . 8.2.5. roxctl central whoami Display information about the current user and their authentication method. Usage USD roxctl central whoami [flags] Table 8.6. Options Option Description --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 8.2.6. roxctl central db Control the database operations. Usage USD roxctl central db [flags] Table 8.7. Options Option Description -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1h0m0s . 8.2.6.1. roxctl central db restore Restore the RHACS database from a backup. Usage USD roxctl central db restore <file> [flags] 1 1 For <file> , specify the database backup file that you want to restore. Table 8.8. Options Option Description -f , --force If set to true , the restoration is performed without confirmation. The default value is false . --interrupt If set to true , it interrupts the running restore process to allow it to continue. The default value is false . 8.2.6.2. roxctl central db generate Generate a Central database bundle. Usage USD roxctl central db generate [flags] Table 8.9. Options Option Description --debug If set to true , templates are read from the local file system. The default value is false . --debug-path string Specify the path to the Helm templates in your local file system. For more details, run the roxctl central db generate command. --enable-pod-security-policies If set to true , PodSecurityPolicy resources are created. The default value is true . 8.2.6.3. roxctl central db generate k8s Generate Kubernetes YAML files for deploying Central's database components. Usage USD roxctl central db generate k8s [flags] Table 8.10. Options Option Description --central-db-image string Specify the Central database image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. --image-defaults string Specify the default settings for container images. It controls the repositories from which the images are downloaded, the image names and the format of the tags. The default value is development_build . 
--output-dir output directory Specify the directory to which you want to save the deployment bundle. The default value is central-db-bundle . 8.2.6.4. roxctl central db restore cancel Cancel the ongoing Central database restore process. Usage USD roxctl central db restore cancel [flags] Table 8.11. Options Option Description -f , --force If set to true , proceed with the cancellation without confirmation. The default value is false . 8.2.6.5. roxctl central db restore status Display information about the ongoing database restore process. Usage USD roxctl central db restore status [flags] 8.2.6.6. roxctl central db generate k8s pvc Generate Kubernetes YAML files for persistent volume claims (PVCs) in Central. Usage USD roxctl central db generate k8s pvc [flags] Table 8.12. Options Option Description --name string Specify the external volume name for the Central database. The default value is central-db . --size uint32 Specify the external volume size in gigabytes for the Central database. The default value is 100 . --storage-class string Specify the storage class name for the Central database. This is optional if you have a default storage class configured. 8.2.6.7. roxctl central db generate openshift Generate an OpenShift YAML manifest for deploying a Central database instance on a Red Hat OpenShift cluster. Usage USD roxctl central db generate openshift [flags] Table 8.13. Options Option Description --central-db-image string Specify the Central database image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. --image-defaults string Specify the default settings for container images. It controls the repositories from which the images are downloaded, the image names and the format of the tags. The default value is development_build . --openshift-version int Specify the Red Hat OpenShift major version 3 or 4 for the deployment. The default value is 3 . --output-dir output-directory Specify the directory to which you want to save the deployment bundle. The default value is central-db-bundle . 8.2.6.8. roxctl central db generate k8s hostpath Generate a Kubernetes YAML manifest for a database deployment with a hostpath volume type in Central. Usage USD roxctl central db generate k8s hostpath [flags] Table 8.14. Options Option Description --hostpath string Specify the path on the host. The default value is /var/lib/stackrox-central-db . --node-selector-key string Specify the node selector key. Valid values include kubernetes.io and hostname . --node-selector-value string Specify the node selector value. 8.2.6.9. roxctl central db generate openshift pvc Generate an OpenShift YAML manifest for a database deployment with a persistent volume claim (PVC) in Central. Usage USD roxctl central db generate openshift pvc [flags] Table 8.15. Options Option Description --name string Specify the external volume name for the Central database. The default value is central-db . --size uint32 Specify the external volume size in gigabytes for the Central database. The default value is 100 . --storage-class string Specify the storage class name for the Central database. This is optional if you have a default storage class configured. 8.2.6.10. roxctl central db generate openshift hostpath Add a hostpath external volume to the Central database. Usage USD roxctl central db generate openshift hostpath [flags] Table 8.16. Options Option Description --hostpath string Specify the path on the host. The default value is /var/lib/stackrox-central-db . 
--node-selector-key string Specify the node selector key. Valid values include kubernetes.io and hostname . --node-selector-value string Specify the node selector value. 8.2.7. roxctl central debug Debug the Central service. Usage USD roxctl central debug [flags] 8.2.7.1. roxctl central debug db Control the debugging of the database. Usage USD roxctl central debug db [flags] Table 8.17. Options Option Description -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 8.2.7.2. roxctl central debug log Retrieve the current log level. Usage USD roxctl central debug log [flags] Table 8.18. Options Option Description -l , --level string Specify the log level to which you want to set the modules. Valid values include Debug , Info , Warn , Error , Panic , and Fatal . -m , --modules strings Specify the modules to which you want to apply the command. --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Specify the timeout for API requests, which is the maximum duration of a request. The default value is 1m0s . 8.2.7.3. roxctl central debug dump Download a bundle containing the debug information for Central. Usage USD roxctl central debug dump [flags] Table 8.19. Options Option Description --logs If set to true , logs are included in the Central dump. The default value is false . --output-dir string Specify the output directory for the bundle content. The default value is an automatically generated directory name within the current directory. -t , --timeout duration Specify the timeout for API requests, which is the maximum duration of a request. The default value is 5m0s . 8.2.7.4. roxctl central debug db stats Control the statistics of the Central database. Usage USD roxctl central debug db stats [flags] 8.2.7.5. roxctl central debug authz-trace Enable or disable authorization tracing in Central for debugging purposes. Usage USD roxctl central debug authz-trace [flags] Table 8.20. Options Option Description -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 20m0s . 8.2.7.6. roxctl central debug db stats reset Reset the statistics of the Central database. Usage USD roxctl central debug db stats reset [flags] 8.2.7.7. roxctl central debug download-diagnostics Download a bundle containing a snapshot of diagnostic information about the platform. Usage USD roxctl central debug download-diagnostics [flags] Table 8.21. Options Option Description --clusters strings Specify a comma-separated list of the Sensor clusters from which you want to collect the logs. --output-dir string Specify the output directory in which you want to save the diagnostic bundle. --since string Specify the timestamp from which you want to collect the logs from the Sensor clusters. -t , --timeout duration Specify the timeout for API requests, which specifies the maximum duration of a request. The default value is 5m0s . 8.2.8. roxctl central generate Generate the required YAML configuration files that contain the orchestrator objects to deploy Central. Usage USD roxctl central generate [flags] Table 8.22. Options Option Description --backup-bundle string Specify the path to the backup bundle from which you want to restore the keys and certificates. 
--debug If set to true , templates are read from the local file system. The default value is false . --debug-path string Specify the path to Helm templates on your local file system. For more details, run the roxctl central generate --help command. --default-tls-certfile Specify the PEM certificate bundle file that you want to use as the default. --default-tls-keyfile Specify the PEM private key file that you want to use as the default. --enable-pod-security-policies If set to true , PodSecurityPolicy resources are created. The default value is true . -p , --password string Specify the administrator password. The default value is automatically generated. --plaintext-endpoints string Specify the ports or endpoints you want to use for unencrypted exposure as a comma-separated list. 8.2.8.1. roxctl central generate k8s Generate the required YAML configuration files to deploy Central into a Kubernetes cluster. Usage USD roxctl central generate k8s [flags] Table 8.23. Options Option Description --central-db-image string Specify the Central database image you want to use. If not specified, a default value corresponding to the --image-defaults is used. --declarative-config-config-maps strings Specify a list of configuration maps that you want to add as declarative configuration mounts in Central. --declarative-config-secrets strings Specify a list of secrets that you want to add as declarative configuration mounts in Central. --enable-telemetry Specify whether you want to enable telemetry. The default value is false . --image-defaults string Specify the default settings for container images. The specified settings control the repositories from which the images are downloaded, the image names and the format of the tags. The default value is development_build . --istio-support version Generate deployment files that support the specified Istio version. Valid values include 1.0 , 1.1 , 1.2 , 1.3 , 1.4 , 1.5 , 1.6 , and 1.7 . --lb-type load balancer type Specify the method of exposing Central. Valid values include lb , np and none . The default value is none . -i , --main-image string Specify the main image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. --offline Specify whether you want to run RHACS in offline mode, avoiding a connection to the Internet. The default value is false . --output-dir output directory Specify the directory to which you want to save the deployment bundle. The default value is central-bundle . --output-format output format Specify the deployment tool that you want to use. Valid values include kubectl , helm , and helm-values . The default value is kubectl . --scanner-db-image string Specify the Scanner database image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. --scanner-image string Specify the Scanner image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. 8.2.8.2. roxctl central generate k8s pvc Generate Kubernetes YAML files for persistent volume claims (PVCs) in Central. Usage USD roxctl central generate k8s pvc [flags] Table 8.24. Options Option Description --db-name string Specify the external volume name for the Central database. The default value is central-db . --db-size uint32 Specify the external volume size in gigabytes for the Central database. The default value is 100 . --db-storage-class string Specify the storage class name for the Central database. 
This is optional if you have a default storage class configured. 8.2.8.3. roxctl central generate openshift Generate the required YAML configuration files to deploy Central in a Red Hat OpenShift cluster. Usage USD roxctl central generate openshift [flags] Table 8.25. Options Option Description --central-db-image string Specify the Central database image that you want to use. If not specified, a default value is created corresponding to the --image-defaults . --declarative-config-config-maps strings Specify a list of configuration maps that you want to add as declarative configuration mounts in Central. --declarative-config-secrets strings Specify a list of secrets that you want to add as declarative configuration mounts in Central. --enable-telemetry Specify whether you want to enable telemetry. The default value is false . --image-defaults string Specify the default settings for container images. It controls the repositories from which the images are downloaded, the image names and the format of the tags. The default value is development_build . --istio-support version Generate deployment files that support the specified Istio version. Valid values include 1.0 , 1.1 , 1.2 , 1.3 , 1.4 , 1.5 , 1.6 , and 1.7 . --lb-type load balancer type Specify the method of exposing Central. Valid values include route , lb , np and none . The default value is none . -i , --main-image string Specify the main image that you want to use. If not specified, a default value corresponding to --image-defaults is used. --offline Specify whether you want to run RHACS in offline mode, avoiding a connection to the Internet. The default value is false . --openshift-monitoring false|true|auto[=true] Specify integration with Red Hat OpenShift 4 monitoring. The default value is auto . --openshift-version int Specify the Red Hat OpenShift major version 3 or 4 for the deployment. --output-dir output directory Specify the directory to which you want to save the deployment bundle. The default value is central-bundle . --output-format output format Specify the deployment tool that you want to use. Valid values include kubectl , helm and helm-values . The default value is kubectl . --scanner-db-image string Specify the Scanner database image that you want to use. If not specified, a default value corresponding to the --image-defaults is used. --scanner-image string Specify the Scanner image that you want to use. If not specified, a default value corresponding to --image-defaults is used. 8.2.8.4. roxctl central generate interactive Generate interactive resources in Central. Usage USD roxctl central generate interactive [flags] 8.2.8.5. roxctl central generate k8s hostpath Generate a Kubernetes YAML manifest for deploying a Central instance by using the hostpath volume type. Usage USD roxctl central generate k8s hostpath [flags] Table 8.26. Options Option Description --db-hostpath string Specify the path on the host for the Central database. The default value is /var/lib/stackrox-central . --db-node-selector-key string Specify the node selector key for the Central database. Valid values include kubernetes.io and hostname . --db-node-selector-value string Specify the node selector value for the Central database. 8.2.8.6. roxctl central generate openshift pvc Generate a OpenShift YAML manifest for deploying a persistent volume claim (PVC) in Central. Usage USD roxctl central generate openshift pvc [flags] Table 8.27. Options Option Description --db-name string Specify the external volume name for the Central database. 
The default value is central-db . --db-size uint32 Specify the external volume size in gigabytes for the Central database. The default value is 100 . --db-storage-class string Specify the storage class name for the Central database. This is optional if you have a default storage class configured. 8.2.8.7. roxctl central generate openshift hostpath Add a hostpath external volume to the deployment definition in Red Hat OpenShift. Usage USD roxctl central generate openshift hostpath [flags] Table 8.28. Options Option Description --db-hostpath string Specify the path on the host for the Central database. The default value is /var/lib/stackrox-central . --db-node-selector-key string Specify the node selector key. Valid values include kubernetes.io and hostname for the Central database. --db-node-selector-value string Specify the node selector value for the Central database. 8.2.9. roxctl central init-bundles Initialize bundles in Central. Usage USD roxctl central init-bundles [flag] Table 8.29. Options Option Description --retry-timeout duration Specify the timeout after which API requests are retried. A value of 0s means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 8.2.9.1. roxctl central init-bundles list List the available initialization bundles in Central. Usage USD roxctl central init-bundles list [flags] 8.2.9.2. roxctl central init-bundles revoke Revoke one or more cluster initialization bundles in Central. Usage USD roxctl central init-bundles revoke <init_bundle_ID or name> [<init_bundle_ID or name> ...] [flags] 1 1 For <init_bundle_ID or name> , specify the ID or the name of the initialization bundle that you want to revoke. You can provide multiple IDs or names separated by using spaces. 8.2.9.3. roxctl central init-bundles fetch-ca Fetch the certificate authority (CA) bundle from Central. Usage USD roxctl central init-bundles fetch-ca [flags] Table 8.30. Options Option Description --output string Specify the file that you want to use for storing the CA configuration. 8.2.9.4. roxctl central init-bundles generate Generate a new cluster initialization bundle. Usage USD roxctl central init-bundles generate <init_bundle_name> [flags] 1 1 For <init_bundle_name> , specify the name for the initialization bundle you want to generate. Table 8.31. Options Option Description --output string Specify the file you want to use for storing the newly generated initialization bundle in the Helm configuration form. You can generate a standard output by using - . --output-secrets string Specify the file that you want to use for storing the newly generated initialization bundle in Kubernetes secret form. You can generate a standard by using - . 8.2.10. roxctl central userpki Manage the user certificate authorization providers. Usage USD roxctl central userpki [flags] 8.2.10.1. roxctl central userpki list Display all the user certificate authentication providers. Usage USD roxctl central userpki list [flags] Table 8.32. Options Option Description -j , --json Enable the JSON output. The default value is false . --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. 
The default value is 1m0s . 8.2.10.2. roxctl central userpki create Create a new user certificate authentication provider. Usage USD roxctl central userpki create name [flags] Table 8.33. Options Option Description -c , --cert strings Specify the PEM files of the root CA certificates. You can specify several certificate files. --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -r , --role string Specify the minimum access role for users of this provider. -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 8.2.10.3. roxctl central userpki delete Delete a user certificate authentication provider. Usage USD roxctl central userpki delete id|name [flags] Table 8.34. Options Option Description -f , --force If set to true , proceed with the deletion without confirmation. The default value is false . --retry-timeout duration Specify the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Specify the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 8.3. roxctl cluster Commands related to a cluster. Usage USD roxctl cluster [command] [flags] Table 8.35. Available commands Command Description delete Remove Sensor from Central. Table 8.36. Options Option Description --retry-timeout duration Set the retry timeout for API requests. A value of zero means the full request duration is awaited without retry. The default value is 20s . -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 8.3.1. roxctl cluster command options inherited from the parent command The roxctl cluster command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. 
Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl cluster command. 8.3.2. roxctl cluster delete Remove Sensor from Central. Usage USD roxctl cluster delete [flags] Table 8.37. Options Option Description --name string Specify the cluster name to delete. 8.4. roxctl collector Commands related to the Collector service. Usage USD roxctl collector [command] [flags] Table 8.38. Available commands Command Description support-packages Upload support packages for Collector. 8.4.1. roxctl collector command options inherited from the parent command The roxctl collector command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl collector command. 8.4.2. 
roxctl collector support-packages Upload support packages for Collector. Note Support packages are deprecated and have no effect on secured clusters running version 4.5 or later. Support package uploads only affect secured clusters on version 4.4 and earlier. Usage USD roxctl collector support-packages [flags] 8.4.2.1. roxctl collector support-packages upload Upload files from a Collector support package to Central. Usage USD roxctl collector support-packages upload [flags] Table 8.39. Options Option Description --overwrite Specify whether you want to overwrite existing but different files. The default value is false . --retry-timeout duration Set the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Set the timeout for API requests. This option represents the maximum duration of a request. The default value is 1m0s . 8.5. roxctl completion Generate shell completion scripts. Usage USD roxctl completion [bash|zsh|fish|powershell] Table 8.40. Supported shell types Shell type Description bash Generate a completion script for the Bash shell. zsh Generate a completion script for the Zsh shell. fish Generate a completion script for the Fish shell. powershell Generate a completion script for the PowerShell shell. 8.5.1. roxctl completion command options inherited from the parent command The roxctl completion command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. 
Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. 8.6. roxctl declarative-config Manage the declarative configuration. Usage USD roxctl declarative-config [command] [flags] Table 8.41. Available commands Command Description create Create declarative configurations. lint Lint an existing declarative configuration YAML file. 8.6.1. roxctl declarative-config command options inherited from the parent command The roxctl declarative-config command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl declarative-config command. 8.6.2. roxctl declarative-config lint Lint an existing declarative configuration YAML file. Usage USD roxctl declarative-config lint [flags] Table 8.42. Options Option Description --config-map string Read the declarative configuration from the --config-map string . If not specified, the configuration is read from the YAML file specified by using the --file flag. -f , --file string File containing the declarative configuration in YAML format. --namespace string Read the declarative configuration from the --namespace string of the configuration map. If not specified, the namespace specified in the current Kubernetes configuration context is used. --secret string Read the declarative configuration from the specified --secret string . 
If not specified, the configuration is read from the YAML file specified by using the --file flag. 8.6.3. roxctl declarative-config create Create declarative configurations. Usage USD roxctl declarative-config create [flags] Table 8.43. Options Option Description --config-map string Write the declarative configuration YAML in the configuration map. If not specified and the --secret flag is also not specified, the generated YAML is printed in the standard output format. --namespace string Required if you want to write the declarative configuration YAML to a configuration map or secret. If not specified, the default namespace in the current Kubernetes configuration is used. --secret string Write the declarative configuration YAML in the Secret. You must use secrets for sensitive data. If not specified and the --config-map flag is also not specified, the generated YAML is printed in the standard output format. 8.6.3.1. roxctl declarative-config create role Create a declarative configuration for a role. Usage USD roxctl declarative-config create role [flags] Table 8.44. Options Option Description --access-scope string By providing the name, you can specify the referenced access scope. --description string Set a description for the role. --name string Specify the name of the role. --permission-set string By providing the name, you can specify the referenced permission set. 8.6.3.2. roxctl declarative-config create notifier Create a declarative configuration for a notifier. Usage USD roxctl declarative-config create notifier [flags] Table 8.45. Options Option Description --name string Specify the name of the notifier. 8.6.3.3. roxctl declarative-config create access-scope Create a declarative configuration for an access scope. Usage USD roxctl declarative-config create access-scope [flags] Table 8.46. Options Option Description --cluster-label-selector requirement Specify the criteria for creating a label selector based on the cluster's labels. The key-value pairs represent requirements, and you can use this flag multiple times to create a combination of requirements. The default value is [ [ ] ] . For more details, run the roxctl declarative-config create access-scope --help command. --description string Set a description for the access scope. --included included-object Specify a list of clusters and their namespaces that you want to include in the access scope. The default value is [null] . --name string Specify the name of the access scope. --namespace-label-selector requirement Specify the criteria for creating a label selector based on the namespace's labels. Similar to the cluster-label-selector, you can use this flag multiple times for the combination of requirements. For more details, run the roxctl declarative-config create access-scope --help command. 8.6.3.4. roxctl declarative-config create auth-provider Create a declarative configuration for an authentication provider. Usage USD roxctl declarative-config create auth-provider [flags] Table 8.47. Options Option Description --extra-ui-endpoints strings Specify additional user interface (UI) endpoints from which the authentication provider is used. The expected format is <endpoint>:<port> . --groups-key strings Set the keys of the groups that you want to add within the authentication provider. The tuples of key, value and role should have the same length. For more details, run the roxctl declarative-config create auth-provider --help command. 
--groups-role strings Set the role of the groups that you want to add within the authentication provider. The tuples of key, value and role should have the same length. For more details, run the roxctl declarative-config create auth-provider --help command. --groups-value strings Set the values of the groups that you want to add within the authentication provider. The tuples of key, value and role should have the same length. For more details, run the roxctl declarative-config create auth-provider --help command. --minimum-access-role string Set the minimum access role of the authentication provider. You can leave this field empty if you do not want to configure the minimum access role by using the declarative configuration. --name string Specify the name of the authentication provider. --required-attributes stringToString Set a list of attributes that the authentication provider must return during authentication. The default value is [] . --ui-endpoint string Set the UI endpoint from which the authentication provider is used. This is usually the public endpoint where RHACS is available. The expected format is <endpoint>:<port> . 8.6.3.5. roxctl declarative-config create permission-set Create a declarative configuration for a permission set. Usage USD roxctl declarative-config create permission-set [flags] Table 8.48. Options Option Description --description string Set the description of the permission set. --name string Specify the name of the permission set. --resource-with-access stringToString Set a list of resources with their respective access levels. The default value is [] . For more details, run the roxctl declarative-config create permission-set --help command. 8.6.3.6. roxctl declarative-config create notifier splunk Create a declarative configuration for a splunk notifier. Usage USD roxctl declarative-config create notifier splunk [flags] Table 8.49. Options Option Description --audit-logging Enable audit logging. The default value is false . --source-types stringToString Specify Splunk source types as comma-separated key=value pairs. The default value is [] . --splunk-endpoint string Specify the Splunk HTTP endpoint. This is a mandatory option. --splunk-skip-tls-verify Use an insecure connection to Splunk. The default value is false . --splunk-token string Specify the Splunk HTTP token. This is a mandatory option. --truncate int Specify the Splunk truncate limit. The default value is 10000 . 8.6.3.7. roxctl declarative-config create notifier generic Create a declarative configuration for a generic notifier. Usage USD roxctl declarative-config create notifier generic [flags] Table 8.50. Options Option Description --audit-logging Enable audit logging. The default value is false . --extra-fields stringToString Specify additional fields as comma-separated key=value pairs. The default value is [] . --headers stringToString Specify headers as comma-separated key=value pairs. The default value is [] . --webhook-cacert-file string Specify the file name of the endpoint CA certificate in PEM format. --webhook-endpoint string Specify the URL of the webhook endpoint. --webhook-password string Specify the password for basic authentication of the webhook endpoint. No authentication if not specified. Requires --webhook-username . --webhook-skip-tls-verify Skip webhook TLS verification. The default value is false . --webhook-username string Specify the username for basic authentication of the webhook endpoint. No authentication occurs if not specified. Requires --webhook-password . 8.6.3.8. 
roxctl declarative-config create auth-provider iap Create a declarative configuration for an authentication provider with the identity-aware proxy (IAP) identifier. Usage USD roxctl declarative-config create auth-provider iap [flags] Table 8.51. Options Option Description --audience string Specify the target group that you want to validate. 8.6.3.9. roxctl declarative-config create auth-provider oidc Create a declarative configuration for an OpenID Connect (OIDC) authentication provider. Usage USD roxctl declarative-config create auth-provider oidc [flags] Table 8.52. Options Option Description --claim-mappings stringToString Specify a list of non-standard claims from the identity provider (IdP) token that you want to include in the authentication provider's rules. The default value is [] . --client-id string Specify the client ID of the OIDC client. --client-secret string Specify the client secret of the OIDC client. --disable-offline-access Disable the request for the offline_access from the OIDC IdP. You need to use this option if the OIDC IdP limits the number of sessions with the offline_access scope. The default value is false . --issuer string Specify the issuer of the OIDC client. --mode string Specify the callback mode that you want to use. Valid values include auto , post , query and fragment . The default value is auto . 8.6.3.10. roxctl declarative-config create auth-provider saml Create a declarative configuration for a SAML authentication provider. Usage USD roxctl declarative-config create auth-provider saml [flags] Table 8.53. Options Option Description --idp-cert string Specify the file containing the SAML identity provider (IdP) certificate in PEM format. --idp-issuer string Specify the issuer of the IdP. --metadata-url string Specify the metadata URL of the service provider. --name-id-format string Specify the format of the name ID. --sp-issuer string Specify the issuer of the service provider. --sso-url string Specify the URL of the IdP for single sign-on (SSO). 8.6.3.11. roxctl declarative-config create auth-provider userpki Create a declarative configuration for an user PKI authentication provider. Usage USD roxctl declarative-config create auth-provider userpki [flags] Table 8.54. Options Option Description --ca-file string Specify the file containing the certification authorities in PEM format. 8.6.3.12. roxctl declarative-config create auth-provider openshift-auth Create a declarative configuration for an OpenShift Container Platform OAuth authentication provider. Usage USD roxctl declarative-config create auth-provider openshift-auth [flags] 8.7. roxctl deployment Commands related to deployments. Usage USD roxctl deployment [command] [flags] Table 8.55. Available commands Command Description check Check the deployments for violations of the deployment time policy. Table 8.56. Options Option Description -t , --timeout duration Set the timeout for API requests. This option represents the maximum duration of a request. The default value is 10m0s . 8.7.1. roxctl deployment command options inherited from the parent command The roxctl deployment command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. 
Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl deployment command. 8.7.2. roxctl deployment check Check deployments for violations of the deployment time policy. Usage USD roxctl deployment check [flags] Table 8.57. Options Option Description -c , --categories strings Define the policy categories that you want to execute. By default, all policy categories are executed. --cluster string Set the cluster name or ID that you want to use as the context for the evaluation to enable extended deployments with cluster-specific information. --compact-output Print the JSON output in compact form. The default value is false . -f , --file stringArray Specify the YAML files to send to Central for policy evaluation. --force Bypass the Central cache for images and force a new pull from Scanner. The default value is false . --headers strings Define headers that you want to print in the tabular output. The default values include POLICY , SEVERITY , BREAKS DEPLOY , DEPLOYMENT , DESCRIPTION , VIOLATION , and REMEDIATION . --headers-as-comments Print headers as comments in the CSV tabular output. The default value is false . --junit-suite-name string Set the name of the JUnit test suite. The default value is deployment-check . --merge-output Merge duplicate cells in the tabular output. The default value is false . -n , --namespace string Specify a namespace to enhance deployments with context information such as network policies, RBACs and services for deployments that do not have a namespace in their specification. The namespace defined in the specification is not changed. The default value is default . --no-header Do not print headers for a tabular output. 
The default value is false . -o , --output string Choose the output format. Output formats include json , junit , sarif , table , and csv . The default value is table . -r , --retries int Set the number of retries before exiting as an error. The default value is 3 . -d , --retry-delay int Set the time to wait between retries in seconds. The default value is 3 . --row-jsonpath-expressions string Define the JSON path expressions to create a row from the JSON object. For more details, run the roxctl deployment check --help command. 8.8. roxctl helm Commands related to Red Hat Advanced Cluster Security for Kubernetes (RHACS) Helm Charts. Usage USD roxctl helm [command] [flags] Table 8.58. Available commands Command Description derive-local-values Derive local Helm values from the cluster configuration. output Output a Helm chart. 8.8.1. roxctl helm command options inherited from the parent command The roxctl helm command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl helm command. 8.8.2. roxctl helm output Output a Helm chart. Usage USD roxctl helm output <central_services or secured_cluster_services> [flags] 1 1 For <central_services or secured_cluster_services> , specify the path to either the central services or the secured cluster services to generate a Helm chart output. Table 8.59. 
Options Option Description --debug Read templates from the local filesystem. The default value is false . --debug-path string Specify the path to the Helm templates on your local filesystem. For more details, run the roxctl helm output --help command. --image-defaults string Set the default container image settings. Image settings include development_build , stackrox.io , rhacs , and opensource . It influences repositories for image downloads, image names, and tag formats. The default value is development_build . --output-dir string Define the path to the output directory for the Helm chart. The default path is ./stackrox-<chart name>-chart . --remove Remove the output directory if it already exists. The default value is false . 8.8.3. roxctl helm derive-local-values Derive local Helm values from the cluster configuration. Usage USD roxctl helm derive-local-values --output <path> \ 1 <central_services> [flags] 2 1 For the <path> , specify the path where you want to save the generated local values file. 2 For the <central_services> , specify the path to the central services configuration file. Table 8.60. Options Option Description --input string Specify the path to the file or directory containing the YAML input. --output string Define the path to the output file. --output-dir string Define the path to the output directory. --retry-timeout duration Set the timeout after which API requests are retried. The timeout value indicates that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 8.9. roxctl image Commands that you can run on a specific image. Usage USD roxctl image [command] [flags] Table 8.61. Available commands Command Description check Check images for build time policy violations, and report them. scan Scan the specified image, and return the scan results. Table 8.62. Options -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 10m0s . 8.9.1. roxctl image command options inherited from the parent command The roxctl image command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . 
--no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl image command. 8.9.2. roxctl image scan Scan the specified image, and return the scan results. Usage USD roxctl image scan [flags] Table 8.63. Options Option Description --cluster string Specify the cluster name or ID to which you want to delegate the image scan. --compact-output Print JSON output in a compact format. The default value is false . --fail Fail if vulnerabilities have been found. The default value is false . -f , --force Ignore Central's cache and force a fresh re-pull from Scanner. The default value is false . --headers strings Specify the headers to print in a tabular output. The default values include COMPONENT , VERSION , CVE , SEVERITY , and LINK . --headers-as-comments Print headers as comments in a CSV tabular output. The default value is false . -i , --image string Specify the image name and reference to scan. For example, nginx:latest or nginx@sha256:... . -a , --include-snoozed Include snoozed and unsnoozed CVEs in the scan results. The default value is false . --merge-output Merge duplicate cells in a tabular output. The default value is true . --no-header Do not print headers for a tabular output. The default value is false . -o , --output string Specify the output format. Output formats include table , csv , json , and sarif . -r , --retries int Specify the number of retries before exiting as an error. The default value is 3 . -d , --retry-delay int Set the time to wait between retries in seconds. The default value is 3 . --row-jsonpath-expressions string Specify JSON path expressions to create a row from the JSON object. For more details, run the roxctl image scan --help command. --severity strings List of severities to include in the output. Use this to filter for specific severities. The default values include LOW , MODERATE , IMPORTANT , and CRITICAL . 8.9.3. roxctl image check Check images for build time policy violations, and report them. Usage USD roxctl image check [flags] Table 8.64. Options Option Description -c , --categories strings List of the policy categories that you want to execute. By default, all the policy categories are used. --cluster string Define the cluster name or ID that you want to use as the context for evaluation. --compact-output Print JSON output in a compact format. The default value is false . -f , --force Bypass the Central cache for the image and force a new pull from the Scanner. The default value is false . --headers strings Define headers to print in a tabular output. The default values include POLICY , SEVERITY , BREAKS BUILD , DESCRIPTION , VIOLATION , and REMEDIATION . 
--headers-as-comments Print headers as comments in a CSV tabular output. The default value is false . -i , --image string Specify the image name and reference. For example, nginx:latest or nginx@sha256:... ) . --junit-suite-name string Set the name of the JUnit test suite. Default value is image-check . --merge-output Merge duplicate cells in a tabular output. The default value is false . --no-header Do not print headers for a tabular output. The default value is false . -o , --output string Choose the output format. Output formats include junit , sarif , table , csv , and json . The default value is table . -r , --retries int Set the number of retries before exiting as an error. The default value is 3 . -d , --retry-delay int Set the time to wait between retries in seconds. The default value is 3 . --row-jsonpath-expressions string Create a row from the JSON object by using JSON path expression. For more details, run the roxctl image check --help command. --send-notifications Define whether you want to send notifications in the event of violations. The default value is false . 8.10. roxctl netpol Commands related to the network policies. Usage USD roxctl netpol [command] [flags] Table 8.65. Available commands Command Description connectivity Connectivity analysis of the network policy resources. generate Recommend network policies based on the deployment information. 8.10.1. roxctl netpol command options inherited from the parent command The roxctl netpol command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. 
--token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl netpol command. 8.10.2. roxctl netpol generate Recommend network policies based on the deployment information. Usage USD roxctl netpol generate <folder_path> [flags] 1 1 For <folder_path> , specify the path to the directory containing your Kubernetes deployment and service configuration files. Table 8.66. Options Option Description --dnsport uint16 Specify the DNS port that you want to use in the egress rules of synthesized network policies. The default value is 53 . --fail Fail on the first encountered error. The default value is false . -d , --output-dir string Save generated policies into the target folder. -f , --output-file string Save and merge generated policies into a single YAML file. --remove Remove the output path if it already exists. The default value is false . --strict Treat warnings as errors. The default value is false . 8.10.3. roxctl netpol connectivity Commands related to the connectivity analysis of the network policy resources. Usage USD roxctl netpol connectivity [flags] 8.10.3.1. roxctl netpol connectivity map Analyze connectivity based on the network policies and other resources. Usage USD roxctl netpol connectivity map <folder_path> [flags] 1 1 For <folder_path> , specify the path to the directory containing your Kubernetes deployment and service configuration files. Table 8.67. Options Option Description --exposure Enhance the analysis of permitted connectivity by using exposure analysis. The default value is false . --fail Fail on the first encountered error. The default value is false . --focus-workload string Focus on connections of the specified workload name in the output. -f , --output-file string Save the connections list output into a specific file. -o , --output-format string Configure the connections list in a specific format. Supported formats include txt , json , md , dot , and csv . The default value is txt . --remove Remove the output path if it already exists. The default value is false . --save-to-file Define whether you want to save the output of the connection list in the default file. The default value is false . --strict Treat warnings as errors. The default value is false . 8.10.3.2. roxctl netpol connectivity diff Report connectivity differences based on two network policy directories and YAML manifests with workload resources. Usage USD roxctl netpol connectivity diff [flags] Table 8.68. Options Option Description --dir1 string Specify the first directory path of the input resources. This value is mandatory. --dir2 string Specify the second directory path of the input resources that you want to compare with the first directory path. This value is mandatory. --fail Fail on the first encounter. The default value is false . -f , --output-file string Save the output of the connectivity difference command into a specific file. -o , --output-format string Configure the output of the connectivity difference command in a specific format. Supported formats include txt , md , csv . The default value is txt .. --remove Remove the output path if it already exists. The default value is false . --save-to-file Define whether you want to store the output of the connectivity differences in the default file. The default value is false . --strict Treat warnings as errors. The default value is false . 8.11. 
roxctl scanner Commands related to the StackRox Scanner and Scanner V4 services. Usage USD roxctl scanner [command] [flags] Table 8.69. Available commands Command Description download-db Download the offline vulnerability database for StackRox Scanner and Scanner V4. generate Generate the required YAML configuration files to deploy the StackRox Scanner and Scanner V4. upload-db Upload a vulnerability database for the StackRox Scanner and Scanner V4. 8.11.1. roxctl scanner command options inherited from the parent command The roxctl scanner command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl scanner command. 8.11.2. roxctl scanner generate Generate the required YAML configuration files to deploy Scanner. Usage USD roxctl scanner generate [flags] Table 8.70. Options Option Description --cluster-type cluster type Specify the type of cluster on which you want to run Scanner. Cluster types include k8s and openshift . The default value is k8s . --enable-pod-security-policies Create PodSecurityPolicy resources. The default value is true . --istio-support string Generate deployment files that support the specified Istio version. Valid versions include 1.0 , 1.1 , 1.2 , 1.3 , 1.4 , 1.5 , 1.6 , and 1.7 . --output-dir string Specify the output directory for the Scanner bundle. Leave blank to use the default value. 
--retry-timeout duration Set the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . --scanner-image string Specify the Scanner image that you want to use. Leave blank to use the server default. -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 8.11.3. roxctl scanner upload-db Upload a vulnerability database for Scanner. Usage USD roxctl scanner upload-db [flags] Table 8.71. Options Option Description --scanner-db-file string Specify the file containing the dumped Scanner definitions DB. -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 10m0s . 8.11.4. roxctl scanner download-db Download the offline vulnerability database for StackRox Scanner or Scanner V4. This command downloads version-specific offline vulnerability bundles. The system contacts Central to determine the version if one is not specified. If communication fails, the download defaults to the version embedded within roxctl . By default, it will attempt to download the database for the determined version and less-specific variants. For example, if version 4.4.1-extra is specified, downloads will be attempted for the following version variants: 4.4.1-extra 4.4.1 4.4 Usage USD roxctl scanner download-db [flags] Table 8.72. Options Option Description --force Force overwriting the output file if it already exists. The default value is false . --scanner-db-file string Output file to save the vulnerability database to. The default value is the name and path of the remote file that is downloaded. --skip-central Do not contact Central when detecting the version. The default value is false . --skip-variants Do not attempt to process variants of the determined version. The default value is false . -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 10m0s . --version string Download a specific version or version variant of the vulnerability database. By default, the version is automatically detected. 8.12. roxctl sensor Deploy Red Hat Advanced Cluster Security for Kubernetes (RHACS) services in secured clusters. Usage USD roxctl sensor [command] [flags] Table 8.73. Available commands Command Description generate Generate files to deploy RHACS services in secured clusters. generate-certs Download a YAML file with renewed certificates for Sensor, Collector, and Admission controller. get-bundle Download a bundle with the files to deploy RHACS services in a cluster. Table 8.74. Options Option Description --retry-timeout duration Set the timeout after which API requests are retried. A value of zero means that the entire request duration is waited for without retrying. The default value is 20s . -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 1m0s . 8.12.1. roxctl sensor command options inherited from the parent command The roxctl sensor command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. 
Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable. Note These options are applicable to all the sub-commands of the roxctl sensor command. 8.12.2. roxctl sensor generate Generate files to deploy RHACS services in secured clusters. Usage USD roxctl sensor generate [flags] Table 8.75. Options Option Description --admission-controller-disable-bypass Disable the bypass annotations for the admission controller. The default value is false . --admission-controller-enforce-on-creates Dynamic enable for enforcing on object creation in the admission controller. The default value is false . --admission-controller-enforce-on-updates Enable dynamic enforcement of object updates in the admission controller. The default value is false . --admission-controller-listen-on-creates Configure the admission controller webhook to listen to deployment creation. The default value is false . --admission-controller-listen-on-updates Configure the admission controller webhook to listen to deployment updates. The default value is false . --admission-controller-scan-inline Get scans inline when using the admission controller. The default value is false . --admission-controller-timeout int32 Set the timeout in seconds for the admission controller. The default value is 3 . --central string Set the endpoint to which you want to connect Sensor. The default value is central.stackrox:443 . --collection-method collection method Specify the collection method that you want to use for runtime support. Collection methods include none , default , ebpf and core_bpf . The default value is default . --collector-image-repository string Set the image repository that you want to use to deploy Collector. 
If not specified, a default value corresponding to the effective --main-image repository value is derived. --continue-if-exists Continue with downloading the sensor bundle even if the cluster already exists. The default value is false . --create-upgrader-sa Decide whether to create the upgrader service account with cluster-admin privileges to facilitate automated sensor upgrades. The default value is true . --disable-tolerations Disable tolerations for tainted nodes. The default value is false . --enable-pod-security-policies Create PodSecurityPolicy resources. The default value is true . --istio-support string Generate deployment files that support the specified Istio version. Valid versions include 1.0 , 1.1 , 1.2 , 1.3 , 1.4 , 1.5 , 1.6 , 1.7 . --main-image-repository string Specify the image repository that you want to use to deploy Sensor. If not specified, a default value is used. --name string Set the cluster name to identify the cluster. --output-dir string Set the output directory for the bundle contents. The default value is an automatically generated directory name inside the current directory. --slim-collector string[="true"] Use Collector-slim in the deployment bundle. Valid values include auto , true , and false . The default value is auto . -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 5m0s . 8.12.2.1. roxctl sensor generate k8s Generate the required files to deploy RHACS services in a Kubernetes cluster. Usage USD roxctl sensor generate k8s [flags] Table 8.76. Options Option Description --admission-controller-listen-on-events Enable admission controller webhook to listen to Kubernetes events. The default value is true . 8.12.2.2. roxctl sensor generate openshift Generate the required files to deploy RHACS services in a Red Hat OpenShift cluster. Usage USD roxctl sensor generate openshift [flags] Table 8.77. Options Option Description `--admission-controller-listen-on-events false true auto[=true]` Enable or disable the admission controller webhook to listen to Kubernetes events . The default value is auto . `--disable-audit-logs false true auto[=true]` Enable or disable audit log collection for runtime detection. The default value is auto . --openshift-version int Specify the Red Hat OpenShift major version for which you want to generate the deployment files. 8.12.3. roxctl sensor get-bundle Download a bundle with the files to deploy RHACS services into a cluster. Usage USD roxctl sensor get-bundle <cluster_details> [flags] 1 1 For <cluster_details> , specify the cluster name or ID. Table 8.78. Options Option Description --create-upgrader-sa Specify whether to create the upgrader service account with cluster-admin privileges for automated Sensor upgrades. The default value is true . --istio-support string Generate deployment files that support the specified Istio version. Valid versions include 1.0 , 1.1 , 1.2 , 1.3 , 1.4 , 1.5 , 1.6 , and 1.7 . --output-dir string Specify the output directory for the bundle contents. The default value is an automatically generated directory name inside the current directory. --slim-collector string[="true"] Use Collector-slim in the deployment bundle. Valid values include auto , true and false . The default value is auto . -t , --timeout duration Set the timeout for API requests representing the maximum duration of a request. The default value is 5m0s . 8.12.4. 
roxctl sensor generate-certs Download a YAML file with renewed certificates for Sensor, Collector, and Admission controller. Usage USD roxctl sensor generate-certs <cluster_details> [flags] 1 1 For <cluster_details> , specify the cluster name or ID. Table 8.79. Options Option Description --output-dir string Specify the output directory for the YAML file. The default value is . . 8.13. roxctl version Display the current roxctl version. Usage USD roxctl version [flags] 8.13.1. roxctl version command options The roxctl version command supports the following option: Option Description --json Display the extended version information as JSON. The default value is false . 8.13.2. roxctl version command options inherited from the parent command The roxctl version command supports the following options inherited from the parent roxctl command: Option Description --ca string Specify a custom CA certificate file path for secure connections. Alternatively, you can specify the file path by using the ROX_CA_CERT_FILE environment variable. --direct-grpc Set --direct-grpc for improved connection performance. Alternatively, by setting the ROX_DIRECT_GRPC_CLIENT environment variable to true , you can enable direct gRPC . The default value is false . -e , --endpoint string Set the endpoint for the service to contact. Alternatively, you can set the endpoint by using the ROX_ENDPOINT environment variable. The default value is localhost:8443 . --force-http1 Force the use of HTTP/1 for all connections. Alternatively, by setting the ROX_CLIENT_FORCE_HTTP1 environment variable to true , you can force the use of HTTP/1. The default value is false . --insecure Enable insecure connection options. Alternatively, by setting the ROX_INSECURE_CLIENT environment variable to true , you can enable insecure connection options. The default value is false . --insecure-skip-tls-verify Skip the TLS certificate validation. Alternatively, by setting the ROX_INSECURE_CLIENT_SKIP_TLS_VERIFY environment variable to true , you can skip the TLS certificate validation. The default value is false . --no-color Disable the color output. Alternatively, by setting the ROX_NO_COLOR environment variable to true , you can disable the color output. The default value is false . -p , --password string Specify the password for basic authentication. Alternatively, you can set the password by using the ROX_ADMIN_PASSWORD environment variable. --plaintext Use an unencrypted connection. Alternatively, by setting the ROX_PLAINTEXT environment variable to true , you can enable an unencrypted connection. The default value is false . -s , --server-name string Set the TLS server name to use for SNI. Alternatively, you can set the server name by using the ROX_SERVER_NAME environment variable. --token-file string Use the API token provided in the specified file for authentication. Alternatively, you can set the token by using the ROX_API_TOKEN environment variable.
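As a brief, non-normative illustration of how the inherited connection options described throughout this chapter combine with individual sub-commands, the following sequence shows one possible workflow; the endpoint, token file path, image reference, cluster name, and output directory are placeholder values, not requirements:
USD roxctl version --json
USD roxctl image check --image quay.io/example/app:latest --endpoint central.example.com:443 --token-file ./rox-api-token.txt --output json --retries 3
USD roxctl sensor generate-certs my-secured-cluster --endpoint central.example.com:443 --token-file ./rox-api-token.txt --output-dir ./renewed-certs
In this sketch, roxctl version confirms the CLI build, roxctl image check evaluates the image against build time policies and prints the result as JSON, and roxctl sensor generate-certs downloads renewed certificates for the named secured cluster. You can omit the --endpoint and --token-file options by setting the ROX_ENDPOINT and ROX_API_TOKEN environment variables instead, as described in the option tables in this chapter.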
[ "roxctl [command] [flags]", "roxctl central [command] [flags]", "roxctl central backup [flags]", "roxctl central cert [flags]", "roxctl central login [flags]", "roxctl central whoami [flags]", "roxctl central db [flags]", "roxctl central db restore <file> [flags] 1", "roxctl central db generate [flags]", "roxctl central db generate k8s [flags]", "roxctl central db restore cancel [flags]", "roxctl central db restore status [flags]", "roxctl central db generate k8s pvc [flags]", "roxctl central db generate openshift [flags]", "roxctl central db generate k8s hostpath [flags]", "roxctl central db generate openshift pvc [flags]", "roxctl central db generate openshift hostpath [flags]", "roxctl central debug [flags]", "roxctl central debug db [flags]", "roxctl central debug log [flags]", "roxctl central debug dump [flags]", "roxctl central debug db stats [flags]", "roxctl central debug authz-trace [flags]", "roxctl central debug db stats reset [flags]", "roxctl central debug download-diagnostics [flags]", "roxctl central generate [flags]", "roxctl central generate k8s [flags]", "roxctl central generate k8s pvc [flags]", "roxctl central generate openshift [flags]", "roxctl central generate interactive [flags]", "roxctl central generate k8s hostpath [flags]", "roxctl central generate openshift pvc [flags]", "roxctl central generate openshift hostpath [flags]", "roxctl central init-bundles [flag]", "roxctl central init-bundles list [flags]", "roxctl central init-bundles revoke <init_bundle_ID or name> [<init_bundle_ID or name> ...] [flags] 1", "roxctl central init-bundles fetch-ca [flags]", "roxctl central init-bundles generate <init_bundle_name> [flags] 1", "roxctl central userpki [flags]", "roxctl central userpki list [flags]", "roxctl central userpki create name [flags]", "roxctl central userpki delete id|name [flags]", "roxctl cluster [command] [flags]", "roxctl cluster delete [flags]", "roxctl collector [command] [flags]", "roxctl collector support-packages [flags]", "roxctl collector support-packages upload [flags]", "roxctl completion [bash|zsh|fish|powershell]", "roxctl declarative-config [command] [flags]", "roxctl declarative-config lint [flags]", "roxctl declarative-config create [flags]", "roxctl declarative-config create role [flags]", "roxctl declarative-config create notifier [flags]", "roxctl declarative-config create access-scope [flags]", "roxctl declarative-config create auth-provider [flags]", "roxctl declarative-config create permission-set [flags]", "roxctl declarative-config create notifier splunk [flags]", "roxctl declarative-config create notifier generic [flags]", "roxctl declarative-config create auth-provider iap [flags]", "roxctl declarative-config create auth-provider oidc [flags]", "roxctl declarative-config create auth-provider saml [flags]", "roxctl declarative-config create auth-provider userpki [flags]", "roxctl declarative-config create auth-provider openshift-auth [flags]", "roxctl deployment [command] [flags]", "roxctl deployment check [flags]", "roxctl helm [command] [flags]", "roxctl helm output <central_services or secured_cluster_services> [flags] 1", "roxctl helm derive-local-values --output <path> \\ 1 <central_services> [flags] 2", "roxctl image [command] [flags]", "roxctl image scan [flags]", "roxctl image check [flags]", "roxctl netpol [command] [flags]", "roxctl netpol generate <folder_path> [flags] 1", "roxctl netpol connectivity [flags]", "roxctl netpol connectivity map <folder_path> [flags] 1", "roxctl netpol connectivity diff [flags]", "roxctl 
scanner [command] [flags]", "roxctl scanner generate [flags]", "roxctl scanner upload-db [flags]", "roxctl scanner download-db [flags]", "roxctl sensor [command] [flags]", "roxctl sensor generate [flags]", "roxctl sensor generate k8s [flags]", "roxctl sensor generate openshift [flags]", "roxctl sensor get-bundle <cluster_details> [flags] 1", "roxctl sensor generate-certs <cluster_details> [flags] 1", "roxctl version [flags]" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/roxctl_cli/roxctl-cli-command-reference
Chapter 12. Multiple networks
Chapter 12. Multiple networks 12.1. Understanding multiple networks In Kubernetes, container networking is delegated to networking plug-ins that implement the Container Network Interface (CNI). OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. During cluster installation, you configure your default pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. You can define more than one additional network for your cluster, depending on your needs. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing. 12.1.1. Usage scenarios for an additional network You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons: Performance You can send traffic on two different planes to manage how much traffic is along each plane. Security You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers. All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an eth0 interface that is attached to the cluster-wide pod network. You can view the interfaces for a pod by using the oc exec -it <pod_name> -- ip a command. If you add additional network interfaces that use Multus CNI, they are named net1 , net2 , ... , netN . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A CNI configuration inside each of these CRs defines how that interface is created. 12.1.2. Additional networks in OpenShift Container Platform OpenShift Container Platform provides the following CNI plug-ins for creating additional networks in your cluster: bridge : Configure a bridge-based additional network to allow pods on the same host to communicate with each other and the host. host-device : Configure a host-device additional network to allow pods access to a physical Ethernet network device on the host system. ipvlan : Configure an ipvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts, similar to a macvlan-based additional network. Unlike a macvlan-based additional network, each pod shares the same MAC address as the parent physical network interface. macvlan : Configure a macvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. SR-IOV : Configure an SR-IOV based additional network to allow pods to attach to a virtual function (VF) interface on SR-IOV capable hardware on the host system. 12.2. Configuring an additional network As a cluster administrator, you can configure an additional network for your cluster. The following network types are supported: Bridge Host device IPVLAN MACVLAN 12.2.1. Approaches to managing an additional network You can manage the life cycle of an additional network by two approaches. 
The two approaches are mutually exclusive, and you can use only one approach for managing an additional network at a time. For either approach, the additional network is managed by a Container Network Interface (CNI) plug-in that you configure. For an additional network, IP addresses are provisioned through an IP Address Management (IPAM) CNI plug-in that you configure as part of the additional network. The IPAM plug-in supports a variety of IP address assignment approaches, including DHCP and static assignment. Modify the Cluster Network Operator (CNO) configuration: The CNO automatically creates and manages the NetworkAttachmentDefinition object. In addition to managing the object lifecycle, the CNO ensures that a DHCP server is available for an additional network that uses a DHCP-assigned IP address. Applying a YAML manifest: You can manage the additional network directly by creating a NetworkAttachmentDefinition object. This approach allows for the chaining of CNI plug-ins. 12.2.2. Configuration for an additional network attachment An additional network is configured by using the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group. The configuration for the API is described in the following table: Table 12.1. NetworkAttachmentDefinition API fields Field Type Description metadata.name string The name for the additional network. metadata.namespace string The namespace that the object is associated with. spec.config string The CNI plug-in configuration in JSON format. 12.2.2.1. Configuration of an additional network through the Cluster Network Operator The configuration for an additional network attachment is specified as part of the Cluster Network Operator (CNO) configuration. The following YAML describes the configuration parameters for managing an additional network with the CNO: Cluster Network Operator configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { ... } type: Raw 1 An array of one or more additional network configurations. 2 The name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 3 The namespace to create the network attachment in. If you do not specify a value, then the default namespace is used. 4 A CNI plug-in configuration in JSON format. 12.2.2.2. Configuration of an additional network from a YAML manifest The configuration for an additional network is specified from a YAML configuration file, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { ... } 1 The name for the additional network attachment that you are creating. 2 A CNI plug-in configuration in JSON format. 12.2.3. Configurations for additional network types The specific configuration fields for additional networks are described in the following sections. 12.2.3.1. Configuration for a bridge additional network The following object describes the configuration parameters for the bridge CNI plug-in: Table 12.2. Bridge CNI plug-in JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plug-in to configure: bridge . bridge string Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0 .
ipam object The configuration object for the IPAM CNI plug-in. The plug-in manages IP address assignment for the attachment definition. ipMasq boolean Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false . isGateway boolean Set to true to assign an IP address to the bridge. The default value is false . isDefaultGateway boolean Set to true to configure the bridge as the default gateway for the virtual network. The default value is false . If isDefaultGateway is set to true , then isGateway is also set to true automatically. forceAddress boolean Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false , if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is false . hairpinMode boolean Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay . The default value is false . promiscMode boolean Set to true to enable promiscuous mode on the bridge. The default value is false . vlan string Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. mtu string Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. 12.2.3.1.1. bridge configuration example The following example configures an additional network named bridge-net : { "cniVersion": "0.3.1", "name": "work-network", "type": "bridge", "isGateway": true, "vlan": 2, "ipam": { "type": "dhcp" } } 12.2.3.2. Configuration for a host device additional network Note Specify your network device by setting only one of the following parameters: device , hwaddr , kernelpath , or pciBusID . The following object describes the configuration parameters for the host-device CNI plug-in: Table 12.3. Host device CNI plug-in JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plug-in to configure: host-device . device string Optional: The name of the device, such as eth0 . hwaddr string Optional: The device hardware MAC address. kernelpath string Optional: The Linux kernel device path, such as /sys/devices/pci0000:00/0000:00:1f.6 . pciBusID string Optional: The PCI address of the network device, such as 0000:00:1f.6 . ipam object The configuration object for the IPAM CNI plug-in. The plug-in manages IP address assignment for the attachment definition. 12.2.3.2.1. host-device configuration example The following example configures an additional network named hostdev-net : { "cniVersion": "0.3.1", "name": "work-network", "type": "host-device", "device": "eth1", "ipam": { "type": "dhcp" } } 12.2.3.3. Configuration for an IPVLAN additional network The following object describes the configuration parameters for the IPVLAN CNI plug-in: Table 12.4. IPVLAN CNI plug-in JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. 
type string The name of the CNI plug-in to configure: ipvlan . mode string The operating mode for the virtual network. The value must be l2 , l3 , or l3s . The default value is l2 . master string The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. mtu integer Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. ipam object The configuration object for the IPAM CNI plug-in. The plug-in manages IP address assignment for the attachment definition. Do not specify dhcp . Configuring IPVLAN with DHCP is not supported because IPVLAN interfaces share the MAC address with the host interface. 12.2.3.3.1. ipvlan configuration example The following example configures an additional network named ipvlan-net : { "cniVersion": "0.3.1", "name": "work-network", "type": "ipvlan", "master": "eth1", "mode": "l3", "ipam": { "type": "static", "addresses": [ { "address": "192.168.10.10/24" } ] } } 12.2.3.4. Configuration for a MACVLAN additional network The following object describes the configuration parameters for the macvlan CNI plug-in: Table 12.5. MACVLAN CNI plug-in JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plug-in to configure: macvlan . mode string Configures traffic visibility on the virtual network. Must be either bridge , passthru , private , or vepa . If a value is not provided, the default value is bridge . master string The Ethernet, bonded, or VLAN interface to associate with the virtual interface. If a value is not specified, then the host system's primary Ethernet interface is used. mtu string The maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. ipam object The configuration object for the IPAM CNI plug-in. The plug-in manages IP address assignment for the attachment definition. 12.2.3.4.1. macvlan configuration example The following example configures an additional network named macvlan-net : { "cniVersion": "0.3.1", "name": "macvlan-net", "type": "macvlan", "master": "eth1", "mode": "bridge", "ipam": { "type": "dhcp" } } 12.2.4. Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plug-in provides IP addresses for other CNI plug-ins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plug-in. 12.2.4.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 12.6. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 12.7. 
ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 12.8. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 12.9. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 12.2.4.2. Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 12.10. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 12.2.4.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plug-in allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 12.11. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 12.2.5. Creating an additional network attachment with the Cluster Network Operator The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition object automatically. Important Do not edit the NetworkAttachmentDefinition objects that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.
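Before you follow the procedure in this section, it can help to see how a plug-in configuration and an IPAM configuration fit together in a single additionalNetworks entry. The following is a minimal, illustrative sketch that combines the macvlan plug-in with Whereabouts address assignment; the attachment name, namespace, master interface, and address range are example values only and are not required by the procedure:
apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: macvlan-whereabouts-net namespace: project1 type: Raw rawCNIConfig: |- { "cniVersion": "0.3.1", "name": "macvlan-whereabouts-net", "type": "macvlan", "master": "eth1", "mode": "bridge", "ipam": { "type": "whereabouts", "range": "192.0.2.0/27", "exclude": [ "192.0.2.1/32" ] } }
In this sketch, each pod that attaches to the network receives a unique IP address from the 192.0.2.0/27 range, excluding 192.0.2.1/32 , without requiring a DHCP server, following the Whereabouts behavior described earlier in this section.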
Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To edit the CNO configuration, enter the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR that you are creating by adding the configuration for the additional network that you are creating, as in the following example CR. apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: - name: tertiary-net namespace: project2 type: Raw rawCNIConfig: |- { "cniVersion": "0.3.1", "name": "tertiary-net", "type": "ipvlan", "master": "eth1", "mode": "l2", "ipam": { "type": "static", "addresses": [ { "address": "192.168.1.23/24" } ] } } Save your changes and quit the text editor to commit your changes. Verification Confirm that the CNO created the NetworkAttachmentDefinition object by running the following command. There might be a delay before the CNO creates the object. USD oc get network-attachment-definitions -n <namespace> where: <namespace> Specifies the namespace for the network attachment that you added to the CNO configuration. Example output NAME AGE test-network-1 14m 12.2.6. Creating an additional network attachment by applying a YAML manifest Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a YAML file with your additional network configuration, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: -net spec: config: |- { "cniVersion": "0.3.1", "name": "work-network", "type": "host-device", "device": "eth1", "ipam": { "type": "dhcp" } } To create the additional network, enter the following command: USD oc apply -f <file>.yaml where: <file> Specifies the name of the file contained the YAML manifest. 12.3. About virtual routing and forwarding 12.3.1. About virtual routing and forwarding Virtual routing and forwarding (VRF) devices combined with IP rules provide the ability to create virtual routing and forwarding domains. VRF reduces the number of permissions needed by CNF, and provides increased visibility of the network topology of secondary networks. VRF is used to provide multi-tenancy functionality, for example, where each tenant has its own unique routing tables and requires different default gateways. Processes can bind a socket to the VRF device. Packets through the binded socket use the routing table associated with the VRF device. An important feature of VRF is that it impacts only OSI model layer 3 traffic and above so L2 tools, such as LLDP, are not affected. This allows higher priority IP rules such as policy based routing to take precedence over the VRF device rules directing specific traffic. 12.3.1.1. Benefits of secondary networks for pods for telecommunications operators In telecommunications use cases, each CNF can potentially be connected to multiple different networks sharing the same address space. These secondary networks can potentially conflict with the cluster's main network CIDR. Using the CNI VRF plug-in, network functions can be connected to different customers' infrastructure using the same IP address, keeping different customers isolated. IP addresses are overlapped with OpenShift Container Platform IP space. The CNI VRF plug-in also reduces the number of permissions needed by CNF and increases the visibility of network topologies of secondary networks. 12.4. 
12.4. Attaching a pod to an additional network As a cluster user, you can attach a pod to an additional network. 12.4.1. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. When a pod is created, additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace between entries. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/networks-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 12.4.1.1. Specifying pod-specific addressing and routing options When attaching a pod to an additional network, you may want to specify further properties about that network in a particular pod. This allows you to change some aspects of routing, as well as specify static IP addresses and MAC addresses. To accomplish this, you can use the JSON formatted annotations. Prerequisites The pod must be in the same namespace as the additional network. Install the OpenShift Command-line Interface ( oc ). You must log in to the cluster. Procedure To add a pod to an additional network while specifying addressing and/or routing options, complete the following steps: Edit the Pod resource definition. If you are editing an existing Pod resource, run the following command to edit its definition in the default editor.
Replace <name> with the name of the Pod resource to edit. USD oc edit pod <name> In the Pod resource definition, add the k8s.v1.cni.cncf.io/networks parameter to the pod metadata mapping. The k8s.v1.cni.cncf.io/networks accepts a JSON string of a list of objects that reference the name of NetworkAttachmentDefinition custom resource (CR) names in addition to specifying additional properties. metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1 1 Replace <network> with a JSON object as shown in the following examples. The single quotes are required. In the following example the annotation specifies which network attachment will have the default route, using the default-route parameter. apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: ' { "name": "net1" }, { "name": "net2", 1 "default-route": ["192.0.2.1"] 2 }' spec: containers: - name: example-pod command: ["/bin/bash", "-c", "sleep 2000000000000"] image: centos/tools 1 The name key is the name of the additional network to associate with the pod. 2 The default-route key specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one default-route key is specified, this will cause the pod to fail to become active. The default route will cause any traffic that is not specified in other routes to be routed to the gateway. Important Setting the default route to an interface other than the default network interface for OpenShift Container Platform may cause traffic that is anticipated for pod-to-pod traffic to be routed over another interface. To verify the routing properties of a pod, the oc command may be used to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip route Note You may also reference the pod's k8s.v1.cni.cncf.io/networks-status to see which additional network has been assigned the default route, by the presence of the default-route key in the JSON-formatted list of objects. To set a static IP address or MAC address for a pod you can use the JSON formatted annotations. This requires you create networks that specifically allow for this functionality. This can be specified in a rawCNIConfig for the CNO. Edit the CNO CR by running the following command: USD oc edit networks.operator.openshift.io cluster The following YAML describes the configuration parameters for the CNO: Cluster Network Operator YAML configuration name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 ... }' type: Raw 1 Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 2 Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used. 3 Specify the CNI plug-in configuration in JSON format, which is based on the following template. The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plug-in: macvlan CNI plug-in JSON configuration object using static IP and MAC address { "cniVersion": "0.3.1", "name": "<name>", 1 "plugins": [{ 2 "type": "macvlan", "capabilities": { "ips": true }, 3 "master": "eth0", 4 "mode": "bridge", "ipam": { "type": "static" } }, { "capabilities": { "mac": true }, 5 "type": "tuning" }] } 1 Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace . 
2 Specifies an array of CNI plug-in configurations. The first object specifies a macvlan plug-in configuration and the second object specifies a tuning plug-in configuration. 3 Specifies that a request is made to enable the static IP address functionality of the CNI plug-in runtime configuration capabilities. 4 Specifies the interface that the macvlan plug-in uses. 5 Specifies that a request is made to enable the static MAC address functionality of a CNI plug-in. The above network attachment can be referenced in a JSON formatted annotation, along with keys to specify which static IP and MAC address will be assigned to a given pod. Edit the pod with: USD oc edit pod <name> macvlan CNI plug-in JSON configuration object using static IP and MAC address apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "<name>", 1 "ips": [ "192.0.2.205/24" ], 2 "mac": "CA:FE:C0:FF:EE:00" 3 } ]' 1 Use the <name> as provided when creating the rawCNIConfig above. 2 Provide an IP address including the subnet mask. 3 Provide the MAC address. Note Static IP addresses and MAC addresses do not have to be used at the same time, you may use them individually, or together. To verify the IP address and MAC properties of a pod with additional networks, use the oc command to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip a 12.5. Removing a pod from an additional network As a cluster user you can remove a pod from an additional network. 12.5.1. Removing a pod from an additional network You can remove a pod from an additional network only by deleting the pod. Prerequisites An additional network is attached to the pod. Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure To delete the pod, enter the following command: USD oc delete pod <name> -n <namespace> <name> is the name of the pod. <namespace> is the namespace that contains the pod. 12.6. Editing an additional network As a cluster administrator you can modify the configuration for an existing additional network. 12.6.1. Modifying an additional network attachment definition As a cluster administrator, you can make changes to an existing additional network. Any existing pods attached to the additional network will not be updated. Prerequisites You have configured an additional network for your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To edit an additional network for your cluster, complete the following steps: Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor: USD oc edit networks.operator.openshift.io cluster In the additionalNetworks collection, update the additional network with your changes. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the CNO updated the NetworkAttachmentDefinition object by running the following command. Replace <network-name> with the name of the additional network to display. There might be a delay before the CNO updates the NetworkAttachmentDefinition object to reflect your changes. 
USD oc get network-attachment-definitions <network-name> -o yaml For example, the following console output displays a NetworkAttachmentDefinition object that is named net1 : USD oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}' { "cniVersion": "0.3.1", "type": "macvlan", "master": "ens5", "mode": "bridge", "ipam": {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} } 12.7. Removing an additional network As a cluster administrator you can remove an additional network attachment. 12.7.1. Removing an additional network attachment definition As a cluster administrator, you can remove an additional network from your OpenShift Container Platform cluster. The additional network is not removed from any pods it is attached to. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To remove an additional network from your cluster, complete the following steps: Edit the Cluster Network Operator (CNO) in your default text editor by running the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR by removing the configuration from the additionalNetworks collection for the network attachment definition you are removing. apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1 1 If you are removing the configuration mapping for the only additional network attachment definition in the additionalNetworks collection, you must specify an empty collection. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the additional network CR was deleted by running the following command: USD oc get network-attachment-definition --all-namespaces 12.8. Assigning a secondary network to a VRF Important CNI VRF plug-in is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 12.8.1. Assigning a secondary network to a VRF As a cluster administrator, you can configure an additional network for your VRF domain by using the CNI VRF plug-in. The virtual network created by this plug-in is associated with a physical interface that you specify. Note Applications that use VRFs need to bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. SO_BINDTODEVICE binds the socket to a device that is specified in the passed interface name, for example, eth1 . To use SO_BINDTODEVICE , the application must have CAP_NET_RAW capabilities. 12.8.1.1. Creating an additional network attachment with the CNI VRF plug-in The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. 
Doing so might disrupt network traffic on your additional network. To create an additional network attachment with the CNI VRF plug-in, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift cluster as a user with cluster-admin privileges. Procedure Create the Network custom resource (CR) for the additional network attachment and insert the rawCNIConfig configuration for the additional network, as in the following example CR. Save the YAML as the file additional-network-attachment.yaml . apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "macvlan-vrf", "plugins": [ 1 { "type": "macvlan", 2 "master": "eth1", "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.23/24" } ] } }, { "type": "vrf", "vrfname": "example-vrf-name", 3 "table": 1001 4 }] }' 1 plugins must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration. 2 type must be set to vrf . 3 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. 4 Optional. table is the routing table ID. By default, the tableid parameter is used. If it is not specified, the CNI assigns a free routing table ID to the VRF. Note VRF functions correctly only when the resource is of type netdevice . Create the Network resource: USD oc create -f additional-network-attachment.yaml Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-network-1 . USD oc get network-attachment-definitions -n <namespace> Example output NAME AGE additional-network-1 14m Note There might be a delay before the CNO creates the CR. Verifying that the additional VRF network attachment is successful To verify that the VRF CNI is correctly configured and the additional network attachment is attached, do the following: Create a network that uses the VRF CNI. Assign the network to a pod. Verify that the pod network attachment is connected to the VRF additional network. Remote shell into the pod and run the following command: USD ip vrf show Example output Name Table ----------------------- red 10 Confirm the VRF interface is master of the secondary interface: USD ip link Example output 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode
[ "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { } type: Raw", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"bridge\", \"isGateway\": true, \"vlan\": 2, \"ipam\": { \"type\": \"dhcp\" } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"host-device\", \"device\": \"eth1\", \"ipam\": { \"type\": \"dhcp\" } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l3\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.10.10/24\" } ] } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-net\", \"type\": \"macvlan\", \"master\": \"eth1\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: - name: tertiary-net namespace: project2 type: Raw rawCNIConfig: |- { \"cniVersion\": \"0.3.1\", \"name\": \"tertiary-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l2\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.1.23/24\" } ] } }", "oc get network-attachment-definitions -n <namespace>", "NAME AGE test-network-1 14m", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"host-device\", \"device\": \"eth1\", \"ipam\": { \"type\": \"dhcp\" } }", "oc apply -f <file>.yaml", "metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 
1", "metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc create -f <name>.yaml", "oc get pod <name> -o yaml", "oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:", "oc edit pod <name>", "metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: ' { \"name\": \"net1\" }, { \"name\": \"net2\", 1 \"default-route\": [\"192.0.2.1\"] 2 }' spec: containers: - name: example-pod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: centos/tools", "oc exec -it <pod_name> -- ip route", "oc edit networks.operator.openshift.io cluster", "name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 }' type: Raw", "{ \"cniVersion\": \"0.3.1\", \"name\": \"<name>\", 1 \"plugins\": [{ 2 \"type\": \"macvlan\", \"capabilities\": { \"ips\": true }, 3 \"master\": \"eth0\", 4 \"mode\": \"bridge\", \"ipam\": { \"type\": \"static\" } }, { \"capabilities\": { \"mac\": true }, 5 \"type\": \"tuning\" }] }", "oc edit pod <name>", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"<name>\", 1 \"ips\": [ \"192.0.2.205/24\" ], 2 \"mac\": \"CA:FE:C0:FF:EE:00\" 3 } ]'", "oc exec -it <pod_name> -- ip a", "oc delete pod <name> -n <namespace>", "oc edit networks.operator.openshift.io cluster", "oc get network-attachment-definitions <network-name> -o yaml", "oc get network-attachment-definitions net1 -o go-template='{{printf \"%s\\n\" .spec.config}}' { \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens5\", \"mode\": \"bridge\", \"ipam\": {\"type\":\"static\",\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.128.2.1\"}],\"addresses\":[{\"address\":\"10.128.2.100/23\",\"gateway\":\"10.128.2.1\"}],\"dns\":{\"nameservers\":[\"172.30.0.10\"],\"domain\":\"us-west-2.compute.internal\",\"search\":[\"us-west-2.compute.internal\"]}} }", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1", "oc get network-attachment-definition --all-namespaces", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-vrf\", \"plugins\": [ 1 { \"type\": \"macvlan\", 2 \"master\": \"eth1\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.23/24\" } ] } }, { \"type\": \"vrf\", \"vrfname\": \"example-vrf-name\", 3 \"table\": 1001 4 }] }'", "oc create -f additional-network-attachment.yaml", "oc get network-attachment-definitions -n <namespace>", "NAME AGE additional-network-1 14m", "ip vrf show", "Name Table ----------------------- red 10", "ip link", "5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/multiple-networks
Chapter 2. sVirt
Chapter 2. sVirt sVirt is a technology included in Red Hat Enterprise Linux 6 that integrates SELinux and virtualization. sVirt applies Mandatory Access Control (MAC) to improve security when using guest virtual machines. This integrated technology improves security and hardens the system against bugs in the hypervisor. It is particularly helpful in preventing attacks on the host physical machine or on another guest virtual machine. This chapter describes how sVirt integrates with virtualization technologies in Red Hat Enterprise Linux 6. Non-virtualized Environments In a non-virtualized environment, host physical machines are separated from each other physically and each host physical machine has a self-contained environment, consisting of services such as a web server or a DNS server. These services communicate directly with their own user space, the host physical machine's kernel, and the physical hardware, offering their services directly to the network. The following image represents a non-virtualized environment: User Space - memory area where all user mode applications and some drivers execute. Web App (web application server) - delivers web content that can be accessed through a browser. Host Kernel - strictly reserved for running the host physical machine's privileged kernel, kernel extensions, and most device drivers. DNS Server - stores DNS records allowing users to access web pages using logical names instead of IP addresses. Virtualized Environments In a virtualized environment, several virtual operating systems can run on a single kernel residing on a host physical machine. The following image represents a virtualized environment: 2.1. Security and Virtualization When services are not virtualized, machines are physically separated. Any exploit is usually contained to the affected machine, with the obvious exception of network attacks. When services are grouped together in a virtualized environment, extra vulnerabilities emerge in the system. If there is a security flaw in the hypervisor that can be exploited by a guest virtual machine, this guest virtual machine may be able to not only attack the host physical machine, but also other guest virtual machines running on that host physical machine. These attacks can extend beyond the guest virtual machine and could expose other guest virtual machines to an attack as well. sVirt is an effort to isolate guest virtual machines and limit their ability to launch further attacks if exploited. This is demonstrated in the following image, where an attack cannot break out of the guest virtual machine and invade other guest virtual machines: SELinux introduces a pluggable security framework for virtualized instances in its implementation of Mandatory Access Control (MAC). The sVirt framework allows guest virtual machines and their resources to be uniquely labeled. Once labeled, rules can be applied which can reject access between different guest virtual machines.
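As a minimal illustration of this labeling, the qemu-kvm process for a running guest and that guest's disk image share a unique MCS category pair. The category numbers, process ID, and image path shown below are examples only; libvirt typically chooses the category pair dynamically for each guest:
USD ps -eZ | grep qemu-kvm
system_u:system_r:svirt_t:s0:c87,c520 3101 ? 00:00:01 qemu-kvm
USD ls -Z /var/lib/libvirt/images/guest1.img
-rw-------. root root system_u:object_r:svirt_image_t:s0:c87,c520 /var/lib/libvirt/images/guest1.img
Because the categories on the svirt_t process match the categories on the svirt_image_t file, the guest can use its own image, while SELinux rejects access to resources labeled with a different category pair that belong to another guest.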
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-svirt