Chapter 17. Geo-replication
Chapter 17. Geo-replication Geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirect for clients. Deployments of Red Hat Quay with geo-replication are supported on standalone and Operator deployments. 17.1. Geo-replication features When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Red Hat Quay instance. This is typically the nearest storage backend within the region. After the initial push, image data will be replicated in the background to the other storage engines. The list of replication locations is configurable, and those locations can be different storage backends. An image pull will always use the closest available storage engine to maximize pull performance. If replication has not been completed yet, the pull will use the source storage backend instead. 17.2. Geo-replication requirements and constraints In geo-replicated setups, Red Hat Quay requires that all regions are able to read and write to all other regions' object storage. Object storage must be geographically accessible by all other regions. In case of an object storage system failure at one geo-replicating site, that site's Red Hat Quay deployment must be shut down so that clients are redirected to the remaining site with intact storage systems by a global load balancer. Otherwise, clients will experience pull and push failures. Red Hat Quay has no internal awareness of the health or availability of the connected object storage system. You must configure a global load balancer (LB) to monitor the health of your distributed system and to route traffic to different sites based on their storage status; a minimal health check sketch is shown at the end of this section. To check the status of your geo-replication deployment, you must use the /health/endtoend endpoint, which is used for global health monitoring. You must configure the redirect manually using the /health/endtoend endpoint. The /health/instance endpoint only checks local instance health. If the object storage system of one site becomes unavailable, there will be no automatic redirect to the remaining storage system, or systems, of the remaining site, or sites. Geo-replication is asynchronous. The permanent loss of a site incurs the loss of the data that has been saved in that site's object storage system but has not yet been replicated to the remaining sites at the time of failure. A single database, and therefore all metadata and Red Hat Quay configuration, is shared across all regions. Geo-replication does not replicate the database. In the event of an outage, Red Hat Quay with geo-replication enabled will not fail over to another database. A single Redis cache is shared across the entire Red Hat Quay setup and must be accessible by all Red Hat Quay pods. The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable. Geo-replication requires object storage in each region. It does not work with local storage. Each region must be able to access every storage engine in every region, which requires a network path. Alternatively, the storage proxy option can be used. The entire storage backend, for example, all blobs, is replicated. Repository mirroring, by contrast, can be limited to a repository or an image. All Red Hat Quay instances must share the same entrypoint, typically through a load balancer. All Red Hat Quay instances must have the same set of superusers, as they are defined inside the common configuration file. Geo-replication requires your Clair configuration to be set to unmanaged. An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Red Hat Quay Operator must communicate with the same database. For more information, see Advanced Clair configuration. Geo-replication requires SSL/TLS certificates and keys. For more information, see Proof of concept deployment using SSL/TLS certificates. If the above requirements cannot be met, you should instead use two or more distinct Red Hat Quay deployments and take advantage of repository mirroring functions.
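The requirements above call for a global load balancer that actively checks each site's /health/endtoend endpoint and routes clients away from a site whose storage has failed. The following is a minimal, illustrative HAProxy sketch of that idea, modeled on the load balancer configuration style used elsewhere in this documentation set; the site hostnames, timings, and the active/backup arrangement are assumptions, not a prescribed configuration:
listen quay-georep-443
    bind *:443
    mode tcp
    option httpchk GET /health/endtoend HTTP/1.0
    server quay-us quay-us.example.com:443 verify none check check-ssl inter 10s fall 3 rise 2
    server quay-eu quay-eu.example.com:443 verify none check check-ssl inter 10s fall 3 rise 2 backup
In practice, a geographically aware load balancer or DNS-based traffic manager is typically used so that each client is routed to its nearest healthy site rather than to a single primary.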
17.2.1. Enabling storage replication for standalone Red Hat Quay Use the following procedure to enable storage replication on Red Hat Quay. Procedure Update your config.yaml file to include the storage engines to which data will be replicated. You must list all storage engines to be used: # ... FEATURE_STORAGE_REPLICATION: true # ... DISTRIBUTED_STORAGE_CONFIG: usstorage: - RHOCSStorage - access_key: <access_key> bucket_name: <example_bucket> hostname: my.noobaa.hostname is_secure: false port: "443" secret_key: <secret_key> storage_path: /datastorage/registry eustorage: - S3Storage - host: s3.amazon.com port: "443" s3_access_key: <access_key> s3_bucket: <example bucket> s3_secret_key: <secret_key> storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage # ... Optional. If complete replication of all images to all storage engines is required, you can replicate images to the storage engine by manually setting the DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS field. This ensures that all images are replicated to that storage engine. For example: # ... DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage # ... Note To enable per-namespace replication, contact Red Hat Quay support. After adding storage and enabling Replicate to storage engine by default for geo-replication, you must sync existing image data across all storage. To do this, you must enter the container by running the following command: $ podman exec -it <container_id> To sync the content after adding new storage, enter the following commands: # scl enable python27 bash # python -m util.backfillreplication Note This is a one-time operation to sync content after adding new storage. 17.2.2. Run Red Hat Quay with storage preferences Copy the config.yaml file to all machines running Red Hat Quay. For each machine in each region, add a QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable with the preferred storage engine for the region in which the machine is running. For example, for a machine running in Europe with the config directory on the host available from $QUAY/config: $ sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v $QUAY/config:/conf/stack:Z -e QUAY_DISTRIBUTED_STORAGE_PREFERENCE=europestorage registry.redhat.io/quay/quay-rhel8:v3.13.3 Note The value of the environment variable specified must match the name of a Location ID as defined in the config panel. Restart all Red Hat Quay containers.
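One way to confirm that the override took effect, assuming the container was started with --name=quay as in the example above (the container name is otherwise an assumption), is to read the environment variable back from the running container:
$ podman exec -it quay env | grep QUAY_DISTRIBUTED_STORAGE_PREFERENCE
QUAY_DISTRIBUTED_STORAGE_PREFERENCE=europestorage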
17.2.3. Removing a geo-replicated site from your standalone Red Hat Quay deployment By using the following procedure, Red Hat Quay administrators can remove sites in a geo-replicated setup. Prerequisites You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage. Each site has its own Organization, Repository, and image tags. Procedure Sync the blobs between all of your defined sites by running the following command: $ python -m util.backfillreplication Warning Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites. Complete this step before proceeding. In your Red Hat Quay config.yaml file for site usstorage, remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site. Enter the following command to obtain a list of running containers: $ podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 92c5321cde38 registry.redhat.io/rhel8/redis-5:1 run-redis 11 days ago Up 11 days ago 0.0.0.0:6379->6379/tcp redis 4e6d1ecd3811 registry.redhat.io/rhel8/postgresql-13:1-109 run-postgresql 33 seconds ago Up 34 seconds ago 0.0.0.0:5432->5432/tcp postgresql-quay d2eadac74fda registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.9.0-131 registry 4 seconds ago Up 4 seconds ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay Enter the following command to execute a shell inside of the PostgreSQL container: $ podman exec -it postgresql-quay -- /bin/bash Enter psql by running the following command: bash-4.4$ psql Enter the following command to reveal a list of sites in your geo-replicated deployment: quay=# select * from imagestoragelocation; Example output id | name ----+------------------- 1 | usstorage 2 | eustorage Enter the following command to exit the PostgreSQL CLI and return to the bash-4.4 shell: \q Enter the following command to permanently remove the eustorage site: Important The following action cannot be undone. Use with caution. bash-4.4$ python -m util.removelocation eustorage Example output WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage 17.2.4. Setting up geo-replication on OpenShift Container Platform Use the following procedure to set up geo-replication on OpenShift Container Platform. Procedure Deploy a PostgreSQL instance for Red Hat Quay. Log in to the database by entering the following command: psql -U <username> -h <hostname> -p <port> -d <database_name> Create a database for Red Hat Quay named quay. For example: CREATE DATABASE quay; Enable the pg_trgm extension inside the database: \c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm; Deploy a Redis instance: Note Deploying a Redis instance might be unnecessary if your cloud provider has its own service. Deploying a Redis instance is required if you are leveraging Builders. Deploy a VM for Redis. Verify that it is accessible from the clusters where Red Hat Quay is running. Port 6379/TCP must be open. Run Redis inside the instance: $ sudo dnf install -y podman $ podman run -d --name redis -p 6379:6379 redis Create two object storage backends, one for each cluster. Ideally, one object storage bucket will be close to the first, or primary, cluster, and the other will run closer to the second, or secondary, cluster. Deploy the clusters with the same config bundle, using environment variable overrides to select the appropriate storage backend for an individual cluster. Configure a load balancer to provide a single entry point to the clusters.
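Before deploying the clusters with the shared config bundle, you might want to confirm that the pg_trgm extension created earlier is actually present in the quay database. A minimal check from psql is shown below, with the connection parameters as placeholders; if the extension was created, the query returns one row:
$ psql -U <username> -h <hostname> -p <port> -d quay -c "SELECT extname FROM pg_extension WHERE extname = 'pg_trgm';"
 extname
---------
 pg_trgm
(1 row)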
17.2.4.1. Configuring geo-replication for Red Hat Quay on OpenShift Container Platform Use the following procedure to configure geo-replication for Red Hat Quay on OpenShift Container Platform. Procedure Create a config.yaml file that is shared between clusters. This config.yaml file contains the details for the common PostgreSQL, Redis, and storage backends: Geo-replication config.yaml file SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1 DB_CONNECTION_ARGS: autorollback: true threadlocals: true DB_URI: postgresql://postgres:[email protected]:5432/quay 2 BUILDLOGS_REDIS: host: 10.19.0.2 port: 6379 USER_EVENTS_REDIS: host: 10.19.0.2 port: 6379 DATABASE_SECRET_KEY: 0ce4f796-c295-415b-bf9d-b315114704b8 DISTRIBUTED_STORAGE_CONFIG: usstorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQABCDEFG bucket_name: georep-test-bucket-0 secret_key: AYWfEaxX/u84XRA2vUX5C987654321 storage_path: /quaygcp eustorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQWERTYUIOP bucket_name: georep-test-bucket-1 secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678 storage_path: /quaygcp DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage FEATURE_STORAGE_REPLICATION: true 1 A proper SERVER_HOSTNAME must be used for the route and must match the hostname of the global load balancer. 2 To retrieve the configuration file for a Clair instance deployed using the OpenShift Container Platform Operator, see Retrieving the Clair config. Create the configBundleSecret by entering the following command: $ oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle In each of the clusters, set the configBundleSecret and use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable override to configure the appropriate storage for that cluster. For example: Note The config.yaml file between both deployments must match. If making a change to one cluster, it must also be changed in the other. US cluster QuayRegistry example apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage Note Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates directly in the config bundle. For more information, see Configuring TLS and routes. European cluster QuayRegistry example apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage Note Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates directly in the config bundle. For more information, see Configuring TLS and routes.
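Assuming the US cluster QuayRegistry manifest above is saved as quayregistry-us.yaml (the file name is illustrative), one way to apply it on the US cluster and watch the Operator roll out the deployment is:
$ oc apply -f quayregistry-us.yaml
$ oc get pods -n quay-enterprise -w
Repeat the same step on the European cluster with the eustorage manifest.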
17.2.5. Removing a geo-replicated site from your Red Hat Quay on OpenShift Container Platform deployment By using the following procedure, Red Hat Quay administrators can remove sites in a geo-replicated setup. Prerequisites You are logged into OpenShift Container Platform. You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage. Each site has its own Organization, Repository, and image tags. Procedure Sync the blobs between all of your defined sites by running the following command: $ python -m util.backfillreplication Warning Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites. When running this command, replication jobs are created which are picked up by the replication worker. If there are blobs that need to be replicated, the script returns the UUIDs of the blobs that will be replicated. If you run this command multiple times and the output from the script is empty, it does not mean that the replication process is done; it means that there are no more blobs to be queued for replication. Use appropriate judgment before proceeding, because the time that replication takes depends on the number of blobs detected. Alternatively, you could use a third-party cloud tool, such as Microsoft Azure, to check the synchronization status. This step must be completed before proceeding. In your Red Hat Quay config.yaml file for site usstorage, remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site. Enter the following command to identify your Quay application pods: $ oc get pod -n <quay_namespace> Example output quay390usstorage-quay-app-5779ddc886-2drh2 quay390eustorage-quay-app-66969cd859-n2ssm Enter the following command to open an interactive shell session in the usstorage pod: $ oc rsh quay390usstorage-quay-app-5779ddc886-2drh2 Enter the following command to permanently remove the eustorage site: Important The following action cannot be undone. Use with caution. sh-4.4$ python -m util.removelocation eustorage Example output WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage 17.3. Mixed storage for geo-replication Red Hat Quay geo-replication supports the use of different and multiple replication targets, for example, using AWS S3 storage on public cloud and using Ceph storage on premise. This complicates the key requirement of granting access to all storage backends from all Red Hat Quay pods and cluster nodes. As a result, it is recommended that you use the following: A VPN to prevent visibility of the internal storage, or A token pair that only allows access to the specified bucket used by Red Hat Quay This results in the public cloud instance of Red Hat Quay having access to on-premise storage, but the network will be encrypted, protected, and will use ACLs, thereby meeting security requirements. If you cannot implement these security measures, it might be preferable to deploy two distinct Red Hat Quay registries and to use repository mirroring as an alternative to geo-replication.
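To illustrate the second recommendation, access for the public cloud site can be limited to a single bucket with an object storage policy similar to the following AWS IAM example. The bucket name and the exact set of actions are assumptions for illustration; adapt them to your storage provider and to the permissions your Red Hat Quay deployment actually requires:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::quay-georep-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::quay-georep-bucket/*"
    }
  ]
}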
[ "FEATURE_STORAGE_REPLICATION: true DISTRIBUTED_STORAGE_CONFIG: usstorage: - RHOCSStorage - access_key: <access_key> bucket_name: <example_bucket> hostname: my.noobaa.hostname is_secure: false port: \"443\" secret_key: <secret_key> storage_path: /datastorage/registry eustorage: - S3Storage - host: s3.amazon.com port: \"443\" s3_access_key: <access_key> s3_bucket: <example bucket> s3_secret_key: <secret_key> storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage", "DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage", "podman exec -it <container_id>", "scl enable python27 bash", "python -m util.backfillreplication", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -e QUAY_DISTRIBUTED_STORAGE_PREFERENCE=europestorage registry.redhat.io/quay/quay-rhel8:v3.13.3", "python -m util.backfillreplication", "podman ps", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 92c5321cde38 registry.redhat.io/rhel8/redis-5:1 run-redis 11 days ago Up 11 days ago 0.0.0.0:6379->6379/tcp redis 4e6d1ecd3811 registry.redhat.io/rhel8/postgresql-13:1-109 run-postgresql 33 seconds ago Up 34 seconds ago 0.0.0.0:5432->5432/tcp postgresql-quay d2eadac74fda registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.9.0-131 registry 4 seconds ago Up 4 seconds ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay", "podman exec -it postgresql-quay -- /bin/bash", "bash-4.4USD psql", "quay=# select * from imagestoragelocation;", "id | name ----+------------------- 1 | usstorage 2 | eustorage", "\\q", "bash-4.4USD python -m util.removelocation eustorage", "WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? 
[y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage", "psql -U <username> -h <hostname> -p <port> -d <database_name>", "CREATE DATABASE quay;", "\\c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm;", "sudo dnf install -y podman run -d --name redis -p 6379:6379 redis", "SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1 DB_CONNECTION_ARGS: autorollback: true threadlocals: true DB_URI: postgresql://postgres:[email protected]:5432/quay 2 BUILDLOGS_REDIS: host: 10.19.0.2 port: 6379 USER_EVENTS_REDIS: host: 10.19.0.2 port: 6379 DATABASE_SECRET_KEY: 0ce4f796-c295-415b-bf9d-b315114704b8 DISTRIBUTED_STORAGE_CONFIG: usstorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQABCDEFG bucket_name: georep-test-bucket-0 secret_key: AYWfEaxX/u84XRA2vUX5C987654321 storage_path: /quaygcp eustorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQWERTYUIOP bucket_name: georep-test-bucket-1 secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678 storage_path: /quaygcp DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage FEATURE_STORAGE_REPLICATION: true", "oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage", "python -m util.backfillreplication", "oc get pod -n <quay_namespace>", "quay390usstorage-quay-app-5779ddc886-2drh2 quay390eustorage-quay-app-66969cd859-n2ssm", "oc rsh quay390usstorage-quay-app-5779ddc886-2drh2", "sh-4.4USD python -m util.removelocation eustorage", "WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/manage_red_hat_quay/georepl-intro
Chapter 9. Using Bring-Your-Own-Host (BYOH) Windows instances as nodes
Chapter 9. Using Bring-Your-Own-Host (BYOH) Windows instances as nodes Bring-Your-Own-Host (BYOH) allows users to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users looking to mitigate major disruptions in the event that a Windows server goes offline. 9.1. Configuring a BYOH Windows instance Creating a BYOH Windows instance requires creating a config map in the Windows Machine Config Operator (WMCO) namespace. Prerequisites Any Windows instances that are to be attached to the cluster as a node must fulfill the following requirements: The instance must be on the same network as the Linux worker nodes in the cluster. Port 22 must be open and running an SSH server. The default shell for the SSH server must be the Windows Command shell, or cmd.exe. Port 10250 must be open for log collection. An administrator user is present with the private key used in the secret set as an authorized SSH key. If you are creating a BYOH Windows instance for an installer-provisioned infrastructure (IPI) AWS cluster, you must add a tag to the AWS instance that matches the spec.template.spec.value.tag value in the compute machine set for your worker nodes. For example, kubernetes.io/cluster/<cluster_id>: owned or kubernetes.io/cluster/<cluster_id>: shared. If you are creating a BYOH Windows instance on vSphere, communication with the internal API server must be enabled. The hostname of the instance must follow the RFC 1123 DNS label requirements, which include the following standards: Contains only lowercase alphanumeric characters or '-'. Starts with an alphanumeric character. Ends with an alphanumeric character. Note Windows instances deployed by the WMCO are configured with the containerd container runtime. Because the WMCO installs and manages the runtime, it is recommended that you do not manually install containerd on nodes. Procedure Create a ConfigMap named windows-instances in the WMCO namespace that describes the Windows instances to be added. Note Format each entry in the config map's data section by using the address as the key while formatting the value as username=<username>. Example config map kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: 10.1.42.1: |- 1 username=Administrator 2 instance.example.com: |- username=core 1 The address that the WMCO uses to reach the instance over SSH, either a DNS name or an IPv4 address. A DNS PTR record must exist for this address. It is recommended that you use a DNS name with your BYOH instance if your organization uses DHCP to assign IP addresses. If not, you need to update the windows-instances ConfigMap whenever the instance is assigned a new IP address. 2 The name of the administrator user created in the prerequisites.
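Assuming the example config map above is saved to a file named windows-instances.yaml (the file name is illustrative), one way to create it and then watch for the instances to be configured and join the cluster as nodes is:
$ oc apply -f windows-instances.yaml
$ oc get nodes -l kubernetes.io/os=windows -w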
9.2. Removing BYOH Windows instances You can remove BYOH instances attached to the cluster by deleting the instance's entry in the config map. Deleting an instance reverts that instance back to its state prior to it being added to the cluster. Any logs and container runtime artifacts are not added to these instances. For an instance to be cleanly removed, it must be accessible with the current private key provided to WMCO. For example, to remove the 10.1.42.1 instance from the example above, the config map would be changed to the following: kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: instance.example.com: |- username=core Deleting windows-instances is viewed as a request to deconstruct all Windows instances added as nodes.
[ "kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: 10.1.42.1: |- 1 username=Administrator 2 instance.example.com: |- username=core", "kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: instance.example.com: |- username=core" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/windows_container_support_for_openshift/byoh-windows-instance
19.2. Email Program Classifications
19.2. Email Program Classifications In general, all email applications fall into at least one of three classifications. Each classification plays a specific role in the process of moving and managing email messages. While most users are only aware of the specific email program they use to receive and send messages, each one is important for ensuring that email arrives at the correct destination. 19.2.1. Mail Transport Agent A Mail Transport Agent ( MTA ) transports email messages between hosts using SMTP . A message may involve several MTAs as it moves to its intended destination. While the delivery of messages between machines may seem rather straightforward, the entire process of deciding if a particular MTA can or should accept a message for delivery is quite complicated. In addition, due to problems from spam, use of a particular MTA is usually restricted by the MTA's configuration or the access configuration for the network on which the MTA resides. Many modern email client programs can act as an MTA when sending email. However, this action should not be confused with the role of a true MTA. The sole reason email client programs are capable of sending email like an MTA is because the host running the application does not have its own MTA. This is particularly true for email client programs on non-UNIX-based operating systems. However, these client programs only send outbound messages to an MTA they are authorized to use and do not directly deliver the message to the intended recipient's email server. Since Red Hat Enterprise Linux offers two MTAs, Postfix and Sendmail , email client programs are often not required to act as an MTA. Red Hat Enterprise Linux also includes a special purpose MTA called Fetchmail . For more information on Postfix, Sendmail, and Fetchmail, see Section 19.3, "Mail Transport Agents" .
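To make the MTA-to-MTA exchange concrete, the following is a minimal, illustrative SMTP conversation as it might appear when testing a mail server by hand with telnet; the host names and addresses are made up, and the exact response text varies by server:
$ telnet mail.example.com 25
220 mail.example.com ESMTP
HELO client.example.org
250 mail.example.com
MAIL FROM:<sender@example.org>
250 2.1.0 Ok
RCPT TO:<recipient@example.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: Test message

Hello from client.example.org.
.
250 2.0.0 Ok: queued
QUIT
221 2.0.0 Bye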
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-email-types
Chapter 12. Booting the Installation on IBM Power Systems
Chapter 12. Booting the Installation on IBM Power Systems To boot an IBM Power Systems server from a DVD, you must specify the install boot device in the System Management Services (SMS) menu. To enter the System Management Services GUI, press the 1 key during the boot process when you hear the chime sound. This brings up a graphical interface similar to the one described in this section. On a text console, press 1 when the self test is displaying the banner along with the tested components: Figure 12.1. The SMS Console Once in the SMS menu, select the option for Select Boot Options. In that menu, specify Select Install or Boot a Device. There, select CD/DVD, and then the bus type (in most cases SCSI). If you are uncertain, you can select to view all devices. This scans all available buses for boot devices, including network adapters and hard drives. Finally, select the device containing the installation DVD. The boot menu will now load. Important Because IBM Power Systems servers primarily use text consoles, Anaconda will not automatically start a graphical installation. However, the graphical installation program offers more features and customization and is recommended if your system has a graphical display. To start a graphical installation, pass the inst.vnc boot option (see Enabling Remote Access). 12.1. The Boot Menu Once your system has completed loading the boot media, a boot menu is displayed using GRUB2 (GRand Unified Bootloader, version 2). The boot menu provides several options in addition to launching the installation program. If no key is pressed within 60 seconds, the default boot option (the one highlighted in white) will be run. To choose the default, either wait for the timer to run out or press Enter. Figure 12.2. The Boot Screen To select a different option than the default, use the arrow keys on your keyboard, and press Enter when the correct option is highlighted. To customize the boot options for a particular menu entry, press the e key and add custom boot options to the command line. When ready, press Ctrl + X to boot the modified option. See Chapter 23, Boot Options for more information about additional boot options. The boot menu options are: Install Red Hat Enterprise Linux 7.0 Choose this option to install Red Hat Enterprise Linux onto your computer system using the graphical installation program. Test this media & install Red Hat Enterprise Linux 7.0 This option is the default. Prior to starting the installation program, a utility is launched to check the integrity of the installation media. Troubleshooting > This item is a separate menu containing options that help resolve various installation issues. When highlighted, press Enter to display its contents. Figure 12.3. The Troubleshooting Menu Install Red Hat Enterprise Linux 7.0 in basic graphics mode This option allows you to install Red Hat Enterprise Linux in graphical mode even if the installation program is unable to load the correct driver for your video card. If your screen appears distorted or goes blank when using the Install Red Hat Enterprise Linux 7.0 option, restart your computer and try this option instead. Rescue a Red Hat Enterprise Linux system Choose this option to repair a problem with your installed Red Hat Enterprise Linux system that prevents you from booting normally. The rescue environment contains utility programs that allow you to fix a wide variety of these problems. Run a memory test This option runs a memory test on your system.
For more information, see Section 23.2.1, "Loading the Memory (RAM) Testing Mode" . Boot from local drive This option boots the system from the first installed disk. If you booted this disc accidentally, use this option to boot from the hard disk immediately without starting the installation program.
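For example, to start the graphical installation over VNC as described at the beginning of this chapter, you might highlight the Install Red Hat Enterprise Linux 7.0 entry, press the e key, and append options such as the following to the line that loads the kernel (the password is a placeholder):
inst.vnc inst.vncpassword=<password>
Then press Ctrl + X to boot the modified entry and connect from a VNC viewer to the address that Anaconda displays on the console.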
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-booting-installer-ppc
Installing on IBM Power
Installing on IBM Power OpenShift Container Platform 4.16 Installing OpenShift Container Platform on IBM Power Red Hat OpenShift Documentation Team
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" 
\"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 
worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m 
operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 
16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp 
vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now 
safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch 
configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", 
"controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/installing_on_ibm_power/index
Chapter 4. Ceph authentication configuration
Chapter 4. Ceph authentication configuration As a storage administrator, authenticating users and services is important to the security of the Red Hat Ceph Storage cluster. Red Hat Ceph Storage includes the Cephx protocol, as the default, for cryptographic authentication, and the tools to manage authentication in the storage cluster. As part of the Ceph authentication configuration, consider key rotation for your Ceph and gateway daemons for increased security. Key rotation is done through the command line, with cephadm . See Enabling key rotation for more details. Prerequisites Installation of the Red Hat Ceph Storage software. 4.1. Cephx authentication The cephx protocol is enabled by default. Cryptographic authentication has some computational costs, though they are generally quite low. If the network environment connecting clients and hosts is considered safe and you cannot afford the computational cost of authentication, you can disable it. When deploying a Ceph storage cluster, the deployment tool will create the client.admin user and keyring. Important Red Hat recommends using authentication. Note If you disable authentication, you are at risk of a man-in-the-middle attack altering client and server messages, which could lead to significant security issues. Enabling and disabling Cephx Enabling Cephx requires that you have deployed keys for the Ceph Monitors and OSDs. When toggling Cephx authentication on or off, you do not have to repeat the deployment procedures. 4.2. Enabling Cephx When cephx is enabled, Ceph looks for the keyring in the default search path, which includes /etc/ceph/$cluster.$name.keyring . You can override this location by adding a keyring option in the [global] section of the Ceph configuration file, but this is not recommended. Execute the following procedure to enable cephx on a cluster with authentication disabled. If you or your deployment utility have already generated the keys, you can skip the steps related to generating keys. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Create a client.admin key, and save a copy of the key for your client host: Warning This will erase the contents of any existing /etc/ceph/client.admin.keyring file. Do not perform this step if a deployment tool has already done it for you. Create a keyring for the monitor cluster and generate a monitor secret key: Copy the monitor keyring into a ceph.mon.keyring file in every monitor's mon data directory. For example, to copy it to mon.a in cluster ceph , use the following: Generate a secret key for every OSD, where ID is the OSD number: By default, the cephx authentication protocol is enabled. Note If the cephx authentication protocol was previously disabled by setting the authentication options to none , removing the following lines from the [global] section of the Ceph configuration file ( /etc/ceph/ceph.conf ) re-enables the cephx authentication protocol: Start or restart the Ceph storage cluster. Important Enabling cephx requires downtime because the cluster needs to be completely restarted, or it needs to be shut down and then started while client I/O is disabled. These flags need to be set before restarting or shutting down the storage cluster: Once cephx is enabled and all PGs are active and clean, unset the flags: 4.3. 
Disabling Cephx The following procedure describes how to disable Cephx. If your cluster environment is relatively safe, you can avoid the computational expense of running authentication. Important Red Hat recommends enabling authentication. However, it may be easier during setup or troubleshooting to temporarily disable authentication. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Disable cephx authentication by setting the following options in the [global] section of the Ceph configuration file: Example Start or restart the Ceph storage cluster. 4.4. Cephx user keyrings When you run Ceph with authentication enabled, the ceph administrative commands and Ceph clients require authentication keys to access the Ceph storage cluster. The most common way to provide these keys to the ceph administrative commands and clients is to include a Ceph keyring under the /etc/ceph/ directory. The file name is usually ceph.client.admin.keyring or $cluster.client.admin.keyring . If you include the keyring under the /etc/ceph/ directory, you do not need to specify a keyring entry in the Ceph configuration file. Important Red Hat recommends copying the Red Hat Ceph Storage cluster keyring file to nodes where you will run administrative commands, because it contains the client.admin key. To do so, execute the following command: Replace USER with the user name used on the host with the client.admin key and HOSTNAME with the host name of that host. Note Ensure the ceph.keyring file has appropriate permissions set on the client machine. You can specify the key itself in the Ceph configuration file using the key setting, which is not recommended, or a path to a key file using the keyfile setting. 4.5. Cephx daemon keyrings Administrative users or deployment tools might generate daemon keyrings in the same way as generating user keyrings. By default, Ceph stores daemon keyrings inside their data directories. Each daemon type has a default keyring location and a set of capabilities that are necessary for the daemon to function. Note The monitor keyring contains a key but no capabilities, and is not part of the Ceph storage cluster auth database. The daemon data directory locations default to directories of the form: Example You can override these locations, but it is not recommended. 4.6. Cephx message signatures Ceph provides fine-grained control so you can enable or disable signatures for service messages between the client and Ceph. You can enable or disable signatures for messages between Ceph daemons. Important Red Hat recommends authenticating all ongoing messages between the entities by using the session key set up during the initial authentication. Note Ceph kernel modules do not support signatures yet.
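For reference, a minimal sketch of the [global] settings that explicitly re-enable the protocol, the inverse of the none values used in the disable procedure, might look like the following. Treat it as an illustration rather than a complete /etc/ceph/ceph.conf file:

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Because cephx is the default, simply deleting the none entries has the same effect; the explicit form is shown here only to make the intended end state visible. Restart the Ceph storage cluster after changing these options.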
[ "ceph auth get-or-create client.admin mon 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring", "ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'", "cp /tmp/ceph.mon.keyring /var/lib/ceph/mon/ceph-a/keyring", "ceph auth get-or-create osd. ID mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph- ID /keyring", "auth_cluster_required = none auth_service_required = none auth_client_required = none", "ceph osd set noout ceph osd set norecover ceph osd set norebalance ceph osd set nobackfill ceph osd set nodown ceph osd set pause", "ceph osd unset noout ceph osd unset norecover ceph osd unset norebalance ceph osd unset nobackfill ceph osd unset nodown ceph osd unset pause", "auth_cluster_required = none auth_service_required = none auth_client_required = none", "scp USER @ HOSTNAME :/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring", "/var/lib/ceph/USDtype/ CLUSTER - ID", "/var/lib/ceph/osd/ceph-12" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/configuration_guide/ceph-authentication-configuration
Chapter 9. Troubleshooting and deleting remaining resources during Uninstall
Chapter 9. Troubleshooting and deleting remaining resources during Uninstall Occasionally, some of the custom resources managed by an operator may remain in the "Terminating" status, waiting on the finalizer to complete, even though you have performed all the required cleanup tasks. In such an event, you need to force the removal of these resources. If you do not do so, the resources remain in the Terminating state even after you have performed all the uninstall steps. Check whether the openshift-storage namespace is stuck in the Terminating state upon deletion. Output: Check for the NamespaceFinalizersRemaining and NamespaceContentRemaining messages in the STATUS section of the command output and perform the next step for each of the listed resources. Example output: Delete all the remaining resources listed in the previous step. For each of the resources to be deleted, do the following: Get the object kind of the resource that needs to be removed. See the message in the above output. Example: message: Some content in the namespace has finalizers remaining: cephobjectstoreuser.ceph.rook.io Here, cephobjectstoreuser.ceph.rook.io is the object kind. Get the object name that corresponds to the object kind. Example: Example output: Patch the resource. Example: Output: Verify that the openshift-storage project is deleted. Output: If the issue persists, reach out to Red Hat Support .
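If several instances of the same stuck kind remain, the per-object patch can be scripted. The loop below is a minimal sketch only: it assumes the stuck kind is cephobjectstoreusers.ceph.rook.io and the project is openshift-storage, so substitute the object kind and namespace reported in your own namespace status:

for name in $(oc get cephobjectstoreusers.ceph.rook.io -n openshift-storage -o name); do oc patch -n openshift-storage "$name" --type=merge -p '{"metadata":{"finalizers":null}}'; done

The -o name output already includes the kind prefix, so each entry can be passed directly to oc patch.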
[ "oc get project -n <namespace>", "NAME DISPLAY NAME STATUS openshift-storage Terminating", "oc get project openshift-storage -o yaml", "status: conditions: - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All resources successfully discovered reason: ResourcesDiscovered status: \"False\" type: NamespaceDeletionDiscoveryFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All legacy kube types successfully parsed reason: ParsedGroupVersions status: \"False\" type: NamespaceDeletionGroupVersionParsingFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All content successfully deleted, may be waiting on finalization reason: ContentDeleted status: \"False\" type: NamespaceDeletionContentFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: 'Some resources are remaining: cephobjectstoreusers.ceph.rook.io has 1 resource instances' reason: SomeResourcesRemain status: \"True\" type: NamespaceContentRemaining - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: 'Some content in the namespace has finalizers remaining: cephobjectstoreuser.ceph.rook.io in 1 resource instances' reason: SomeFinalizersRemain status: \"True\" type: NamespaceFinalizersRemaining", "oc get <Object-kind> -n <project-name>", "oc get cephobjectstoreusers.ceph.rook.io -n openshift-storage", "NAME AGE noobaa-ceph-objectstore-user 26h", "oc patch -n <project-name> <object-kind>/<object-name> --type=merge -p '{\"metadata\": {\"finalizers\":null}}'", "oc patch -n openshift-storage cephobjectstoreusers.ceph.rook.io/noobaa-ceph-objectstore-user --type=merge -p '{\"metadata\": {\"finalizers\":null}}'", "cephobjectstoreuser.ceph.rook.io/noobaa-ceph-objectstore-user patched", "oc get project openshift-storage", "Error from server (NotFound): namespaces \"openshift-storage\" not found" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/troubleshooting_openshift_data_foundation/troubleshooting-and-deleting-remaining-resources-during-uninstall_rhodf
Service Mesh
Service Mesh OpenShift Container Platform 4.16 Service Mesh installation, usage, and release notes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/service_mesh/index
Chapter 2. Installing a cluster on Nutanix
Chapter 2. Installing a cluster on Nutanix In OpenShift Container Platform version 4.14, you can choose one of the following options to install a cluster on your Nutanix instance: Using installer-provisioned infrastructure : Use the procedures in the following sections to use installer-provisioned infrastructure. Installer-provisioned infrastructure is ideal for installing in connected or disconnected network environments. The installer-provisioned infrastructure includes an installation program that provisions the underlying infrastructure for the cluster. Using the Assisted Installer : The Assisted Installer is hosted at console.redhat.com . The Assisted Installer cannot be used in disconnected environments. The Assisted Installer does not provision the underlying infrastructure for the cluster, so you must provision the infrastructure before running the Assisted Installer. Installing with the Assisted Installer also provides integration with Nutanix, enabling autoscaling. See Installing an on-premise cluster using the Assisted Installer for additional details. Using user-provisioned infrastructure : Complete the relevant steps outlined in the Installing a cluster on any platform documentation. 2.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment is using the default self-signed SSL certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. 
Internet access for Prism Central Prism Central requires internet access to obtain the Red Hat Enterprise Linux CoreOS (RHCOS) image that is required to install the cluster. The RHCOS image for Nutanix is available at rhcos.mirror.openshift.com . 2.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.5. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.6. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates. Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 2.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. 
This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Optional: Update one or more of the default configuration parameters in the install.config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on Nutanix". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Nutanix 2.7.1. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIPs: - 10.40.142.7 12 defaultMachinePlatform: bootType: Legacy categories: 13 - key: <category_key_name> value: <category_value> project: 14 type: name name: <project_name> ingressVIPs: - 10.40.142.8 15 prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 1 10 12 15 16 17 18 19 21 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 5 9 13 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 14 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. 
If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server and pointing the installation program to the image. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.7.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. 
Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.9. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . 
Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: USD openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: USD cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. USD ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 2.10. Adding config map and secret resources required for Nutanix CCM Installations on Nutanix require additional ConfigMap and Secret resources to integrate with the Nutanix Cloud Controller Manager (CCM). Prerequisites You have created a manifests directory within your installation directory. Procedure Navigate to the manifests directory: USD cd <path_to_installation_directory>/manifests Create the cloud-conf ConfigMap file with the name openshift-cloud-controller-manager-cloud-config.yaml and add the following information: apiVersion: v1 kind: ConfigMap metadata: name: cloud-conf namespace: openshift-cloud-controller-manager data: cloud.conf: "{ \"prismCentral\": { \"address\": \"<prism_central_FQDN/IP>\", 1 \"port\": 9440, \"credentialRef\": { \"kind\": \"Secret\", \"name\": \"nutanix-credentials\", \"namespace\": \"openshift-cloud-controller-manager\" } }, \"topologyDiscovery\": { \"type\": \"Prism\", \"topologyCategories\": null }, \"enableCustomLabeling\": true }" 1 Specify the Prism Central FQDN/IP. Verify that the file cluster-infrastructure-02-config.yml exists and has the following information: spec: cloudConfig: key: config name: cloud-provider-config 2.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 
2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.12. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 2.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 2.14. Additional resources About remote health monitoring 2.15. steps Opt out of remote health reporting Customize your cluster
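As an optional pre-flight check that is not part of the official procedure, you can confirm from the installation host that Prism Central answers on port 9440 and inspect the certificate it presents before adding it to the system trust. The host name below is a placeholder, so replace it with your own Prism Central address:

openssl s_client -connect prism-central.example.com:9440 -showcerts </dev/null

If the connection succeeds and the issuing CA matches the certificates you copied into /etc/pki/ca-trust/source/anchors, the installation program should be able to reach the Prism Central API without certificate errors.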
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIPs: - 10.40.142.7 12 defaultMachinePlatform: bootType: Legacy categories: 13 - key: <category_key_name> value: <category_value> project: 14 type: name name: <project_name> ingressVIPs: - 10.40.142.8 15 prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api", "ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1", "openshift-install create manifests --dir <installation_directory> 1", "cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests", "ls ./<installation_directory>/manifests", "cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml", "cd <path_to_installation_directory>/manifests", "apiVersion: v1 kind: ConfigMap metadata: name: cloud-conf namespace: openshift-cloud-controller-manager data: cloud.conf: \"{ \\\"prismCentral\\\": { \\\"address\\\": \\\"<prism_central_FQDN/IP>\\\", 1 \\\"port\\\": 9440, \\\"credentialRef\\\": { \\\"kind\\\": \\\"Secret\\\", \\\"name\\\": \\\"nutanix-credentials\\\", \\\"namespace\\\": \\\"openshift-cloud-controller-manager\\\" } }, \\\"topologyDiscovery\\\": { \\\"type\\\": \\\"Prism\\\", \\\"topologyCategories\\\": null }, \\\"enableCustomLabeling\\\": true }\"", "spec: cloudConfig: key: config name: cloud-provider-config", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_nutanix/installing-nutanix-installer-provisioned
Chapter 65. Bean Validation
Chapter 65. Bean Validation Abstract Bean validation is a Java standard that enables you to define runtime constraints by adding Java annotations to service classes or interfaces. Apache CXF uses interceptors to integrate this feature with Web service method invocations. 65.1. Introduction Overview Bean Validation 1.1 ( JSR-349 )-which is an evolution of the original Bean Validation 1.0 (JSR-303) standard-enables you to declare constraints that can be checked at run time, using Java annotations. You can use annotations to define constraints on the following parts of the Java code: Fields in a bean class. Method and constructor parameters. Method return values. Example of annotated class The following example shows a Java class annotated with some standard bean validation constraints: Bean validation or schema validation? In some respects, bean validation and schema validation are quite similar. Configuring an endpoint with an XML schema is a well established way to validate messages at run time on a Web services endpoint. An XML schema can check many of the same constraints as bean validation on incoming and outgoing messages. Nevertheless, bean validation can sometimes be a useful alternative for one or more of the following reasons: Bean validation enables you to define constraints independently of the XML schema (which is useful, for example, in the case of code-first service development). If your current XML schema is too lax, you can use bean validation to define stricter constraints. Bean validation lets you define custom constraints, which might be impossible to define using XML schema language. Dependencies The Bean Validation 1.1 (JSR-349) standard defines just the API, not the implementation. Dependencies must therefore be provided in two parts: Core dependencies -provide the bean validation 1.1 API, Java unified expression language API and implementation. Hibernate Validator dependencies -provides the implementation of bean validation 1.1. Core dependencies To use bean validation, you must add the following core dependencies to your project's Maven pom.xml file: Note The javax.el/javax.el-api and org.glassfish/javax.el dependencies provide the API and implementation of Java's unified expression language. This expression language is used internally by bean validation, but is not important at the application programming level. Hibernate Validator dependencies To use the Hibernate Validator implementation of bean validation, you must add the following additional dependencies to your project's Maven pom.xml file: Resolving the validation provider in an OSGi environment The default mechanism for resolving a validation provider involves scanning the classpath to find the provider resource. In the case of an OSGi (Apache Karaf) environment, however, this mechanism does not work, because the validation provider (for example, the Hibernate validator) is packaged in a separate bundle and is thus not automatically available in your application classpath. In the context of OSGi, the Hibernate validator needs to be wired to your application bundle, and OSGi needs a bit of help to do this successfully. Configuring the validation provider explicitly in OSGi In the context of OSGi, you need to configure the validation provider explicitly, instead of relying on automatic discovery. 
For example, if you are using the common validation feature (see the section called "Bean validation feature" ) to enable bean validation, you must configure it with a validation provider, as follows: Where the HibernateValidationProviderResolver is a custom class that wraps the Hibernate validation provider. Example HibernateValidationProviderResolver class The following code example shows how to define a custom HibernateValidationProviderResolver , which resolves the Hibernate validator: When you build the preceding class in a Maven build system, which is configured to use the Maven bundle plug-in, your application will be wired to the Hibernate validator bundle at deploy time (assuming you have already deployed the Hibernate validator bundle to the OSGi container). 65.2. Developing Services with Bean Validation 65.2.1. Annotating a Service Bean Overview The first step in developing a service with bean validation is to apply the relevant validation annotations to the Java classes or interfaces that represent your services. The validation annotations enable you to apply constraints to method parameters, return values, and class fields, which are then checked at run time, every time the service is invoked. Validating simple input parameters To validate the parameters of a service method-where the parameters are simple Java types-you can apply any of the constraint annotations from the bean validation API ( javax.validation.constraints package). For example, the following code example tests both parameters for nullness ( @NotNull annotation), whether the id string matches the \\d+ regular expression ( @Pattern annotation), and whether the length of the name string lies in the range 1 to 50: Validating complex input parameters To validate complex input parameters (object instances), apply the @Valid annotation to the parameter, as shown in the following example: The @Valid annotation does not specify any constraints by itself. When you annotate the Book parameter with @Valid , you are effectively telling the validation engine to look inside the definition of the Book class (recursively) to look for validation constraints. In this example, the Book class is defined with validation constraints on its id and name fields, as follows: Validating return values (non-Response) To apply validation to regular method return values (non-Response), add the annotations in front of the method signature. For example, to test the return value for nullness ( @NotNull annotation) and to test validation constraints recursively ( @Valid annotation), annotate the getBook method as follows: Validating return values (Response) To apply validation to a method that returns a javax.ws.rs.core.Response object, you can use the same annotations as in the non-Response case. For example: 65.2.2. Standard Annotations Bean validation constraints Table 65.1, "Standard Annotations for Bean Validation" shows the standard annotations defined in the Bean Validation specification, which can be used to define constraints on fields and on method return values and parameters (none of the standard annotations can be applied at the class level). Table 65.1. Standard Annotations for Bean Validation Annotation Applicable to Description @AssertFalse Boolean , boolean Checks that the annotated element is false . @AssertTrue Boolean , boolean Checks that the annotated element is true . 
@DecimalMax(value=, inclusive=) BigDecimal , BigInteger , CharSequence , byte , short , int , long and primitive type wrappers When inclusive=false , checks that the annotated value is less than the specified maximum. Otherwise, checks that the value is less than or equal to the specified maximum. The value parameter specifies the maximum in BigDecimal string format. @DecimalMin(value=, inclusive=) BigDecimal , BigInteger , CharSequence , byte , short , int , long and primitive type wrappers When inclusive=false , checks that the annotated value is greater than the specified minimum. Otherwise, checks that the value is greater than or equal to the specified minimum. The value parameter specifies the minimum in BigDecimal string format. @Digits(integer=, fraction=) BigDecimal , BigInteger , CharSequence , byte , short , int , long and primitive type wrappers Checks whether the annotated value is a number having up to integer digits and fraction fractional digits. @Future java.util.Date , java.util.Calendar Checks whether the annotated date is in the future. @Max(value=) BigDecimal , BigInteger , CharSequence , byte , short , int , long and primitive type wrappers Checks whether the annotated value is less than or equal to the specified maximum. @Min(value=) BigDecimal , BigInteger , CharSequence , byte , short , int , long and primitive type wrappers Checks whether the annotated value is greater than or equal to the specified minimum. @NotNull Any type Checks that the annotated value is not null . @Null Any type Checks that the annotated value is null . @Past java.util.Date , java.util.Calendar Checks whether the annotated date is in the past. @Pattern(regex=, flag=) CharSequence Checks whether the annotated string matches the regular expression regex considering the given flag match. @Size(min=, max=) CharSequence , Collection , Map and arrays Checks whether the size of the annotated collection, map, or array lies between min and max (inclusive). @Valid Any non-primitive type Performs validation recursively on the annotated object. If the object is a collection or an array, the elements are validated recursively. If the object is a map, the value elements are validated recursively. 65.2.3. Custom Annotations Defining custom constraints in Hibernate It is possible to define your own custom constraints annotations with the bean validation API. For details of how to do this in the Hibernate validator implementation, see the Creating custom constraints chapter of the Hibernate Validator Reference Guide . 65.3. Configuring Bean Validation 65.3.1. JAX-WS Configuration Overview This section describes how to enable bean validation on a JAX-WS service endpoint, which is defined either in Blueprint XML or in Spring XML. The interceptors used to perform bean validation are common to both JAX-WS endpoints and JAX-RS 1.1 endpoints (JAX-RS 2.0 endpoints use different interceptor classes, however). Namespaces In the XML examples shown in this section, you must remember to map the jaxws namespace prefix to the appropriate namespace, either for Blueprint or Spring, as shown in the following table: XML Language Namespace Blueprint http://cxf.apache.org/blueprint/jaxws Spring http://cxf.apache.org/jaxws Bean validation feature The simplest way to enable bean validation on a JAX-WS endpoint is to add the bean validation feature to the endpoint. 
The bean validation feature is implemented by the following class: org.apache.cxf.validation.BeanValidationFeature By adding an instance of this feature class to the JAX-WS endpoint (either through the Java API or through the jaxws:features child element of jaxws:endpoint in XML), you can enable bean validation on the endpoint. This feature installs two interceptors: an In interceptor that validates incoming message data; and an Out interceptor that validates return values (where the interceptors are created with default configuration parameters). Sample JAX-WS configuration with bean validation feature The following XML example shows how to enable bean validation functionality in a JAX-WS endpoint, by adding the commonValidationFeature bean to the endpoint as a JAX-WS feature: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class" . It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Note Remember to map the jaxws prefix to the appropriate XML namespace for either Blueprint or Spring, depending on the context. Common bean validation 1.1 interceptors If you want to have more fine-grained control over the configuration of the bean validation, you can install the interceptors individually, instead of using the bean validation feature. In place of the bean validation feature, you can configure one or both of the following interceptors: org.apache.cxf.validation.BeanValidationInInterceptor When installed in a JAX-WS (or JAX-RS 1.1) endpoint, validates resource method parameters against validation constraints. If validation fails, raises the javax.validation.ConstraintViolationException exception. To install this interceptor, add it to the endpoint through the jaxws:inInterceptors child element in XML (or the jaxrs:inInterceptors child element in XML). org.apache.cxf.validation.BeanValidationOutInterceptor When installed in a JAX-WS (or JAX-RS 1.1) endpoint, validates response values against validation constraints. If validation fails, raises the javax.validation.ConstraintViolationException exception. To install this interceptor, add it to the endpoint through the jaxws:outInterceptors child element in XML (or the jaxrs:outInterceptors child element in XML). Sample JAX-WS configuration with bean validation interceptors The following XML example shows how to enable bean validation functionality in a JAX-WS endpoint, by explicitly adding the relevant In interceptor bean and Out interceptor bean to the endpoint: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class" . It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Configuring a BeanValidationProvider The org.apache.cxf.validation.BeanValidationProvider is a simple wrapper class that wraps the bean validation implementation ( validation provider ). By overriding the default BeanValidationProvider class, you can customize the implementation of bean validation. The BeanValidationProvider bean enables you to override one or more of the following provider classes: javax.validation.ParameterNameProvider Provides names for method and constructor parameters. Note that this class is needed, because the Java reflection API does not give you access to the names of method parameters or constructor parameters. 
javax.validation.spi.ValidationProvider<T> Provides an implementation of bean validation for the specified type, T . By implementing your own ValidationProvider class, you can define custom validation rules for your own classes. This mechanism effectively enables you to extend the bean validation framework. javax.validation.ValidationProviderResolver Implements a mechanism for discovering ValidationProvider classes and returns a list of the discovered classes. The default resolver looks for a META-INF/services/javax.validation.spi.ValidationProvider file on the classpath, which should contain a list of ValidationProvider classes. javax.validation.ValidatorFactory A factory that returns javax.validation.Validator instances. org.apache.cxf.validation.ValidationConfiguration A CXF wrapper class that enables you to override more classes from the validation provider layer. To customize the BeanValidationProvider , pass a custom BeanValidationProvider instance to the constructor of the validation In interceptor and to the constructor of the validation Out interceptor. For example: 65.3.2. JAX-RS Configuration Overview This section describes how to enable bean validation on a JAX-RS service endpoint, which is defined either in Blueprint XML or in Spring XML. The interceptors used to perform bean validation are common to both JAX-WS endpoints and JAX-RS 1.1 endpoints (JAX-RS 2.0 endpoints use different interceptor classes, however). Namespaces In the XML examples shown in this section, you must remember to map the jaxrs namespace prefix to the appropriate namespace, either for Blueprint or Spring, as shown in the following table: XML Language Namespace Blueprint http://cxf.apache.org/blueprint/jaxrs Spring http://cxf.apache.org/jaxrs Bean validation feature The simplest way to enable bean validation on a JAX-RS endpoint is to add the bean validation feature to the endpoint. The bean validation feature is implemented by the following class: org.apache.cxf.validation.BeanValidationFeature By adding an instance of this feature class to the JAX-RS endpoint (either through the Java API or through the jaxrs:features child element of jaxrs:server in XML), you can enable bean validation on the endpoint. This feature installs two interceptors: an In interceptor that validates incoming message data; and an Out interceptor that validates return values (where the interceptors are created with default configuration parameters). Validation exception mapper A JAX-RS endpoint also requires you to configure a validation exception mapper , which is responsible for mapping validation exceptions to HTTP error responses. The following class implements validation exception mapping for JAX-RS: org.apache.cxf.jaxrs.validation.ValidationExceptionMapper Implements validation exception mapping in accordance with the JAX-RS 2.0 specification: any input parameter validation violations are mapped to HTTP status code 400 Bad Request ; and any return value validation violation (or internal validation violation) is mapped to HTTP status code 500 Internal Server Error . Sample JAX-RS configuration The following XML example shows how to enable bean validation functionality in a JAX-RS endpoint, by adding the commonValidationFeature bean as a JAX-RS feature and by adding the exceptionMapper bean as a JAX-RS provider: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class" . 
It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Note Remember to map the jaxrs prefix to the appropriate XML namespace for either Blueprint or Spring, depending on the context. Common bean validation 1.1 interceptors Instead of using the bean validation feature, you can optionally install bean validation interceptors to get more fine-grained control over the validation implementation. JAX-RS uses the same interceptors as JAX-WS for this purpose-see the section called "Common bean validation 1.1 interceptors" Sample JAX-RS configuration with bean validation interceptors The following XML example shows how to enable bean validation functionality in a JAX-RS endpoint, by explicitly adding the relevant In interceptor bean and Out interceptor bean to the server endpoint: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class" . It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Configuring a BeanValidationProvider You can inject a custom BeanValidationProvider instance into the validation interceptors, as described in the section called "Configuring a BeanValidationProvider" . 65.3.3. JAX-RS 2.0 Configuration Overview Unlike JAX-RS 1.1 (which shares common validation interceptors with JAX-WS), the JAX-RS 2.0 configuration relies on dedicated validation interceptor classes that are specific to JAX-RS 2.0. Bean validation feature For JAX-RS 2.0, there is a dedicated bean validation feature, which is implemented by the following class: org.apache.cxf.validation.JAXRSBeanValidationFeature By adding an instance of this feature class to the JAX-RS endpoint (either through the Java API or through the jaxrs:features child element of jaxrs:server in XML), you can enable bean validation on a JAX-RS 2.0 server endpoint. This feature installs two interceptors: an In interceptor that validates incoming message data; and an Out interceptor that validates return values (where the interceptors are created with default configuration parameters). Validation exception mapper JAX-RS 2.0 uses the same validation exception mapper class as JAX-RS 1.x: org.apache.cxf.jaxrs.validation.ValidationExceptionMapper Implements validation exception mapping in accordance with the JAX-RS 2.0 specification: any input parameter validation violations are mapped to HTTP status code 400 Bad Request ; and any return value validation violation (or internal validation violation) is mapped to HTTP status code 500 Internal Server Error . Bean validation invoker If you configure the JAX-RS service with a non-default lifecycle policy (for example, using Spring lifecycle management), you should also register a org.apache.cxf.jaxrs.validation.JAXRSBeanValidationInvoker instance-using the jaxrs:invoker element in the endpoint configuration-with the service endpoint, to ensure that bean validation is invoked correctly. For more details about JAX-RS service lifecycle management, see the section called "Lifecycle management in Spring XML" . 
Sample JAX-RS 2.0 configuration with bean validation feature The following XML example shows how to enable bean validation functionality in a JAX-RS 2.0 endpoint, by adding the jaxrsValidationFeature bean as a JAX-RS feature and by adding the exceptionMapper bean as a JAX-RS provider: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class" . It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Note Remember to map the jaxrs prefix to the appropriate XML namespace for either Blueprint or Spring, depending on the context. Common bean validation 1.1 interceptors If you want to have more fine-grained control over the configuration of the bean validation, you can install the JAX-RS interceptors individually, instead of using the bean validation feature. Configure one or both of the following JAX-RS interceptors: org.apache.cxf.jaxrs.validation.JAXRSBeanValidationInInterceptor When installed in a JAX-RS 2.0 server endpoint, validates resource method parameters against validation constraints. If validation fails, raises the javax.validation.ConstraintViolationException exception. To install this interceptor, add it to the endpoint through the jaxrs:inInterceptors child element in XML. org.apache.cxf.jaxrs.validation.JAXRSBeanValidationOutInterceptor When installed in a JAX-RS 2.0 endpoint, validates response values against validation constraints. If validation fails, raises the javax.validation.ConstraintViolationException exception. To install this interceptor, add it to the endpoint through the jaxrs:outInterceptors child element in XML. Sample JAX-RS 2.0 configuration with bean validation interceptors The following XML example shows how to enable bean validation functionality in a JAX-RS 2.0 endpoint, by explicitly adding the relevant In interceptor bean and Out interceptor bean to the server endpoint: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class" . It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Configuring a BeanValidationProvider You can inject a custom BeanValidationProvider instance into the validation interceptors, as described in the section called "Configuring a BeanValidationProvider" . Configuring a JAXRSParameterNameProvider The org.apache.cxf.jaxrs.validation.JAXRSParameterNameProvider class is an implementation of the javax.validation.ParameterNameProvider interface, which can be used to provide the names for method and constructor parameters in the context of JAX-RS 2.0 endpoints.
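The 400/500 mapping performed by the ValidationExceptionMapper is easy to observe from the command line. The following sketch assumes that the addBook resource from the earlier examples is published at http://localhost:8080/books; the host, port, and context path are assumptions, not values from this guide:

# A valid request: id is numeric and name is between 1 and 50 characters
curl -i -X POST http://localhost:8080/books -d "id=123" -d "name=Iliad"
# An invalid request: id violates the \d+ pattern, so the mapper returns 400 Bad Request
curl -i -X POST http://localhost:8080/books -d "id=abc" -d "name=Iliad"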
[ "// Java import javax.validation.constraints.NotNull; import javax.validation.constraints.Max; import javax.validation.Valid; public class Person { @NotNull private String firstName; @NotNull private String lastName; @Valid @NotNull private Person boss; public @NotNull String saveItem( @Valid @NotNull Person person, @Max( 23 ) BigDecimal age ) { // } }", "<dependency> <groupId>javax.validation</groupId> <artifactId>validation-api</artifactId> <version>1.1.0.Final</version> </dependency> <dependency> <groupId>javax.el</groupId> <artifactId>javax.el-api</artifactId> <!-- use 3.0-b02 version for Java 6 --> <version>3.0.0</version> </dependency> <dependency> <groupId>org.glassfish</groupId> <artifactId>javax.el</artifactId> <!-- use 3.0-b01 version for Java 6 --> <version>3.0.0</version> </dependency>", "<dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-validator</artifactId> <version>5.0.3.Final</version> </dependency>", "<bean id=\"commonValidationFeature\" class=\"org.apache.cxf.validation.BeanValidationFeature\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>", "// Java package org.example; import static java.util.Collections.singletonList; import org.hibernate.validator.HibernateValidator; import javax.validation.ValidationProviderResolver; import java.util.List; /** * OSGi-friendly implementation of {@code javax.validation.ValidationProviderResolver} returning * {@code org.hibernate.validator.HibernateValidator} instance. * */ public class HibernateValidationProviderResolver implements ValidationProviderResolver { @Override public List getValidationProviders() { return singletonList(new HibernateValidator()); } }", "import javax.validation.constraints.NotNull; import javax.validation.constraints.Pattern; import javax.validation.constraints.Size; @POST @Path(\"/books\") public Response addBook( @NotNull @Pattern(regexp = \"\\\\d+\") @FormParam(\"id\") String id, @NotNull @Size(min = 1, max = 50) @FormParam(\"name\") String name) { // do some work return Response.created().build(); }", "import javax.validation.Valid; @POST @Path(\"/books\") public Response addBook( @Valid Book book ) { // do some work return Response.created().build(); }", "import javax.validation.constraints.NotNull; import javax.validation.constraints.Pattern; import javax.validation.constraints.Size; public class Book { @NotNull @Pattern(regexp = \"\\\\d+\") private String id; @NotNull @Size(min = 1, max = 50) private String name; // }", "import javax.validation.constraints.NotNull; import javax.validation.Valid; @GET @Path(\"/books/{bookId}\") @Override @NotNull @Valid public Book getBook(@PathParam(\"bookId\") String id) { return new Book( id ); }", "import javax.validation.constraints.NotNull; import javax.validation.Valid; import javax.ws.rs.core.Response; @GET @Path(\"/books/{bookId}\") @Valid @NotNull public Response getBookResponse(@PathParam(\"bookId\") String id) { return Response.ok( new Book( id ) ).build(); }", "<jaxws:endpoint xmlns:s=\"http://bookworld.com\" serviceName=\"s:BookWorld\" endpointName=\"s:BookWorldPort\" implementor=\"#bookWorldValidation\" address=\"/bwsoap\"> <jaxws:features> <ref bean=\"commonValidationFeature\" /> </jaxws:features> </jaxws:endpoint> <bean id=\"bookWorldValidation\" 
class=\"org.apache.cxf.systest.jaxrs.validation.spring.BookWorldImpl\"/> <bean id=\"commonValidationFeature\" class=\"org.apache.cxf.validation.BeanValidationFeature\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>", "<jaxws:endpoint xmlns:s=\"http://bookworld.com\" serviceName=\"s:BookWorld\" endpointName=\"s:BookWorldPort\" implementor=\"#bookWorldValidation\" address=\"/bwsoap\"> <jaxws:inInterceptors> <ref bean=\"validationInInterceptor\" /> </jaxws:inInterceptors> <jaxws:outInterceptors> <ref bean=\"validationOutInterceptor\" /> </jaxws:outInterceptors> </jaxws:endpoint> <bean id=\"bookWorldValidation\" class=\"org.apache.cxf.systest.jaxrs.validation.spring.BookWorldImpl\"/> <bean id=\"validationInInterceptor\" class=\"org.apache.cxf.validation.BeanValidationInInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"validationOutInterceptor\" class=\"org.apache.cxf.validation.BeanValidationOutInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>", "<bean id=\"validationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\" /> <bean id=\"validationInInterceptor\" class=\"org.apache.cxf.validation.BeanValidationInInterceptor\"> <property name=\"provider\" ref=\"validationProvider\" /> </bean> <bean id=\"validationOutInterceptor\" class=\"org.apache.cxf.validation.BeanValidationOutInterceptor\"> <property name=\"provider\" ref=\"validationProvider\" /> </bean>", "<jaxrs:server address=\"/bwrest\"> <jaxrs:serviceBeans> <ref bean=\"bookWorldValidation\"/> </jaxrs:serviceBeans> <jaxrs:providers> <ref bean=\"exceptionMapper\"/> </jaxrs:providers> <jaxrs:features> <ref bean=\"commonValidationFeature\" /> </jaxrs:features> </jaxrs:server> <bean id=\"bookWorldValidation\" class=\"org.apache.cxf.systest.jaxrs.validation.spring.BookWorldImpl\"/> <beanid=\"exceptionMapper\"class=\"org.apache.cxf.jaxrs.validation.ValidationExceptionMapper\"/> <bean id=\"commonValidationFeature\" class=\"org.apache.cxf.validation.BeanValidationFeature\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>", "<jaxrs:server address=\"/\"> <jaxrs:inInterceptors> <ref bean=\"validationInInterceptor\" /> </jaxrs:inInterceptors> <jaxrs:outInterceptors> <ref bean=\"validationOutInterceptor\" /> </jaxrs:outInterceptors> <jaxrs:serviceBeans> </jaxrs:serviceBeans> <jaxrs:providers> <ref bean=\"exceptionMapper\"/> </jaxrs:providers> </jaxrs:server> <bean id=\"exceptionMapper\" class=\"org.apache.cxf.jaxrs.validation.ValidationExceptionMapper\"/> <bean id=\"validationInInterceptor\" class=\"org.apache.cxf.validation.BeanValidationInInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\" /> </bean> <bean 
id=\"validationOutInterceptor\" class=\"org.apache.cxf.validation.BeanValidationOutInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\" /> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>", "<jaxrs:server address=\"/\"> <jaxrs:serviceBeans> </jaxrs:serviceBeans> <jaxrs:providers> <ref bean=\"exceptionMapper\"/> </jaxrs:providers> <jaxrs:features> <ref bean=\"jaxrsValidationFeature\" /> </jaxrs:features> </jaxrs:server> <bean id=\"exceptionMapper\" class=\"org.apache.cxf.jaxrs.validation.ValidationExceptionMapper\"/> <bean id=\"jaxrsValidationFeature\" class=\"org.apache.cxf.validation.JAXRSBeanValidationFeature\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>", "<jaxrs:server address=\"/\"> <jaxrs:inInterceptors> <ref bean=\"validationInInterceptor\" /> </jaxrs:inInterceptors> <jaxrs:outInterceptors> <ref bean=\"validationOutInterceptor\" /> </jaxrs:outInterceptors> <jaxrs:serviceBeans> </jaxrs:serviceBeans> <jaxrs:providers> <ref bean=\"exceptionMapper\"/> </jaxrs:providers> </jaxrs:server> <bean id=\"exceptionMapper\" class=\"org.apache.cxf.jaxrs.validation.ValidationExceptionMapper\"/> <bean id=\"validationInInterceptor\" class=\"org.apache.cxf.jaxrs.validation.JAXRSBeanValidationInInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\" /> </bean> <bean id=\"validationOutInterceptor\" class=\"org.apache.cxf.jaxrs.validation.JAXRSBeanValidationOutInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\" /> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/Validation
Generating a custom LLM using RHEL AI
Generating a custom LLM using RHEL AI Red Hat Enterprise Linux AI 1.4 Using SDG, training, and evaluation to create a custom LLM Red Hat RHEL AI Documentation Team
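Before the individual commands, it may help to see the overall flow that this guide walks through, assembled from the commands shown in the rest of this document. The dataset and checkpoint paths below are placeholders; the exact file names are generated per run:

# 1. Generate synthetic training data from your taxonomy
ilab data generate
# 2. Run multi-phase training on the generated knowledge and skills datasets
ilab model train --strategy lab-multiphase --phased-phase1-data ~/.local/share/instructlab/datasets/<generation-date>/<knowledge-train-messages-jsonl-file> --phased-phase2-data ~/.local/share/instructlab/datasets/<generation-date>/<skills-train-messages-jsonl-file>
# 3. Evaluate the best checkpoint from training
ilab model evaluate --benchmark mt_bench --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint>
# 4. Serve the custom model and chat with it
ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint>
ilab model chat --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint>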
[ "ilab data generate --num-cpus 4", "ilab data generate", "Starting a temporary vLLM server at http://127.0.0.1:47825/v1 INFO 2024-08-22 17:01:09,461 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 1/120 INFO 2024-08-22 17:01:14,213 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 2/120", "INFO 2024-08-22 15:16:43,497 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 74/120 INFO 2024-08-22 15:16:45,949 instructlab.model.backends.backends:487: vLLM engine successfully started at http://127.0.0.1:49311/v1 Generating synthetic data using '/usr/share/instructlab/sdg/pipelines/agentic' pipeline, '/var/home/cloud-user/.cache/instructlab/models/mixtral-8x7b-instruct-v0-1' model, '/var/home/cloud-user/.local/share/instructlab/taxonomy' taxonomy, against http://127.0.0.1:49311/v1 server INFO 2024-08-22 15:16:46,594 instructlab.sdg:375: Synthesizing new instructions. If you aren't satisfied with the generated instructions, interrupt training (Ctrl-C) and try adjusting your YAML files. Adding more examples may help.", "INFO 2024-08-16 17:12:46,548 instructlab.sdg.datamixing:200: Mixed Dataset saved to /home/example-user/.local/share/instructlab/datasets/skills_train_msgs_2024-08-16T16_50_11.jsonl INFO 2024-08-16 17:12:46,549 instructlab.sdg:438: Generation took 1355.74s", "ls 2024-03-24_194933", "knowledge_recipe_2024-03-24T20_54_21.yaml skills_recipe_2024-03-24T20_54_21.yaml knowledge_train_msgs_2024-03-24T20_54_21.jsonl skills_train_msgs_2024-03-24T20_54_21.jsonl messages_granite-7b-lab-Q4_K_M_2024-03-24T20_54_21.jsonl node_datasets_2024-03-24T15_12_12/", "cat ~/.local/share/datasets/<generation-date>/<jsonl-dataset>", "{\"messages\":[{\"content\":\"I am, Red Hat\\u00ae Instruct Model based on Granite 7B, an AI language model developed by Red Hat and IBM Research, based on the Granite-7b-base language model. My primary function is to be a chat assistant.\",\"role\":\"system\"},{\"content\":\"<|user|>\\n### Deep-sky objects\\n\\nThe constellation does not lie on the [galactic\\nplane](galactic_plane \\\"wikilink\\\") of the Milky Way, and there are no\\nprominent star clusters. [NGC 625](NGC_625 \\\"wikilink\\\") is a dwarf\\n[irregular galaxy](irregular_galaxy \\\"wikilink\\\") of apparent magnitude\\n11.0 and lying some 12.7 million light years distant. Only 24000 light\\nyears in diameter, it is an outlying member of the [Sculptor\\nGroup](Sculptor_Group \\\"wikilink\\\"). NGC 625 is thought to have been\\ninvolved in a collision and is experiencing a burst of [active star\\nformation](Active_galactic_nucleus \\\"wikilink\\\"). [NGC\\n37](NGC_37 \\\"wikilink\\\") is a [lenticular\\ngalaxy](lenticular_galaxy \\\"wikilink\\\") of apparent magnitude 14.66. It is\\napproximately 42 [kiloparsecs](kiloparsecs \\\"wikilink\\\") (137,000\\n[light-years](light-years \\\"wikilink\\\")) in diameter and about 12.9\\nbillion years old. [Robert's Quartet](Robert's_Quartet \\\"wikilink\\\")\\n(composed of the irregular galaxy [NGC 87](NGC_87 \\\"wikilink\\\"), and three\\nspiral galaxies [NGC 88](NGC_88 \\\"wikilink\\\"), [NGC 89](NGC_89 \\\"wikilink\\\")\\nand [NGC 92](NGC_92 \\\"wikilink\\\")) is a group of four galaxies located\\naround 160 million light-years away which are in the process of\\ncolliding and merging. 
They are within a circle of radius of 1.6 arcmin,\\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\\n243-49 is [HLX-1](HLX-1 \\\"wikilink\\\"), an [intermediate-mass black\\nhole](intermediate-mass_black_hole \\\"wikilink\\\")the first one of its kind\\nidentified. It is thought to be a remnant of a dwarf galaxy that was\\nabsorbed in a [collision](Interacting_galaxy \\\"wikilink\\\") with ESO\\n243-49. Before its discovery, this class of black hole was only\\nhypothesized.\\n\\nLying within the bounds of the constellation is the gigantic [Phoenix\\ncluster](Phoenix_cluster \\\"wikilink\\\"), which is around 7.3 million light\\nyears wide and 5.7 billion light years away, making it one of the most\\nmassive [galaxy clusters](galaxy_cluster \\\"wikilink\\\"). It was first\\ndiscovered in 2010, and the central galaxy is producing an estimated 740\\nnew stars a year. Larger still is [El\\nGordo](El_Gordo_(galaxy_cluster) \\\"wikilink\\\"), or officially ACT-CL\\nJ0102-4915, whose discovery was announced in 2012.", "ilab data generate -dt", "INFO 2025-01-15 11:36:47,557 instructlab.process.process:236: Started subprocess with PID 68289. Logs are being written to /Users/<user-name>/.local/share/instructlab/logs/generation/generation-e85623ac-d35e-11ef-bc70-2a1c6126d703.log.", "ilab process list", "+------------+-------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+----------+---------+ | Type | PID | UUID | Log File | Runtime | Status | +------------+-------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+----------+---------+ | Generation | 30334 | f2623406-de55-11ef-b684-2a1c6126d703 | /Users/<user-name>/.local/share/instructlab/logs/generation/generation-f2623406-de55-11ef-b684-2a1c6126d703.log| 00:08:30 | Running | +------------+-------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+----------+---------+", "ilab process attach --latest", "ilab model train --strategy lab-multiphase --phased-phase1-data ~/.local/share/instructlab/datasets/<generation-date>/<knowledge-train-messages-jsonl-file> --phased-phase2-data ~/.local/share/instructlab/datasets/<generation-date>/<skills-train-messages-jsonl-file>", "ilab model train --strategy lab-skills-only --phased-phase2-data ~/.local/share/instructlab/datasets/<skills-train-messages-jsonl-file>", "Training Phase 1/2 TrainingArgs for current phase: TrainingArgs(model_path='/opt/app-root/src/.cache/instructlab/models/granite-7b-starter', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/tmp/jul19-knowledge-26k.jsonl', ckpt_output_dir='/tmp/e2e/phase1/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=128, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 
'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>))", "Training Phase 2/2 TrainingArgs for current phase: TrainingArgs(model_path='/tmp/e2e/phase1/checkpoints/hf_format/samples_52096', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/usr/share/instructlab/sdg/datasets/skills.jsonl', ckpt_output_dir='/tmp/e2e/phase2/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=3840, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>))", "MT-Bench evaluation for Phase 2 Using gpus from --gpus or evaluate config and ignoring --tensor-parallel-size configured in serve vllm_args INFO 2024-08-15 10:04:51,065 instructlab.model.backends.backends:437: Trying to connect to model server at http://127.0.0.1:8000/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.vllm:208: vLLM starting up on pid 79388 at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:450: Starting a temporary vLLM server at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 1/300 INFO 2024-08-15 10:04:58,003 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 2/300 INFO 2024-08-15 10:05:02,314 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 3/300 moment... Attempt: 3/300 INFO 2024-08-15 10:06:07,611 instructlab.model.backends.backends:472: vLLM engine successfully started at http://127.0.0.1:54265/v1", "Training finished! Best final checkpoint: samples_1945 with score: 6.813759384", "ls ~/.local/share/instructlab/phase/<phase1-or-phase2>/checkpoints/", "samples_1711 samples_1945 samples_1456 samples_1462 samples_1903", "ilab model train --strategy lab-multiphase --phased-phase1-data ~/.local/share/instructlab/datasets/<generation-date>/<knowledge-train-messages-jsonl-file> --phased-phase2-data ~/.local/share/instructlab/datasets/<generation-date>/<skills-train-messages-jsonl-file>", "Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? n", "Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? 
y", "ilab model evaluate --benchmark mmlu_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> --tasks-dir ~/.local/share/instructlab/datasets/<generation-date>/<node-dataset> --base-model ~/.cache/instructlab/models/granite-7b-starter", "KNOWLEDGE EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab/ (0.74/1.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(0.78/1.0) ### IMPROVEMENTS (0.0 to 1.0): 1. tonsils: 0.74 -> 0.78 (+0.04)", "ilab model evaluate --benchmark mt_bench_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> --judge-model ~/.cache/instructlab/models/prometheus-8x7b-v2-0 --branch <worker-branch> --base-branch <worker-branch>", "SKILL EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab (5.78/10.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(6.00/10.0) ### IMPROVEMENTS (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/object_identification/qna.yaml: 4.0 -> 6.67 (+2.67) 2. foundational_skills/reasoning/theory_of_mind/qna.yaml: 3.12 -> 4.0 (+0.88) 3. foundational_skills/reasoning/linguistics_reasoning/logical_sequence_of_words/qna.yaml: 9.33 -> 10.0 (+0.67) 4. foundational_skills/reasoning/logical_reasoning/tabular/qna.yaml: 5.67 -> 6.33 (+0.67) 5. foundational_skills/reasoning/common_sense_reasoning/qna.yaml: 1.67 -> 2.33 (+0.67) 6. foundational_skills/reasoning/logical_reasoning/causal/qna.yaml: 5.67 -> 6.0 (+0.33) 7. foundational_skills/reasoning/logical_reasoning/general/qna.yaml: 6.6 -> 6.8 (+0.2) 8. compositional_skills/writing/grounded/editing/content/qna.yaml: 6.8 -> 7.0 (+0.2) 9. compositional_skills/general/synonyms/qna.yaml: 4.5 -> 4.67 (+0.17) ### REGRESSIONS (0.0 to 10.0): 1. foundational_skills/reasoning/unconventional_reasoning/lower_score_wins/qna.yaml: 5.67 -> 4.0 (-1.67) 2. foundational_skills/reasoning/mathematical_reasoning/qna.yaml: 7.33 -> 6.0 (-1.33) 3. foundational_skills/reasoning/temporal_reasoning/qna.yaml: 5.67 -> 4.67 (-1.0) ### NO CHANGE (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/odd_one_out/qna.yaml (9.33) 2. 
compositional_skills/grounded/linguistics/inclusion/qna.yaml (6.5)", "ilab model evaluate --benchmark mmlu --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665", "KNOWLEDGE EVALUATION REPORT ## MODEL (SCORE) /home/user/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665 ### SCORES (0.0 to 1.0): mmlu_abstract_algebra - 0.31 mmlu_anatomy - 0.46 mmlu_astronomy - 0.52 mmlu_business_ethics - 0.55 mmlu_clinical_knowledge - 0.57 mmlu_college_biology - 0.56 mmlu_college_chemistry - 0.38 mmlu_college_computer_science - 0.46", "ilab model evaluate --benchmark mt_bench --model ~/.local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665", "SKILL EVALUATION REPORT ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(7.27/10.0) ### TURN ONE (0.0 to 10.0): 7.48 ### TURN TWO (0.0 to 10.0): 7.05", "{\"user_input\":\"What is the capital of Canada?\",\"reference\":\"The capital of Canada is Ottawa.\"}", "ilab model evaluate --benchmark dk_bench --input-questions <path-to-jsonl-file> --model <path-to-model>", "ilab model evaluate --benchmark dk_bench --input-questions /home/use/path/to/questions.jsonl --model ~/.cache/instructlab/models/instructlab/granite-7b-lab", "DK-BENCH REPORT ## MODEL: granite-7b-lab Question #1: 5/5 Question #2: 5/5 Question #3: 5/5 Question #4: 5/5 Question #5: 2/5 Question #6: 3/5 Question #7: 2/5 Question #8: 3/5 Question #9: 5/5 Question #10: 5/5 ---------------------------- Average Score: 4.00/5 Total Score: 40/50", "ilab model serve --model-path <path-to-best-performed-checkpoint>", "ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945/", "ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> INFO 2024-03-02 02:21:11,352 lab.py:201 Using model /home/example-user/.local/share/instructlab/checkpoints/hf_format/checkpoint_1945 with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server.", "ilab model chat --model <path-to-best-performed-checkpoint-file>", "ilab model chat --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945", "ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ CHECKPOINT_1945 (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html-single/generating_a_custom_llm_using_rhel_ai/index
Chapter 111. ReplicasChangeStatus schema reference
Chapter 111. ReplicasChangeStatus schema reference Used in: KafkaTopicStatus
Property (property type): Description
targetReplicas (integer): The target replicas value requested by the user. This may be different from .spec.replicas when a change is ongoing.
state (string, one of [ongoing, pending]): Current state of the replicas change operation. This can be pending , when the change has been requested, or ongoing , when the change has been successfully submitted to Cruise Control.
message (string): Message for the user related to the replicas change request. This may contain transient error messages that would disappear on periodic reconciliations.
sessionId (string): The session identifier for replicas change requests pertaining to this KafkaTopic resource. This is used by the Topic Operator to track the status of ongoing replicas change operations.
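To make the schema concrete, the following sketch shows roughly how these fields can surface in the status of a KafkaTopic resource while a replicas change is in flight. The topic name, namespace, and field values are invented for illustration, and the exact nesting of the status section should be checked against the KafkaTopicStatus reference:

oc get kafkatopic my-topic -n kafka -o yaml
# Example status fragment (illustrative only):
#   status:
#     replicasChange:
#       state: ongoing
#       targetReplicas: 5
#       sessionId: 1aa418ca-53ed-4b93-b0a4-58413c4fc0cb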
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-replicaschangestatus-reference
Chapter 6. Ceph File System quotas
Chapter 6. Ceph File System quotas As a storage administrator, you can view, set, and remove quotas on any directory in the file system. You can place quota restrictions on the number of bytes or the number of files within the directory. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Make sure that the attr package is installed. 6.1. Ceph File System quotas The Ceph File System (CephFS) quotas allow you to restrict the number of bytes or the number of files stored in the directory structure. Ceph File System quotas are fully supported using a FUSE client or using Kernel clients, version 4.17 or newer. Limitations CephFS quotas rely on the cooperation of the client mounting the file system to stop writing data when it reaches the configured limit. However, quotas alone cannot prevent an adversarial, untrusted client from filling the file system. Once processes that write data to the file system reach the configured limit, a short period of time elapses between when the amount of data reaches the quota limit, and when the processes stop writing data. The time period is generally measured in tenths of seconds. However, processes continue to write data during that time. The amount of additional data that the processes write depends on the amount of time elapsed before they stop. When using path-based access restrictions, be sure to configure the quota on the directory to which the client is restricted, or to a directory nested beneath it. If the client has restricted access to a specific path based on the MDS capability, and the quota is configured on an ancestor directory that the client cannot access, the client will not enforce the quota. For example, if the client cannot access the /home/ directory and the quota is configured on /home/ , the client cannot enforce that quota on the directory /home/user/ . Snapshot file data that has been deleted or changed does not count towards the quota. No support for quotas with NFS clients when using setxattr , and no support for file-level quotas on NFS. To use quotas on NFS shares, you can export them by using subvolumes and setting the --size option. 6.2. Viewing quotas Use the getfattr command and the ceph.quota extended attributes to view the quota settings for a directory. Note If the attributes appear on a directory inode, then that directory has a configured quota. If the attributes do not appear on the inode, then the directory does not have a quota set, although its parent directory might have a quota configured. If the value of the extended attribute is 0 , the quota is not set. Prerequisites Root-level access to the Ceph client node. The attr package is installed. Procedure To view CephFS quotas. Using a byte-limit quota: Syntax Example In this example, 100000000 equals 100 MB. Using a file-limit quota: Syntax Example In this example, 10000 equals 10,000 files. Additional Resources See the getfattr(1) manual page for more information. 6.3. Setting quotas This section describes how to use the setfattr command and the ceph.quota extended attributes to set the quota for a directory. Prerequisites Root-level access to the Ceph client node. The attr package is installed. Procedure Set the quota for a directory by using a byte-limit quota: Note The following values are supported for byte-limit quota: K, Ki, M, Mi, G, Gi, T, and Ti. Syntax Example Set the quota for a directory by using a file-limit quota: Syntax Example In this example, 10000 equals 10,000 files. 
Note Only numerical values are supported for the file LIMIT_VALUE . Additional Resources See the setfattr(1) manual page for more information. 6.4. Removing quotas This section describes how to use the setfattr command and the ceph.quota extended attributes to remove a quota from a directory. Prerequisites Root-level access to the Ceph client node. Make sure that the attr package is installed. Procedure To remove CephFS quotas. Using a byte-limit quota: Syntax Example Using a file-limit quota: Syntax Example Additional Resources See the setfattr(1) manual page for more information. Additional Resources See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide . See the getfattr(1) manual page for more information. See the setfattr(1) manual page for more information.
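The Syntax and Example placeholders in this chapter correspond to the getfattr and setfattr commands collected at the end of the chapter. As a consolidated, minimal sketch, assuming a CephFS file system is mounted at /mnt/cephfs (the mount point and the 100 MB and 10,000-file limits are illustrative):

# Set a byte-limit quota of 100 MB and a file-limit quota of 10,000 files (example values)
setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/
setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/

# View the configured quotas
getfattr -n ceph.quota.max_bytes /mnt/cephfs/
getfattr -n ceph.quota.max_files /mnt/cephfs/

# Remove both quotas by setting the values back to 0
setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/
setfattr -n ceph.quota.max_files -v 0 /mnt/cephfs/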
[ "getfattr -n ceph.quota.max_bytes DIRECTORY", "getfattr -n ceph.quota.max_bytes /mnt/cephfs/ getfattr: Removing leading '/' from absolute path names file: mnt/cephfs/ ceph.quota.max_bytes=\"100000000\"", "getfattr -n ceph.quota.max_files DIRECTORY", "getfattr -n ceph.quota.max_files /mnt/cephfs/ getfattr: Removing leading '/' from absolute path names file: mnt/cephfs/ ceph.quota.max_files=\"10000\"", "setfattr -n ceph.quota.max_bytes -v LIMIT_VALUE DIRECTORY", "setfattr -n ceph.quota.max_bytes -v 2T /cephfs/", "setfattr -n ceph.quota.max_files -v LIMIT_VALUE DIRECTORY", "setfattr -n ceph.quota.max_files -v 10000 /cephfs/", "setfattr -n ceph.quota.max_bytes -v 0 DIRECTORY", "setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/", "setfattr -n ceph.quota.max_files -v 0 DIRECTORY", "setfattr -n ceph.quota.max_files -v 0 /mnt/cephfs/" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/file_system_guide/ceph-file-system-quotas
Chapter 5. Uninstalling Server and Replica Containers
Chapter 5. Uninstalling Server and Replica Containers This chapter describes how you can uninstall an Identity Management server or replica container. 5.1. Uninstalling a Server or Replica Container This procedure shows how to uninstall an Identity Management server or replica container and make sure the server or replica is properly removed from the topology. Procedure To ensure that a replica container belonging to an existing topology is properly removed from that topology, use the ipa server-del <container-host-name> command on any enrolled host. This step is necessary because the atomic uninstall command does not: Perform checks to prevent disconnected domain level 1 topology or the removal of the last certificate authority (CA), key recovery authority (KRA), or DNS server Remove the replica container from the existing topology Use the atomic uninstall command, and include the container name and image name: 5.2. Steps After Uninstalling You can find a backup of the container's mounted data directory under /var/lib/<container_name>.backup.<timestamp> . If you need to create a new container, the backup enables you to reuse the persistent data stored in the volume.
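A minimal sketch of the full removal sequence, assuming a replica container named ipa-replica running on the host replica.example.com (both names are illustrative):

# On any enrolled host, authenticate as an IdM administrator and remove the replica from the topology
kinit admin
ipa server-del replica.example.com

# On the container host, uninstall the container itself
atomic uninstall --name ipa-replica rhel7/ipa-server

# The mounted data directory is preserved as a backup, for example:
ls -d /var/lib/ipa-replica.backup.*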
[ "atomic uninstall --name <container_name> rhel7/ipa-server" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/using_containerized_identity_management_services/uninstalling-server-and-replica-containers
Chapter 4. Management of Ceph File System volumes, sub-volume groups, and sub-volumes
Chapter 4. Management of Ceph File System volumes, sub-volume groups, and sub-volumes As a storage administrator, you can use Red Hat's Ceph Container Storage Interface (CSI) to manage Ceph File System (CephFS) exports. This also allows you to use other services, such as OpenStack's file system service (Manila) by having a common command-line interface to interact with. The volumes module for the Ceph Manager daemon ( ceph-mgr ) implements the ability to export Ceph File Systems (CephFS). The Ceph Manager volumes module implements the following file system export abstractions: CephFS volumes CephFS subvolume groups CephFS subvolumes This chapter describes how to work with: Ceph File System volumes Ceph File System subvolume groups Ceph File System subvolumes 4.1. Ceph File System volumes As a storage administrator, you can create, list, and remove Ceph File System (CephFS) volumes. CephFS volumes are an abstraction for Ceph File Systems. This section describes how to: Create a file system volume. List file system volume. Remove a file system volume. 4.1.1. Creating a file system volume Ceph Manager's orchestrator module creates a Metadata Server (MDS) for the Ceph File System (CephFS). This section describes how to create a CephFS volume. Note This creates the Ceph File System, along with the data and metadata pools. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS volume: Syntax Example 4.1.2. Listing file system volume This section describes the step to list the Ceph File system (CephFS) volumes. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume. Procedure List the CephFS volume: Example 4.1.3. Removing a file system volume Ceph Manager's orchestrator module removes the Metadata Server (MDS) for the Ceph File System (CephFS). This section shows how to remove the Ceph File System (CephFS) volume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume. Procedure If the mon_allow_pool_delete option is not set to true , then set it to true before removing the CephFS volume: Example Remove the CephFS volume: Syntax Example 4.2. Ceph File System subvolume groups As a storage administrator, you can create, list, fetch absolute path, and remove Ceph File System (CephFS) subvolume groups. CephFS subvolume groups are abstractions at a directory level which effects policies, for example, file layouts, across a set of subvolumes. Starting with Red Hat Ceph Storage 5.0, the subvolume group snapshot feature is not supported. You can only list and remove the existing snapshots of these subvolume groups. This section describes how to: Create a file system subvolume group. List file system subvolume groups. Fetch absolute path of a file system subvolume group. Create snapshot of a file system subvolume group. List snapshots of a file system subvolume group. Remove snapshot of a file system subvolume group. Remove a file system subvolume group. 4.2.1. Creating a file system subvolume group This section describes how to create a Ceph File System (CephFS) subvolume group. 
Note When creating a subvolume group, you can specify its data pool layout, uid, gid, and file mode in octal numerals. By default, the subvolume group is created with an octal file mode '755', uid '0', gid '0', and data pool layout of its parent directory. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS subvolume group: Syntax Example The command succeeds even if the subvolume group already exists. 4.2.2. Listing file system subvolume groups This section describes the step to list the Ceph File System (CephFS) subvolume groups. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure List the CephFS subvolume groups: Syntax Example 4.2.3. Fetching absolute path of a file system subvolume group This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure Fetch the absolute path of the CephFS subvolume group: Syntax Example 4.2.4. Creating snapshot of a file system subvolume group This section shows how to create snapshots of Ceph File system (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. CephFS subvolume group. In addition to read ( r ) and write ( w ) capabilities, clients also require s flag on a directory path within the file system. Procedure Verify that the s flag is set on the directory: Syntax Example 1 2 In the example, client.0 can create or delete snapshots in the bar directory of file system cephfs_a . Create a snapshot of the CephFS subvolume group: Syntax Example The command implicitly snapshots all the subvolumes under the subvolume group. 4.2.5. Listing snapshots of a file system subvolume group This section provides the steps to list the snapshots of a Ceph File System (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Snapshots of the subvolume group. Procedure List the snapshots of a CephFS subvolume group: Syntax Example 4.2.6. Removing snapshot of a file system subvolume group This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume group. Note Using the --force flag allows the command to succeed that would otherwise fail if the snapshot did not exist. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A Ceph File System volume. A snapshot of the subvolume group. Procedure Remove the snapshot of the CephFS subvolume group: Syntax Example 4.2.7. Removing a file system subvolume group This section shows how to remove the Ceph File System (CephFS) subvolume group. Note The removal of a subvolume group fails if it is not empty or non-existent. The --force flag allows the non-existent subvolume group to be removed. 
Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure Remove the CephFS subvolume group: Syntax Example 4.3. Ceph File System subvolumes As a storage administrator, you can create, list, fetch absolute path, fetch metadata, and remove Ceph File System (CephFS) subvolumes. Additionally, you can also create, list, and remove snapshots of these subvolumes. CephFS subvolumes are an abstraction for independent Ceph File Systems directory trees. This section describes how to: Create a file system subvolume. List file system subvolume. Resizing a file system subvolume. Fetch absolute path of a file system subvolume. Fetch metadata of a file system subvolume. Create snapshot of a file system subvolume. Cloning subvolumes from snapshots. List snapshots of a file system subvolume. Fetching metadata of the snapshots of a file system subvolume. Remove a file system subvolume. Remove snapshot of a file system subvolume. 4.3.1. Creating a file system subvolume This section describes how to create a Ceph File System (CephFS) subvolume. Note When creating a subvolume, you can specify its subvolume group, data pool layout, uid, gid, file mode in octal numerals, and size in bytes. The subvolume can be created in a separate RADOS namespace by specifying the --namespace-isolated option. By default, a subvolume is created within the default subvolume group, and with an octal file mode '755', uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory, and no size limit. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS subvolume: Syntax Example The command succeeds even if the subvolume already exists. 4.3.2. Listing file system subvolume This section describes the step to list the Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure List the CephFS subvolume: Syntax Example 4.3.3. Resizing a file system subvolume This section describes the step to resize the Ceph File System (CephFS) subvolume. Note The ceph fs subvolume resize command resizes the subvolume quota using the size specified by new_size . The --no_shrink flag prevents the subvolume from shrinking below the currently used size of the subvolume. The subvolume can be resized to an infinite size by passing inf or infinite as the new_size . Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Resize a CephFS subvolume: Syntax Example 4.3.4. Fetching absolute path of a file system subvolume This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Fetch the absolute path of the CephFS subvolume: Syntax Example 4.3.5. 
Fetching metadata of a file system subvolume This section shows how to fetch metadata of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Fetch the metadata of a CephFS subvolume: Syntax Example Example output The output format is JSON and contains the following fields: atime : access time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". bytes_pcent : quota used in percentage if quota is set, else displays "undefined". bytes_quota : quota size in bytes if quota is set, else displays "infinite". bytes_used : current used size of the subvolume in bytes. created_at : time of creation of subvolume in the format "YYYY-MM-DD HH:MM:SS". ctime : change time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". data_pool : data pool the subvolume belongs to. features : features supported by the subvolume, such as , "snapshot-clone", "snapshot-autoprotect", or "snapshot-retention". flavor : subvolume version, either 1 for version one or 2 for version two. gid : group ID of subvolume path. mode : mode of subvolume path. mon_addrs : list of monitor addresses. mtime : modification time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". path : absolute path of a subvolume. pool_namespace : RADOS namespace of the subvolume. state : current state of the subvolume, such as, "complete" or "snapshot-retained". type : subvolume type indicating whether it is a clone or subvolume. uid : user ID of subvolume path. 4.3.6. Creating snapshot of a file system subvolume This section shows how to create snapshots of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. In addition to read ( r ) and write ( w ) capabilities, clients also require s flag on a directory path within the file system. Procedure Verify that the s flag is set on the directory: Syntax Example 1 2 In the example, client.0 can create or delete snapshots in the bar directory of file system cephfs_a . Create a snapshot of the Ceph File System subvolume: Syntax Example 4.3.7. Cloning subvolumes from snapshots Subvolumes can be created by cloning subvolume snapshots. It is an asynchronous operation involving copying data from a snapshot to a subvolume. Note Cloning is inefficient for very large data sets. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. To create or delete snapshots, in addition to read and write capability, clients require s flag on a directory path within the filesystem. Syntax Example In the above example, client.0 can create or delete snapshots in the bar directory of filesystem cephfs_a . Procedure Create a Ceph File System (CephFS) volume: Syntax Example This creates the CephFS file system, its data and metadata pools. Create a subvolume group. By default, the subvolume group is created with an octal file mode '755', and data pool layout of its parent directory. Syntax Example Create a subvolume. 
By default, a subvolume is created within the default subvolume group, and with an octal file mode '755', uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory, and no size limit. Syntax Example Create a snapshot of a subvolume: Syntax Example Initiate a clone operation: Note By default, cloned subvolumes are created in the default group. If the source subvolume and the target clone are in the default group, run the following command: Syntax Example If the source subvolume is in the non-default group, then specify the source subvolume group in the following command: Syntax Example If the target clone is to a non-default group, then specify the target group in the following command: Syntax Example Check the status of the clone operation: Syntax Example Additional Resources See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide . 4.3.8. Listing snapshots of a file system subvolume This section provides the step to list the snapshots of a Ceph File system (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Snapshots of the subvolume. Procedure List the snapshots of a CephFS subvolume: Syntax Example 4.3.9. Fetching metadata of the snapshots of a file system subvolume This section provides the step to fetch the metadata of the snapshots of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with CephFS deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Snapshots of the subvolume. Procedure Fetch the metadata of the snapshots of a CephFS subvolume: Syntax Example Example output The output format is JSON and contains the following fields: created_at : time of creation of snapshot in the format "YYYY-MM-DD HH:MM:SS:ffffff". data_pool : data pool the snapshot belongs to. has_pending_clones : "yes" if snapshot clone is in progress otherwise "no". size : snapshot size in bytes. 4.3.10. Removing a file system subvolume This section describes the step to remove the Ceph File System (CephFS) subvolume. Note The ceph fs subvolume rm command removes the subvolume and its contents in two steps. First, it moves the subvolume to a trash folder, and then asynchronously purges its contents. A subvolume can be removed retaining existing snapshots of the subvolume using the --retain-snapshots option. If snapshots are retained, the subvolume is considered empty for all operations not involving the retained snapshots. Retained snapshots can be used as a clone source to recreate the subvolume, or cloned to a newer subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Remove a CephFS subvolume: Syntax Example To recreate a subvolume from a retained snapshot: Syntax NEW_SUBVOLUME can either be the same subvolume which was deleted earlier or clone it to a new subvolume. Example 4.3.11. Removing snapshot of a file system subvolume This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume group. Note Using the --force flag allows the command to succeed that would otherwise fail if the snapshot did not exist. 
Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A Ceph File System volume. A snapshot of the subvolume group. Procedure Remove the snapshot of the CephFS subvolume: Syntax Example 4.4. Metadata information on Ceph File System subvolumes As a storage administrator, you can set, get, list, and remove metadata information of Ceph File System (CephFS) subvolumes. The custom metadata is for users to store their metadata in subvolumes. Users can store the key-value pairs similar to xattr in a Ceph File System. This section describes how to: Setting custom metadata on the file system subvolume Getting custom metadata on the file system subvolume Listing custom metadata on the file system subvolume Removing custom metadata from the file system subvolume 4.4.1. Setting custom metadata on the file system subvolume You can set custom metadata on the file system subvolume as a key-value pair. Note If the key_name already exists then the old value is replaced by the new value. Note The KEY_NAME and VALUE should be a string of ASCII characters as specified in python's string.printable . The KEY_NAME is case-insensitive and is always stored in lower case. Important Custom metadata on a subvolume is not preserved when snapshotting the subvolume, and hence, is also not preserved when cloning the subvolume snapshot. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph File System (CephFS), CephFS volume, subvolume group, and subvolume created. Procedure Set the metadata on the CephFS subvolume: Syntax Example Optional: Set the custom metadata with a space in the KEY_NAME : Example This creates another metadata with KEY_NAME as test meta for the VALUE cluster . Optional: You can also set the same metadata with a different value: Example 4.4.2. Getting custom metadata on the file system subvolume You can get the custom metadata, the key-value pairs, of a Ceph File System (CephFS) in a volume, and optionally, in a specific subvolume group. Prerequisites A running Red Hat Ceph Storage cluster. A CephFS volume, subvolume group, and subvolume created. A custom metadata created on the CephFS subvolume. Procedure Get the metadata on the CephFS subvolume: Syntax Example 4.4.3. Listing custom metadata on the file system subvolume You can list the custom metadata associated with the key of a Ceph File System (CephFS) in a volume, and optionally, in a specific subvolume group. Prerequisites A running Red Hat Ceph Storage cluster. A CephFS volume, subvolume group, and subvolume created. A custom metadata created on the CephFS subvolume. Procedure List the metadata on the CephFS subvolume: Syntax Example 4.4.4. Removing custom metadata from the file system subvolume You can remove the custom metadata, the key-value pairs, of a Ceph File System (CephFS) in a volume, and optionally, in a specific subvolume group. Prerequisites A running Red Hat Ceph Storage cluster. A CephFS volume, subvolume group, and subvolume created. A custom metadata created on the CephFS subvolume. Procedure Remove the custom metadata on the CephFS subvolume: Syntax Example List the metadata: Example 4.5. Additional Resources See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide .
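The Syntax and Example placeholders throughout this chapter map to the ceph fs commands collected at the end of the chapter. As a condensed, minimal sketch of the whole lifecycle, assuming a healthy cluster with an available MDS and using illustrative names (cephfs, subgroup0, sub0, snap0, clone0) and an illustrative 1 GB size limit:

# Create a volume, a subvolume group, and a subvolume
ceph fs volume create cephfs
ceph fs subvolumegroup create cephfs subgroup0
ceph fs subvolume create cephfs sub0 --group_name subgroup0 --size 1024000000

# Snapshot the subvolume, clone the snapshot into a new subvolume in the default group,
# and check that the clone state reaches "complete"
ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0
ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0
ceph fs clone status cephfs clone0

# Attach custom metadata to the subvolume and read it back
ceph fs subvolume metadata set cephfs sub0 test_meta cluster --group_name subgroup0
ceph fs subvolume metadata get cephfs sub0 test_meta --group_name subgroup0

# Clean up the example objects once the clone has completed
ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0
ceph fs subvolume rm cephfs sub0 --group_name subgroup0
ceph fs subvolume rm cephfs clone0
ceph fs subvolumegroup rm cephfs subgroup0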
[ "ceph fs volume create VOLUME_NAME", "ceph fs volume create cephfs", "ceph fs volume ls", "ceph config set mon mon_allow_pool_delete true", "ceph fs volume rm VOLUME_NAME [--yes-i-really-mean-it]", "ceph fs volume rm cephfs --yes-i-really-mean-it", "ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolumegroup create cephfs subgroup0", "ceph fs subvolumegroup ls VOLUME_NAME", "ceph fs subvolumegroup ls cephfs", "ceph fs subvolumegroup getpath VOLUME_NAME GROUP_NAME", "ceph fs subvolumegroup getpath cephfs subgroup0", "ceph auth get CLIENT_NAME", "client.0 key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path=/bar 1 caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a 2", "ceph fs subvolumegroup snapshot create VOLUME_NAME GROUP_NAME SNAP_NAME", "ceph fs subvolumegroup snapshot create cephfs subgroup0 snap0", "ceph fs subvolumegroup snapshot ls VOLUME_NAME GROUP_NAME", "ceph fs subvolumegroup snapshot ls cephfs subgroup0", "ceph fs subvolumegroup snapshot rm VOLUME_NAME GROUP_NAME SNAP_NAME [--force]", "ceph fs subvolumegroup snapshot rm cephfs subgroup0 snap0 --force", "ceph fs subvolumegroup rm VOLUME_NAME GROUP_NAME [--force]", "ceph fs subvolumegroup rm cephfs subgroup0 --force", "ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ] [--namespace-isolated]", "ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated", "ceph fs subvolume ls VOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume ls cephfs --group_name subgroup0", "ceph fs subvolume resize VOLUME_NAME SUBVOLUME_NAME NEW_SIZE [--group_name SUBVOLUME_GROUP_NAME ] [--no_shrink]", "ceph fs subvolume resize cephfs sub0 1024000000 --group_name subgroup0 --no_shrink", "ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume getpath cephfs sub0 --group_name subgroup0", "ceph fs subvolume info VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume info cephfs sub0 --group_name subgroup0", "ceph fs subvolume info cephfs sub0 { \"atime\": \"2023-07-14 08:52:46\", \"bytes_pcent\": \"0.00\", \"bytes_quota\": 1024000000, \"bytes_used\": 0, \"created_at\": \"2023-07-14 08:52:46\", \"ctime\": \"2023-07-14 08:53:54\", \"data_pool\": \"cephfs.cephfs.data\", \"features\": [ \"snapshot-clone\", \"snapshot-autoprotect\", \"snapshot-retention\" ], \"flavor\": \"2\", \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"10.0.208.172:6789\", \"10.0.211.197:6789\", \"10.0.209.212:6789\" ], \"mtime\": \"2023-07-14 08:52:46\", \"path\": \"/volumes/_nogroup/sub0/834c5cbc-f5db-4481-80a3-aca92ff0e7f3\", \"pool_namespace\": \"\", \"state\": \"complete\", \"type\": \"subvolume\", \"uid\": 0 }", "ceph auth get CLIENT_NAME", "ceph auth get client.0 [client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" 1 caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\" 2", "ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME ]", "ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0", "CLIENT_NAME key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = allow rw, allow rws path= DIRECTORY_PATH caps mon = allow r caps osd = allow rw tag cephfs data= DIRECTORY_NAME", 
"[client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"", "ceph fs volume create VOLUME_NAME", "ceph fs volume create cephfs", "ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolumegroup create cephfs subgroup0", "ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolume create cephfs sub0 --group_name subgroup0", "ceph fs subvolume snapshot create VOLUME_NAME _SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --group_name SUBVOLUME_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --target_group_name SUBVOLUME_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --target_group_name subgroup1", "ceph fs clone status VOLUME_NAME CLONE_NAME [--group_name TARGET_GROUP_NAME ]", "ceph fs clone status cephfs clone0 --group_name subgroup1 { \"status\": { \"state\": \"complete\" } }", "ceph fs subvolume snapshot ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0", "ceph fs subvolume snapshot info VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot info cephfs sub0 snap0 --group_name subgroup0", "{ \"created_at\": \"2022-05-09 06:18:47.330682\", \"data_pool\": \"cephfs_data\", \"has_pending_clones\": \"no\", \"size\": 0 }", "ceph fs subvolume rm VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ] [--force] [--retain-snapshots]", "ceph fs subvolume rm cephfs sub0 --group_name subgroup0 --retain-snapshots", "ceph fs subvolume snapshot clone VOLUME_NAME DELETED_SUBVOLUME RETAINED_SNAPSHOT NEW_SUBVOLUME --group_name SUBVOLUME_GROUP_NAME --target_group_name SUBVOLUME_TARGET_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 sub1 --group_name subgroup0 --target_group_name subgroup0", "ceph fs subvolume snapshot rm VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME --force]", "ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0 --force", "ceph fs subvolume metadata set VOLUME_NAME SUBVOLUME_NAME KEY_NAME VALUE [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata set cephfs sub0 test_meta cluster --group_name subgroup0", "ceph fs subvolume metadata set cephfs sub0 \"test meta\" cluster --group_name subgroup0", "ceph fs subvolume metadata set cephfs sub0 \"test_meta\" cluster2 --group_name subgroup0", "ceph fs subvolume metadata get VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata get cephfs sub0 test_meta --group_name subgroup0 cluster", "ceph fs subvolume metadata ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata ls cephfs sub0 { \"test_meta\": \"cluster\" }", "ceph fs subvolume 
metadata rm VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata rm cephfs sub0 test_meta --group_name subgroup0", "ceph fs subvolume metadata ls cephfs sub0 {}" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/file_system_guide/management-of-ceph-file-system-volumes-subvolume-groups-and-subvolumes
Monitoring server and database activity
Monitoring server and database activity Red Hat Directory Server 12 Monitor Red Hat Directory Server activity, replication topology, and database activity Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/monitoring_server_and_database_activity/index
Chapter 3. Zero trust networking
Chapter 3. Zero trust networking Zero trust is an approach to designing security architectures based on the premise that every interaction begins in an untrusted state. This contrasts with traditional architectures, which might determine trustworthiness based on whether communication starts inside a firewall. More specifically, zero trust attempts to close gaps in security architectures that rely on implicit trust models and one-time authentication. OpenShift Container Platform can add some zero trust networking capabilities to containers running on the platform without requiring changes to the containers or the software running in them. There are also several products that Red Hat offers that can further augment the zero trust networking capabilities of containers. If you have the ability to change the software running in the containers, then there are other projects that Red Hat supports that can add further capabilities. Explore the following targeted capabilities of zero trust networking. 3.1. Root of trust Public certificates and private keys are critical to zero trust networking. These are used to identify components to one another, authenticate, and to secure traffic. The certificates are signed by other certificates, and there is a chain of trust to a root certificate authority (CA). Everything participating in the network needs to ultimately have the public key for a root CA so that it can validate the chain of trust. For public-facing things, these are usually the set of root CAs that are globally known, and whose keys are distributed with operating systems, web browsers, and so on. However, it is possible to run a private CA for a cluster or a corporation if the certificate of the private CA is distributed to all parties. Leverage: OpenShift Container Platform: OpenShift creates a cluster CA at installation that is used to secure the cluster resources. However, OpenShift Container Platform can also create and sign certificates for services in the cluster, and can inject the cluster CA bundle into a pod if requested. Service certificates created and signed by OpenShift Container Platform have a 26-month time to live (TTL) and are rotated automatically at 13 months. They can also be rotated manually if necessary. OpenShift cert-manager Operator : cert-manager allows you to request keys that are signed by an external root of trust. There are many configurable issuers to integrate with external issuers, along with ways to run with a delegated signing certificate. The cert-manager API can be used by other software in zero trust networking to request the necessary certificates (for example, Red Hat OpenShift Service Mesh), or can be used directly by customer software. 3.2. Traffic authentication and encryption Ensure that all traffic on the wire is encrypted and the endpoints are identifiable. An example of this is Mutual TLS, or mTLS, which is a method for mutual authentication. Leverage: OpenShift Container Platform: With transparent pod-to-pod IPsec , the source and destination of the traffic can be identified by the IP address. There is the capability for egress traffic to be encrypted using IPsec . By using the egress IP feature, the source IP address of the traffic can be used to identify the source of the traffic inside the cluster. Red Hat OpenShift Service Mesh : Provides powerful mTLS capabilities that can transparently augment traffic leaving a pod to provide authentication and encryption. 
OpenShift cert-manager Operator : Use custom resource definitions (CRDs) to request certificates that can be mounted for your programs to use for SSL/TLS protocols. 3.3. Identification and authentication After you have the ability to mint certificates using a CA, you can use it to establish trust relationships by verification of the identity of the other end of a connection - either a user or a client machine. This also requires management of certificate lifecycles to limit use if compromised. Leverage: OpenShift Container Platform: Cluster-signed service certificates to ensure that a client is talking to a trusted endpoint. This requires that the service uses SSL/TLS and that the client uses the cluster CA . The client identity must be provided using some other means. Red Hat Single Sign-On : Provides request authentication integration with enterprise user directories or third-party identity providers. Red Hat OpenShift Service Mesh : Transparent upgrade of connections to mTLS, auto-rotation, custom certificate expiration, and request authentication with JSON web token (JWT). OpenShift cert-manager Operator : Creation and management of certificates for use by your application. Certificates can be controlled by CRDs and mounted as secrets, or your application can be changed to interact directly with the cert-manager API. 3.4. Inter-service authorization It is critical to be able to control access to services based on the identity of the requester. This is done by the platform and does not require each application to implement it. That allows better auditing and inspection of the policies. Leverage: OpenShift Container Platform: Can enforce isolation in the networking layer of the platform using the Kubernetes NetworkPolicy and AdminNetworkPolicy objects. Red Hat OpenShift Service Mesh : Sophisticated L4 and L7 control of traffic using standard Istio objects and using mTLS to identify the source and destination of traffic and then apply policies based on that information. 3.5. Transaction-level verification In addition to the ability to identify and authenticate connections, it is also useful to control access to individual transactions. This can include rate-limiting by source, observability, and semantic validation that a transaction is well formed. Leverage: Red Hat OpenShift Service Mesh : Perform L7 inspection of requests, rejecting malformed HTTP requests, transaction-level observability and reporting . Service Mesh can also provide request-based authentication using JWT. 3.6. Risk assessment As the number of security policies in a cluster increase, visualization of what the policies allow and deny becomes increasingly important. These tools make it easier to create, visualize, and manage cluster security policies. Leverage: Red Hat OpenShift Service Mesh : Create and visualize Kubernetes NetworkPolicy and AdminNetworkPolicy , and OpenShift Networking EgressFirewall objects using the OpenShift web console . Red Hat Advanced Cluster Security for Kubernetes : Advanced visualization of objects . 3.7. Site-wide policy enforcement and distribution After deploying applications on a cluster, it becomes challenging to manage all of the objects that make up the security rules. It becomes critical to be able to apply site-wide policies and audit the deployed objects for compliance with the policies. This should allow for delegation of some permissions to users and cluster administrators within defined bounds, and should allow for exceptions to the policies if necessary. 
Leverage: Red Hat OpenShift Service Mesh : RBAC to control policy object s and delegate control. Red Hat Advanced Cluster Security for Kubernetes : Policy enforcement engine. Red Hat Advanced Cluster Management (RHACM) for Kubernetes : Centralized policy control. 3.8. Observability for constant, and retrospective, evaluation After you have a running cluster, you want to be able to observe the traffic and verify that the traffic comports with the defined rules. This is important for intrusion detection, forensics, and is helpful for operational load management. Leverage: Network Observability Operator : Allows for inspection, monitoring, and alerting on network connections to pods and nodes in the cluster. Red Hat Advanced Cluster Management (RHACM) for Kubernetes : Monitors, collects, and evaluates system-level events such as process execution, network connections and flows, and privilege escalation. It can determine a baseline for a cluster, and then detect anomalous activity and alert you about it. Red Hat OpenShift Service Mesh : Can monitor traffic entering and leaving a pod. Red Hat OpenShift distributed tracing platform : For suitably instrumented applications, you can see all traffic associated with a particular action as it splits into sub-requests to microservices. This allows you to identify bottlenecks within a distributed application. 3.9. Endpoint security It is important to be able to trust that the software running the services in your cluster has not been compromised. For example, you might need to ensure that certified images are run on trusted hardware, and have policies to only allow connections to or from an endpoint based on endpoint characteristics. Leverage: OpenShift Container Platform: Secureboot can ensure that the nodes in the cluster are running trusted software, so the platform itself (including the container runtime) have not been tampered with. You can configure OpenShift Container Platform to only run images that have been signed by certain signatures . Red Hat Trusted Artifact Signer : This can be used in a trusted build chain and produce signed container images. 3.10. Extending trust outside of the cluster You might want to extend trust outside of the cluster by allowing a cluster to mint CAs for a subdomain. Alternatively, you might want to attest to workload identity in the cluster to a remote endpoint. Leverage: OpenShift cert-manager Operator : You can use cert-manager to manage delegated CAs so that you can distribute trust across different clusters, or through your organization. Red Hat OpenShift Service Mesh : Can use SPIFFE to provide remote attestation of workloads to endpoints running in remote or local clusters.
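As a small illustration of the network-layer isolation mentioned under inter-service authorization, the following is a minimal sketch of a default-deny ingress NetworkPolicy applied with the oc client; the demo namespace and policy name are illustrative, and a real zero trust configuration would layer additional allow policies on top of it:

# Deny all ingress traffic to pods in the demo namespace unless another policy explicitly allows it
cat <<'EOF' | oc apply -n demo -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF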
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/zero-trust-networking
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_vulnerability_service_reports/proc-providing-feedback-on-redhat-documentation
Chapter 144. HDFS Component (deprecated)
Chapter 144. HDFS Component (deprecated) Available as of Camel version 2.8 The hdfs component enables you to read and write messages from/to an HDFS file system. HDFS is the distributed file system at the heart of Hadoop . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hdfs</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 144.1. URI format hdfs://hostname[:port][/path][?options] You can append query options to the URI in the following format, ?option=value&option=value&... The path is treated in the following way: as a consumer, if it's a file, it just reads the file, otherwise if it represents a directory it scans all the files under the path satisfying the configured pattern. All the files under that directory must be of the same type. as a producer, if at least one split strategy is defined, the path is considered a directory and under that directory the producer creates a different file per split named using the configured UuidGenerator. Note When consuming from hdfs then in normal mode, a file is split into chunks, producing a message per chunk. You can configure the size of the chunk using the chunkSize option. If you want to read from hdfs and write to a regular file using the file component, then you can use the fileMode=Append to append each of the chunks together. 144.2. Options The HDFS component supports 2 options, which are listed below. Name Description Default Type jAASConfiguration (common) To use the given configuration for security with JAAS. Configuration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The HDFS endpoint is configured using URI syntax: with the following path and query parameters: 144.2.1. Path Parameters (3 parameters): Name Description Default Type hostName Required HDFS host to use String port HDFS port to use 8020 int path Required The directory path to use String 144.2.2. Query Parameters (38 parameters): Name Description Default Type connectOnStartup (common) Whether to connect to the HDFS file system on starting the producer/consumer. If false then the connection is created on-demand. Notice that HDFS may take up to 15 minutes to establish a connection, as it has hardcoded 45 x 20 sec redelivery. Setting this option to false allows your application to start up, and not block for up to 15 minutes. true boolean fileSystemType (common) Set to LOCAL to not use HDFS but local java.io.File instead. HDFS HdfsFileSystemType fileType (common) The file type to use. For more details see Hadoop HDFS documentation about the various file types. NORMAL_FILE HdfsFileType keyType (common) The type for the key in case of sequence or map files. NULL WritableType owner (common) The file owner must match this owner for the consumer to pick up the file. Otherwise the file is skipped. String valueType (common) The type for the value in case of sequence or map files BYTES WritableType bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occurred while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean delay (consumer) The interval (milliseconds) between the directory scans. 1000 long initialDelay (consumer) For the consumer, how much to wait (milliseconds) before to start scanning the directory. long pattern (consumer) The pattern used for scanning the directory * String sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy append (producer) Append to existing file. Notice that not all HDFS file systems support the append option. false boolean overwrite (producer) Whether to overwrite existing files with the same name true boolean blockSize (advanced) The size of the HDFS blocks 67108864 long bufferSize (advanced) The buffer size used by HDFS 4096 int checkIdleInterval (advanced) How often (time in millis) in to run the idle checker background task. This option is only in use if the splitter strategy is IDLE. 500 int chunkSize (advanced) When reading a normal file, this is split into chunks producing a message per chunk. 4096 int compressionCodec (advanced) The compression codec to use DEFAULT HdfsCompressionCodec compressionType (advanced) The compression type to use (is default not in use) NONE CompressionType openedSuffix (advanced) When a file is opened for reading/writing the file is renamed with this suffix to avoid to read it during the writing phase. opened String readSuffix (advanced) Once the file has been read is renamed with this suffix to avoid to read it again. read String replication (advanced) The HDFS replication factor 3 short splitStrategy (advanced) In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So, for the moment, it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met, a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=ST:value,ST:value,... where ST can be: BYTES a new file is created, and the old is closed when the number of written bytes is more than value MESSAGES a new file is created, and the old is closed when the number of written messages is more than value IDLE a new file is created, and the old is closed when no writing happened in the last value milliseconds String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). 
false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 144.3. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.hdfs.enabled Enable hdfs component true Boolean camel.component.hdfs.j-a-a-s-configuration To use the given configuration for security with JAAS. The option is a javax.security.auth.login.Configuration type. String camel.component.hdfs.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 144.3.1. KeyType and ValueType NULL it means that the key or the value is absent BYTE for writing a byte, the java Byte class is mapped into a BYTE BYTES for writing a sequence of bytes. It maps the java ByteBuffer class INT for writing java integer FLOAT for writing java float LONG for writing java long DOUBLE for writing java double TEXT for writing java strings BYTES is also used with everything else, for example, in Camel a file is sent around as an InputStream, int this case is written in a sequence file or a map file as a sequence of bytes. 144.4. Splitting Strategy In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So, for the moment, it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured UuidGenerator Every time a splitting condition is met, a new file is created. 
The splitStrategy option is defined as a string with the following syntax: splitStrategy=<ST>:<value>,<ST>:<value>,* where <ST> can be: BYTES a new file is created, and the old is closed when the number of written bytes is more than <value> MESSAGES a new file is created, and the old is closed when the number of written messages is more than <value> IDLE a new file is created, and the old is closed when no writing happened in the last <value> milliseconds Note note that this strategy currently requires either setting an IDLE value or setting the HdfsConstants.HDFS_CLOSE header to false to use the BYTES/MESSAGES configuration... otherwise, the file will be closed with each message for example: hdfs://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5 it means: a new file is created either when it has been idle for more than 1 second or if more than 5 bytes have been written. So, running hadoop fs -ls /tmp/simple-file you'll see that multiple files have been created. 144.5. Message Headers The following headers are supported by this component: 144.5.1. Producer only Header Description CamelFileName Camel 2.13: Specifies the name of the file to write (relative to the endpoint path). The name can be a String or an Expression object. Only relevant when not using a split strategy. 144.6. Controlling to close file stream Available as of Camel 2.10.4 When using the HDFS producer without a split strategy, then the file output stream is by default closed after the write. However you may want to keep the stream open, and only explicitly close the stream later. For that you can use the header HdfsConstants.HDFS_CLOSE (value = "CamelHdfsClose" ) to control this. Setting this value to a boolean allows you to explicit control whether the stream should be closed or not. Notice this does not apply if you use a split strategy, as there are various strategies that can control when the stream is closed. 144.7. Using this component in OSGi This component is fully functional in an OSGi environment, however, it requires some actions from the user. Hadoop uses the thread context class loader in order to load resources. Usually, the thread context classloader will be the bundle class loader of the bundle that contains the routes. So, the default configuration files need to be visible from the bundle class loader. A typical way to deal with it is to keep a copy of core-default.xml in your bundle root. That file can be found in the hadoop-common.jar.
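As a small, hypothetical illustration of the producer URI format described above (the host name, port, path, and limits are illustrative), an endpoint such as the following writes into the /data/events directory and starts a new file after 100 messages or after one second of idle time:

hdfs://namenode.example.com:8020/data/events?splitStrategy=MESSAGES:100,IDLE:1000

The generated files can then be listed from any Hadoop client, for example:

hadoop fs -ls /data/events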
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hdfs</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "hdfs://hostname[:port][/path][?options]", "hdfs:hostName:port/path", "hdfs://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hdfs-component
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_microsoft_azure/making-open-source-more-inclusive
Chapter 4. Mounting NFS shares
Chapter 4. Mounting NFS shares As a system administrator, you can mount remote NFS shares on your system to access shared data. 4.1. Services required on an NFS client Red Hat Enterprise Linux uses a combination of a kernel module and user-space processes to provide NFS file shares: Table 4.1. Services required on an NFS client Service name NFS version Description rpc.idmapd 4 This process provides NFSv4 client and server upcalls, which map between NFSv4 names (strings in the form of user@domain ) and local user and group IDs. rpc.statd 3 This service provides notification to other NFSv3 clients when the local host reboots, and to the kernel when a remote NFSv3 host reboots. Additional resources rpc.idmapd(8) , rpc.statd(8) man pages on your system 4.2. Preparing an NFSv3 client to run behind a firewall An NFS server notifies clients about file locks and the server status. To establish a connection back to the client, you must open the relevant ports in the firewall on the client. Procedure By default, NFSv3 RPC services use random ports. To enable a firewall configuration, configure fixed port numbers in the /etc/nfs.conf file: In the [lockd] section, set a fixed port number for the nlockmgr RPC service, for example: With this setting, the service automatically uses this port number for both the UDP and TCP protocol. In the [statd] section, set a fixed port number for the rpc.statd service, for example: With this setting, the service automatically uses this port number for both the UDP and TCP protocol. Open the relevant ports in firewalld : Restart the rpc-statd service: 4.3. Preparing an NFSv4 client to run behind a firewall An NFS server notifies clients about file locks and the server status. To establish a connection back to the client, you must open the relevant ports in the firewall on the client. Note NFS v4.1 and later uses the pre-existing client port for callbacks, so the callback port cannot be set separately. For more information, see the How do I set the NFS4 client callback port to a specific port? solution. Prerequisites The server uses the NFS 4.0 protocol. Procedure Open the relevant ports in firewalld : 4.4. Manually mounting an NFS share If you do not require that a NFS share is automatically mounted at boot time, you can manually mount it. Warning You can experience conflicts in your NFSv4 clientid and their sudden expiration if your NFS clients have the same short hostname. To avoid any possible sudden expiration of your NFSv4 clientid , you must use either unique hostnames for NFS clients or configure identifier on each container, depending on what system you are using. For more information, see the Red Hat Knowledgebase solution NFSv4 clientid was expired suddenly due to use same hostname on several NFS clients . Procedure Use the following command to mount an NFS share on a client: For example, to mount the /nfs/projects share from the server.example.com NFS server to /mnt , enter: Verification As a user who has permissions to access the NFS share, display the content of the mounted share: 4.5. Mounting an NFS share automatically when the system boots Automatic mounting of an NFS share during system boot ensures that critical services reliant on centralized data, such as /home directories hosted on the NFS server, have seamless and uninterrupted access from the moment the system starts up. 
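A consolidated, minimal sketch of the preceding NFSv3 firewall preparation and manual mount steps, assuming the server.example.com server and /nfs/projects share used in the examples above; the port numbers 5555 and 6666 are illustrative, and the persistent /etc/fstab entry is covered in the procedure that follows:

# /etc/nfs.conf - fixed ports for the NFSv3 client services (example values)
# [lockd]
# port=5555
# [statd]
# port=6666

# Open the fixed ports in firewalld and apply the change
firewall-cmd --permanent --add-port=5555/tcp --add-port=5555/udp --add-port=6666/tcp --add-port=6666/udp
firewall-cmd --reload

# Restart rpc-statd so that it picks up the fixed port
systemctl restart rpc-statd

# Manually mount the exported share and verify access
mount server.example.com:/nfs/projects /mnt
ls -l /mnt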
Procedure Edit the /etc/fstab file and add a line for the share that you want to mount: For example, to mount the /nfs/home share from the server.example.com NFS server to /home , enter: Mount the share: Verification As a user who has permissions to access the NFS share, display the content of the mounted share: Additional resources fstab(5) man page on your system 4.6. Connecting NFS mounts in the web console Connect a remote directory to your file system using NFS. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. NFS server name or the IP address. Path to the directory on the remote server. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the menu button. From the drop-down menu, select New NFS mount . In the New NFS Mount dialog box, enter the server or IP address of the remote server. In the Path on Server field, enter the path to the directory that you want to mount. In the Local Mount Point field, enter the path to the directory on your local system where you want to mount the NFS. In the Mount options check box list, select how you want to mount the NFS. You can select multiple options depending on your requirements. Check the Mount at boot box if you want the directory to be reachable even after you restart the local system. Check the Mount read only box if you do not want to change the content of the NFS. Check the Custom mount options box and add the mount options if you want to change the default mount option. Click Add . Verification Open the mounted directory and verify that the content is accessible. 4.7. Customizing NFS mount options in the web console Edit an existing NFS mount and add custom mount options. Custom mount options can help you to troubleshoot the connection or change parameters of the NFS mount such as changing timeout limits or configuring authentication. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. An NFS mount is added to your system. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the NFS mount you want to adjust. If the remote directory is mounted, click Unmount . You must unmount the directory during the custom mount options configuration. Otherwise, the web console does not save the configuration and this causes an error. Click Edit . In the NFS Mount dialog box, select Custom mount option . Enter mount options separated by a comma. For example: nfsvers=4 : The NFS protocol version number soft : The type of recovery after an NFS request times out sec=krb5 : The files on the NFS server can be secured by Kerberos authentication. Both the NFS client and server have to support Kerberos authentication. For a complete list of the NFS mount options, enter man nfs in the command line. Click Apply . Click Mount . Verification Open the mounted directory and verify that the content is accessible. 4.8. 
Setting up an NFS client with Kerberos in a Red Hat Enterprise Linux Identity Management domain If the NFS server uses Kerberos and is enrolled in a Red Hat Enterprise Linux Identity Management (IdM) domain, your client must also be a member of the domain to be able to mount the shares. This enables you to centrally manage users and groups and to use Kerberos for authentication, integrity protection, and traffic encryption. Prerequisites The NFS client is enrolled in a Red Hat Enterprise Linux Identity Management (IdM) domain. The exported NFS share uses Kerberos. Procedure Obtain a Kerberos ticket as an IdM administrator: Retrieve the host principal, and store it in the /etc/krb5.keytab file: IdM automatically created the host principal when you joined the host to the IdM domain. Optional: Display the principals in the /etc/krb5.keytab file: Use the ipa-client-automount utility to configure mapping of IdM IDs: Mount an exported NFS share, for example: The -o sec option specifies the Kerberos security method. Verification Log in as an IdM user who has permissions to write on the mounted share. Obtain a Kerberos ticket: Create a file on the share, for example: List the directory to verify that the file was created: Additional resources The AUTH_GSS authentication method 4.9. Configuring GNOME to store user settings on home directories hosted on an NFS share If you use GNOME on a system with home directories hosted on an NFS server, you must change the keyfile backend of the dconf database. Otherwise, dconf might not work correctly. This change affects all users on the host because it changes how dconf manages user settings and configurations stored in the home directories. Note that the dconf keyfile backend only works if the glib2-fam package is installed. Without this package, notifications on configuration changes made on remote machines are not displayed properly. With Red Hat Enterprise Linux 8, the glib2-fam package is available in the BaseOS repository. Prerequisites The glib2-fam package is installed: Procedure Add the following line to the beginning of the /etc/dconf/profile/user file. If the file does not exist, create it. With this setting, dconf polls the keyfile back end to determine whether updates have been made, so settings might not be updated immediately. The changes take effect when the user logs out and back in. 4.10. Frequently used NFS mount options The following are the commonly used options when mounting NFS shares. You can use these options with mount commands, in /etc/fstab settings, and the autofs automounter. lookupcache= mode Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid arguments for mode are all , none , or positive . nfsvers= version Specifies which version of the NFS protocol to use, where version is 3 , 4 , 4.0 , 4.1 , or 4.2 . This is useful for hosts that run multiple NFS servers, or to disable retrying a mount with lower versions. If no version is specified, the client tries version 4.2 first, then negotiates down until it finds a version supported by the server. The option vers is identical to nfsvers , and is included in this release for compatibility reasons. noacl Turns off all ACL processing. This can be needed when interfacing with old Red Hat Enterprise Linux versions that are not compatible with the recent ACL technology. nolock Disables file locking. This setting can be required when you connect to very old NFS servers. noexec Prevents execution of binaries on mounted file systems. 
This is useful if the system is mounting a non-Linux file system containing incompatible binaries. nosuid Disables the set-user-identifier and set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program. retrans= num The number of times the NFS client retries a request before it attempts further recovery action. If the retrans option is not specified, the NFS client tries each UDP request three times and each TCP request twice. timeo= num The time in tenths of a second the NFS client waits for a response before it retries an NFS request. For NFS over TCP, the default timeo value is 600 (60 seconds). The NFS client performs linear backoff: After each retransmission the timeout is increased by timeo up to the maximum of 600 seconds. port= num Specifies the numeric value of the NFS server port. For NFSv3, if num is 0 (the default value), or not specified, then mount queries the rpcbind service on the remote host for the port number to use. For NFSv4, if num is 0 , then mount queries the rpcbind service, but if it is not specified, the standard NFS port number of TCP 2049 is used instead and the remote rpcbind is not checked anymore. rsize= num and wsize= num These options set the maximum number of bytes to be transferred in a single NFS read or write operation. There is no fixed default value for rsize and wsize . By default, NFS uses the largest possible value that both the server and the client support. In Red Hat Enterprise Linux 8, the client and server maximum is 1,048,576 bytes. For more information, see the Red Hat Knowledgebase solution What are the default and maximum values for rsize and wsize with NFS mounts? . sec= options Security options to use for accessing files on the mounted export. The options value is a colon-separated list of one or more security options. By default, the client attempts to find a security option that both the client and the server support. If the server does not support any of the selected options, the mount operation fails. Available options: sec=sys uses local UNIX UIDs and GIDs. These use AUTH_SYS to authenticate NFS operations. sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users. sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering. sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most performance overhead. Additional resources mount(8) and nfs(5) man pages on your system 4.11. Enabling client-side caching of NFS content FS-Cache is a persistent local cache on the client that file systems can use to take data retrieved from over the network and cache it on the local disk. This helps to minimize network traffic. 4.11.1. How NFS caching works The following diagram is a high-level illustration of how FS-Cache works: FS-Cache is designed to be as transparent as possible to the users and administrators of a system. FS-Cache allows a file system on a server to interact directly with a client's local cache without creating an over-mounted file system. With NFS, a mount option instructs the client to mount the NFS share with FS-Cache enabled. The mount operation causes automatic loading of two kernel modules: fscache and cachefiles . The cachefilesd daemon communicates with the kernel modules to implement the cache. 
FS-Cache does not alter the basic operation of a file system that works over the network. It merely provides that file system with a persistent place in which it can cache data. For example, a client can still mount an NFS share whether or not FS-Cache is enabled. In addition, cached NFS can handle files that will not fit into the cache (whether individually or collectively) as files can be partially cached and do not have to be read completely up front. FS-Cache also hides all I/O errors that occur in the cache from the client file system driver. To provide caching services, FS-Cache needs a cache back end, the cachefiles service. FS-Cache requires a mounted block-based file system that supports block mapping ( bmap ) and extended attributes as its cache back end: XFS ext3 ext4 FS-Cache cannot arbitrarily cache any file system, whether through the network or otherwise: the shared file system's driver must be altered to allow interaction with FS-Cache, data storage or retrieval, and metadata setup and validation. FS-Cache needs indexing keys and coherency data from the cached file system to support persistence: indexing keys to match file system objects to cache objects, and coherency data to determine whether the cache objects are still valid. Using FS-Cache is a compromise between various factors. If FS-Cache is being used to cache NFS traffic, it may slow the client down, but can massively reduce the network and server loading by satisfying read requests locally without consuming network bandwidth. 4.11.2. Installing and configuring the cachefilesd service Red Hat Enterprise Linux provides only the cachefiles caching back end. The cachefilesd service initiates and manages cachefiles . The /etc/cachefilesd.conf file controls how cachefiles provides caching services. Prerequisites The file system mounted under the /var/cache/fscache/ directory is ext3 , ext4 , or xfs . The file system mounted under /var/cache/fscache/ uses extended attributes, which is the default if you created the file system on RHEL 8 or later. Procedure Install the cachefilesd package: Enable and start the cachefilesd service: Verification Mount an NFS share with the fsc option to use the cache: To mount a share temporarily, enter: To mount a share permanently, add the fsc option to the entry in the /etc/fstab file: Display the FS-Cache statistics: Additional resources /usr/share/doc/cachefilesd/README file /usr/share/doc/kernel-doc-<kernel_version>/Documentation/filesystems/caching/fscache.txt provided by the kernel-doc package 4.11.3. Sharing NFS cache Because the cache is persistent, blocks of data in the cache are indexed on a sequence of four keys: Level 1: Server details Level 2: Some mount options; security type; FSID; a uniquifier string Level 3: File Handle Level 4: Page number in file To avoid coherency management problems between superblocks, all NFS superblocks that need to cache data have unique level 2 keys. Normally, two NFS mounts with the same source volume and options share a superblock, and therefore share the caching, even if they mount different directories within that volume. Example 4.1. NFS cache sharing: The following two mounts likely share the superblock as they have the same mount options, especially because they come from the same partition on the NFS server: If the mount options are different, they do not share the superblock: Note The user cannot share caches between superblocks that have different communications or protocol parameters. 
For example, it is not possible to share caches between NFSv4.0 and NFSv3 or between NFSv4.1 and NFSv4.2 because they force different superblocks. Also setting parameters, such as the read size ( rsize ), prevents cache sharing because, again, it forces a different superblock. 4.11.4. NFS cache limitations There are some cache limitations with NFS: Opening a file from a shared file system for direct I/O automatically bypasses the cache. This is because this type of access must be direct to the server. Opening a file from a shared file system for either direct I/O or writing flushes the cached copy of the file. FS-Cache will not cache the file again until it is no longer opened for direct I/O or writing. Furthermore, this release of FS-Cache only caches regular NFS files. FS-Cache will not cache directories, symlinks, device files, FIFOs, and sockets. 4.11.5. How cache culling works The cachefilesd service works by caching remote data from shared file systems to free space on the local disk. This could potentially consume all available free space, which could cause problems if the disk also contains the root partition. To control this, cachefilesd tries to maintain a certain amount of free space by discarding old objects, such as less-recently accessed objects, from the cache. This behavior is known as cache culling. Cache culling is done on the basis of the percentage of blocks and the percentage of files available in the underlying file system. There are settings in /etc/cachefilesd.conf which control six limits: brun N% (percentage of blocks), frun N% (percentage of files) If the amount of free space and the number of available files in the cache rises above both these limits, then culling is turned off. bcull N% (percentage of blocks), fcull N% (percentage of files) If the amount of available space or the number of files in the cache falls below either of these limits, then culling is started. bstop N% (percentage of blocks), fstop N% (percentage of files) If the amount of available space or the number of available files in the cache falls below either of these limits, then no further allocation of disk space or files is permitted until culling has raised things above these limits again. The default value of N for each setting is as follows: brun/frun : 10% bcull/fcull : 7% bstop/fstop : 3% When configuring these settings, the following must hold true: 0 <= bstop < bcull < brun < 100 0 <= fstop < fcull < frun < 100 These are the percentages of available space and available files and do not appear as 100 minus the percentage displayed by the df program. Important Culling depends on both b xxx and f xxx pairs simultaneously; the user can not treat them separately.
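To make the culling thresholds concrete, here is a minimal sketch of an /etc/cachefilesd.conf file that keeps the default cache directory and states the six limits explicitly; the percentages shown are simply the documented defaults, repeated for illustration:

dir /var/cache/fscache
tag mycache
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%

Adjust the percentages only if the cache shares a disk with other data and culling needs to start earlier; the ordering constraints listed above (bstop < bcull < brun, and likewise for the f values) must still hold.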
[ "port= 5555", "port= 6666", "firewall-cmd --permanent --add-service=rpc-bind firewall-cmd --permanent --add-port={ 5555 /tcp, 5555 /udp, 6666 /tcp, 6666 /udp} firewall-cmd --reload", "systemctl restart rpc-statd nfs-server", "firewall-cmd --permanent --add-port= <callback_port> /tcp firewall-cmd --reload", "mount <nfs_server_ip_or_hostname> :/ <exported_share> <mount point>", "mount server.example.com:/nfs/projects/ /mnt/", "ls -l /mnt/", "<nfs_server_ip_or_hostname>:/<exported_share> <mount point> nfs default 0 0", "server.example.com:/nfs/projects /home nfs defaults 0 0", "mount /home", "ls -l /mnt/", "kinit admin", "ipa-getkeytab -s idm_server.idm.example.com -p host/nfs_client.idm.example.com -k /etc/krb5.keytab", "klist -k /etc/krb5.keytab Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 6 host/[email protected] 6 host/[email protected] 6 host/[email protected] 6 host/[email protected]", "ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/idmapd.conf Restarting sssd, waiting for it to become available. Started autofs", "mount -o sec=krb5i server.idm.example.com:/nfs/projects/ /mnt/", "kinit", "touch /mnt/test.txt", "ls -l /mnt/test.txt -rw-r--r--. 1 admin users 0 Feb 15 11:54 /mnt/test.txt", "yum install glib2-fam", "service-db:keyfile/user", "dnf install cachefilesd", "systemctl enable --now cachefilesd", "mount -o fsc server.example.com:/nfs/projects/ /mnt/", "<nfs_server_ip_or_hostname>:/<exported_share> <mount point> nfs fsc 0 0", "cat /proc/fs/fscache/stats", "mount -o fsc home0:/nfs/projects /projects mount -o fsc home0:/nfs/home /home/", "mount -o fsc,rsize=8192 home0:/nfs/projects /projects mount -o fsc,rsize=65536 home0:/nfs/home /home/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/mounting-nfs-shares_managing-file-systems
7.3. Source Supported Functions
7.3. Source Supported Functions While Red Hat JBoss Data Virtualization provides an extensive scalar function library, it contains only those functions that can be evaluated within the query engine. In many circumstances, especially for performance, a user defined function allows for calling a source specific function. For example, suppose you want to use the Oracle-specific functions score and contains: SELECT score(1), ID, FREEDATA FROM Docs WHERE contains(freedata, 'nick', 1) > 0 The score and contains functions are not part of the built-in scalar function library. While you could write your own custom scalar function to mimic their behavior, it is more likely that you would want to use the actual Oracle functions that are provided by Oracle when using the Oracle Free Text functionality. In order to configure Red Hat JBoss Data Virtualization to push the above function evaluation to Oracle, you can either: extend the translator in Java, define the function as a pushdown function via Teiid Designer, or, for dynamic VDBs, define it in the VDB. 7.3.1. Defining a Source Supported Function by Extending the Translator The ExecutionFactory.getPushdownFunctions method can be used to describe functions that are valid against all instances of a given translator type. The function names are expected to be prefixed by the translator type, or some other logical grouping, e.g. salesforce.includes. The full name of the function once imported into the system will be qualified by the SYS schema, e.g. SYS.salesforce.includes. Any functions added via these mechanisms do not need to be declared in ExecutionFactory.getSupportedFunctions. Any of the additional handling, such as adding a FunctionModifier, covered above is also applicable here. All pushdown functions will have the function name set to only the simple name. Schema or other qualification will be removed. Handling, such as function modifiers, can check the function metadata if there is the potential for an ambiguity. To extend the Oracle Connector: Required - extend the OracleExecutionFactory and add SCORE and CONTAINS as supported pushdown functions by either overriding or adding additional functions in the "getPushDownFunctions" method. For this example, we'll call the class MyOracleExecutionFactory. Add the org.teiid.translator.Translator annotation to the class, e.g. @Translator(name="myoracle") Optionally register new FunctionModifiers at the start of the ExecutionFactory to handle translation of these functions. Given that the syntax of these functions is the same as other typical functions, this probably is not needed - the default translation should work. Create a new translator JAR containing your custom ExecutionFactory. Once this extended translator is deployed in Red Hat JBoss Data Virtualization, use "myoracle" as the translator name instead of "oracle" in your VDB's Oracle source configuration. 7.3.2. Defining a Source Supported Function via Teiid Designer If you are designing your VDB using Teiid Designer, you can define a function on any "source" model, and that function is automatically added as a pushdown function when the VDB is deployed. There is no additional need for adding Java code. Note The function will be visible only for that VDB; whereas, if you extend the translator, the functions can be used by any VDB. 7.3.3. 
Defining a Source Supported Function Using Dynamic VDBs If you are using a Dynamic VDB and defining the metadata using DDL, you can define your source function directly in the VDB, as shown in the first example below. By default, in Dynamic VDBs, metadata for the source models is automatically retrieved from the source if it is a JDBC, File, or WebService source. The File and WebService sources are static, so you cannot add additional metadata to them. However, on JDBC sources you can retrieve the metadata from the source and then append additional metadata on top of it, as shown in the second example below. That example uses the NATIVE metadata type (NATIVE is the default for source/physical models) first to retrieve schema information from the source, then uses the DDL metadata type to add additional metadata. Only metadata not available via the NATIVE translator logic would need to be specified via DDL. Alternatively, if you are using a custom MetadataRepository with your VDB, then provide the "function" metadata directly from your implementation, as shown in the third example below. In that case, the user implements the MetadataRepository interface, packages the implementation class along with its dependencies in a JBoss EAP module, and supplies the module name in the metadata type attribute of the XML. For more information on how to write a Metadata Repository refer to the section on Custom Metadata Repository.
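As a rough illustration of the DDL approach, the fragment below declares both Oracle functions from the earlier query as pushdown functions. The CONTAINS signature (a column, a search string, and a label) is an assumption made for this sketch and should be adjusted to match the Oracle signature you actually intend to call:

CREATE FOREIGN FUNCTION SCORE (label integer) RETURNS integer;
CREATE FOREIGN FUNCTION CONTAINS (col string, query string, label integer) RETURNS integer;

These statements belong inside the <metadata type="DDL"> element of the model, alongside the table and procedure definitions shown in the examples below.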
[ "SELECT score(1), ID, FREEDATA FROM Docs WHERE contains(freedata, 'nick', 1) > 0", "<vdb name=\"{vdb-name}\" version=\"1\"> <model name=\"{model-name}\" type=\"PHYSICAL\"> <source name=\"AccountsDB\" translator-name=\"oracle\" connection-jndi-name=\"java:/oracleDS\"/> <metadata type=\"DDL\"><![CDATA[ CREATE FOREIGN FUNCTION SCORE (val integer) RETURNS integer; .... (other tables, procedures etc) ]]> </metadata> </model> </vdb>", "<vdb name=\"{vdb-name}\" version=\"1\"> <model name=\"{model-name}\" type=\"PHYSICAL\"> <source name=\"AccountsDB\" translator-name=\"oracle\" connection-jndi-name=\"java:/oracleDS\"/> <metadata type=\"NATIVE,DDL\"><![CDATA[ CREATE FOREIGN FUNCTION SCORE (val integer) RETURNS integer; ]]> </metadata> </model> </vdb>", "<vdb name=\"{vdb-name}\" version=\"1\"> <model name=\"{model-name}\" type=\"PHYSICAL\"> <source name=\"AccountsDB\" translator-name=\"oracle\" connection-jndi-name=\"java:/oracleDS\"/> <metadata type=\"{metadata-repo-module}\"></metadata> </model> </vdb>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/sect-source_supported_functions
Chapter 3. Deploying applications with OpenShift Client
Chapter 3. Deploying applications with OpenShift Client You can use OpenShift Client (oc) for application deployment. You can deploy applications from source or from binary artifacts. 3.1. Deploying applications from source using oc The following example demonstrates how to deploy the example-app application using oc , which is in the app folder on the dotnet-8.0 branch of the redhat-developer/s2i-dotnetcore-ex GitHub repository: Procedure Create a new OpenShift project: Add the ASP.NET Core application: Track the progress of the build: View the deployed application once the build is finished: The application is now accessible within the project. Optional : Make the project accessible externally: Obtain the shareable URL: 3.2. Deploying applications from binary artifacts using oc You can use the .NET Source-to-Image (S2I) builder image to build applications using binary artifacts that you provide. Prerequisites Published application. For more information, see the documentation on publishing .NET applications. Procedure Create a new binary build: Start the build and specify the path to the binary artifacts on your local machine: Create a new application:
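The binary deployment in this section assumes that you have already produced the publish output locally. A typical way to do that, assuming a standard project layout and the default Release configuration, is:

dotnet publish -c Release

This places the published artifacts under bin/Release/net8.0/publish, which is the directory referenced by the oc start-build command shown below.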
[ "oc new-project sample-project", "oc new-app --name= example-app 'dotnet:8.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-ex#dotnet-8.0' --build-env DOTNET_STARTUP_PROJECT=app", "oc logs -f bc/ example-app", "oc logs -f dc/ example-app", "oc expose svc/ example-app", "oc get routes", "oc new-build --name= my-web-app dotnet:8.0-ubi8 --binary=true", "oc start-build my-web-app --from-dir= bin/Release/net8.0/publish", "oc new-app my-web-app" ]
https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_openshift_container_platform/assembly_dotnet-deploying-apps_getting-started-with-dotnet-on-openshift
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/backing_up_and_restoring_the_undercloud_and_control_plane_nodes/proc_providing-feedback-on-red-hat-documentation
Chapter 2. Major Changes and Migration Considerations
Chapter 2. Major Changes and Migration Considerations This chapter discusses major changes and features that may affect migration from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. Read each section carefully for a clear understanding of how your system will be impacted by upgrading to Red Hat Enterprise Linux 7. 2.1. System Limitations Red Hat Enterprise Linux supported system limitations have changed between version 6 and version 7. Red Hat Enterprise Linux 7 now requires at least 1 GB of disk space to install. However, Red Hat recommends a minimum of 5 GB of disk space for all supported architectures. AMD64 and Intel 64 systems now require at least 1 GB of memory to run. Red Hat recommends at least 1 GB memory per logical CPU. AMD64 and Intel 64 systems are supported up to the following limits: at most 3 TB memory (theoretical limit: 64 TB) at most 160 logical CPUs (theoretical limit: 5120 logical CPUs) 64-bit Power systems now require at least 2 GB of memory to run. They are supported up to the following limits: at most 2 TB memory (theoretical limit: 64 TB) at most 128 logical CPUs (theoretical limit: 2048 logical CPUs) IBM System z systems now require at least 1 GB of memory to run, and are theoretically capable of supporting up to the following limits: at most 3 TB memory at most 101 logical CPUs The most up to date information about Red Hat Enterprise Linux 7 requirements and limitations is available online at https://access.redhat.com/site/articles/rhel-limits . To check whether your hardware or software is certified, see https://access.redhat.com/certifications . 2.2. Installation and Boot Read this section for a summary of changes made to installation tools and processes between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. 2.2.1. New Boot Loader Red Hat Enterprise Linux 7 introduces the GRUB2 boot loader, which replaces legacy GRUB in Red Hat Enterprise Linux 7.0 and later. GRUB2 supports more file systems and virtual block devices than its predecessor. It automatically scans for and configures available operating systems. The user interface has also been improved, and users have the option to skip boot loader installation. However, the move to GRUB2 also removes support for installing the boot loader to a formatted partition on BIOS machines with MBR-style partition tables. This behavior change was made because some file systems have automated optimization features that move parts of the core boot loader image, which could break the GRUB legacy boot loader. With GRUB2, the boot loader is installed in the space available between the partition table and the first partition on BIOS machines with MBR (Master Boot Record) style partition tables. BIOS machines with GPT (GUID Partition Table) style partition tables must create a special BIOS Boot Partition for the boot loader. UEFI machines continue to install the boot loader to the EFI System Partition. The recommended minimum partition sizes have also changed as a result of the new boot loader. Table 2.1, "Recommended minimum partition sizes" gives a summary of the new recommendations. Further information is available in MBR and GPT Considerations . Table 2.1. Recommended minimum partition sizes Partition BIOS & MBR BIOS & GPT UEFI & GPT /boot 500 MB / 10 GB swap At least twice the RAM. See Recommended Partitioning Scheme for details. 
boot loader N/A (Installed between the partition table and the first partition) Users can install GRUB2 to a formatted partition manually with the force option at the risk of causing file system damage, or use an alternative boot loader. For a list of alternative boot loaders, see the Installation Guide . If you have a dual-boot system, use GRUB2's operating system detection to automatically write a configuration file that can boot either operating system: Important Note that a dual-boot system based on UEFI uses a different mechanism than the legacy MBR-based one. On a UEFI system, you must use the EFI-specific grub2 command instead: # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg 2.2.1.1. Default Boot Entry for Debugging A default boot entry for systemd has been added to the /etc/grub.cfg file. It is no longer necessary to enable debugging manually. The default boot entry allows you to debug systems without affecting options at boot time. 2.2.2. New Init System systemd is the system and service manager that replaces the SysV init system used in previous releases of Red Hat Enterprise Linux. systemd is the first process to start during boot, and the last process to terminate at shutdown. It coordinates the remainder of the boot process and configures the system for the user. Under systemd , interdependent programs can load in parallel, making the boot process considerably faster. systemd is largely compatible with SysV in
[ "grub2-mkconfig -o /boot/grub2/grub.cfg", "man systemd-ask-password", "rd.zfcp=0.0.4000,0x5005076300C213e9,0x5022000000000000", "rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portname=foo rd.znet=ctc,0.0.0600,0.0.0601,protocol=bar", "rd.driver.blacklist=mod1,mod2,", "rd.driver.blacklist=firewire_ohci", "/dev/critical /critical xfs defaults 1 2 /dev/optional /optional xfs defaults,nofail 1 2", "mv -f /var/run /var/run.runmove~ ln -sfn ../run /var/run mv -f /var/lock /var/lock.lockmove~ ln -sfn ../run/lock /var/lock", "find /usr/{lib,lib64,bin,sbin} -name '.usrmove'", "dmesg journalctl -ab --full", "systemctl enable tmp.mount", "systemctl disable tmp.mount", "AUTOCREATE_SERVER_KEYS=YES export SSH_USE_STRONG_RNG=1 export OPENSSL_DISABLE_AES_NI=1", "AUTOCREATE_SERVER_KEYS=YES SSH_USE_STRONG_RNG=1 OPENSSL_DISABLE_AES_NI=1", "man yum", "mount -o acl /dev/loop0 test mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so.", "part /mnt/example --fstype=xfs", "btrfs mount_point --data= level --metadata= level --label= label partitions", "/dev/essential-disk /essential xfs auto,defaults 0 0 /dev/non-essential-disk /non-essential xfs auto,defaults,nofail 0 0", "udev-alias: sdb (disk/by-id/ata-QEMU_HARDDISK_QM00001)", "man ncat", "undisclosed_recipients_header = To: undisclosed-recipients:;", "postscreen_dnsbl_reply_map = texthash:/etc/postfix/dnsbl_reply", "Secret DNSBL name Name in postscreen(8) replies secret.zen.spamhaus.org zen.spamhaus.org", "man rpc.nfsd", "man nfs", "man nfsmount.conf", "systemctl start named-chroot.service", "systemctl stop named-chroot.service", "man keepalived.conf", "man 5 votequorum", "firewall-offline-cmd", "yum update -y opencryptoki", "pkcsconf -s pkcsconf -t", "systemctl stop pkcsslotd.service", "ps ax | grep pkcsslotd", "cp -r /var/lib/opencryptoki/ccatok /var/lib/opencryptoki/ccatok.backup", "cd /var/lib/opencryptoki/ccatok pkcscca -m v2objectsv3 -v", "rm /dev/shm/var.lib.opencryptoki.ccatok", "systemctl start pkcsslotd.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/migration_planning_guide/chap-red_hat_enterprise_linux-migration_planning_guide-major_changes_and_migration_considerations
Chapter 13. Logging in to the Directory Server by using the web console
Chapter 13. Logging in to the Directory Server by using the web console The web console is a browser-based graphical user interface (GUI) that you can use for performing administrative tasks. The Directory Server package automatically installs the Directory Server user interface for the web console. Prerequisites You have permissions to access the web console. Procedure Access the web console by using the following URL in your browser: Log in as a user with sudo privileges. Select the Red Hat Directory Server entry. Additional resources Logging in to the RHEL web console .
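If the web console is not yet reachable on the host, a common preparatory step, shown here as a hedged example rather than as part of the documented procedure, is to enable the cockpit socket and open its firewall service:

systemctl enable --now cockpit.socket
firewall-cmd --permanent --add-service=cockpit
firewall-cmd --reload

After this, the URL on port 9090 shown below should respond in a browser.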
[ "https://<directory_server_host>:9090" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/installing_red_hat_directory_server/proc_logging-in-to-the-ds-web-console_installing-rhds
20.43. Setting Schedule Parameters
20.43. Setting Schedule Parameters The virsh schedinfo command modifies host scheduling parameters of the virtual machine process on the host machine. The following command format should be used: Each parameter is explained below: domain - the guest virtual machine domain --set - the string placed here is the controller or action that is to be called. The string uses the parameter = value format. Additional parameters or values if required should be added as well. --current - when used with --set , will use the specified set string as the current scheduler information. When used without --set , it displays the current scheduler information. --config - when used with --set , will use the specified set string on the next reboot. When used without --set , it displays the scheduler information that is saved in the configuration file. --live - when used with --set , will use the specified set string on a guest virtual machine that is currently running. When used without --set , it displays the configuration setting currently used by the running virtual machine. The scheduler can be set with any of the following parameters: cpu_shares , vcpu_period and vcpu_quota . These parameters are applied to the vCPU threads. The following shows how the parameters map to cgroup field names: cpu_shares :cpu.shares vcpu_period :cpu.cfs_period_us vcpu_quota :cpu.cfs_quota_us Example 20.98. schedinfo show This example shows the shell guest virtual machine's schedule information. Example 20.99. schedinfo set In this example, the cpu_shares value is changed to 2046. This affects the current state and not the configuration file. libvirt also supports the emulator_period and emulator_quota parameters that modify the setting of the emulator process.
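To make the --live and --config distinction concrete, the following example reuses the shell guest from the examples above and caps its vCPU bandwidth by setting vcpu_quota. The first command changes only the running guest; the second changes only the persistent configuration that takes effect after the next reboot. The quota value is illustrative:

virsh schedinfo shell --live --set vcpu_quota=50000
virsh schedinfo shell --config --set vcpu_quota=50000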
[ "virsh schedinfo domain --set --current --config --live", "virsh schedinfo shell Scheduler : posix cpu_shares : 1024 vcpu_period : 100000 vcpu_quota : -1", "virsh schedinfo --set cpu_shares=2046 shell Scheduler : posix cpu_shares : 2046 vcpu_period : 100000 vcpu_quota : -1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-Setting_schedule_parameters
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Please include the Document URL , the section number and describe the issue . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_guide/proc_providing-feedback-on-red-hat-documentation_default
Preface
Preface Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port .
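A minimal command sketch of that workflow, using placeholder names for the network, security group, flavor, and image rather than values from this guide:

openstack port create --network <network> --security-group <rbac_shared_sg> sg-port
openstack server create --flavor <flavor> --image <image> --port sg-port instance1

The port is created first with the shared security group attached, and the instance is then booted with that pre-configured port.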
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/users_and_identity_management_guide/pr01
Chapter 10. Using NVMe with LVM Optimally
Chapter 10. Using NVMe with LVM Optimally Summary The procedures below demonstrate how to deploy Ceph for Object Gateway usage optimally when using high speed NVMe based SSDs (this applies to SATA SSDs too). Journals and bucket indexes will be placed together on high speed storage devices, which can increase performance compared to having all journals on one device. This configuration requires setting osd_scenario to lvm . Procedures for two example configurations are provided: One NVMe device and at least four HDDs using one bucket index: One NVMe device Two NVMe devices and at least four HDDs using two bucket indexes: Two NVMe devices Details The most basic Ceph setup uses the osd_scenario setting of collocated . This stores the OSD data and its journal on one storage device together (they are "co-located"). Typical server configurations include both HDDs and SSDs. Since HDDs are usually larger than SSDs, in a collocated configuration to utilize the most storage space an HDD would be chosen, putting both the OSD data and journal on it alone. However, the journal should ideally be on a faster SSD. Another option is using the osd_scenario setting of non-collocated . This allows configuration of dedicated devices for journals, so you can put the OSD data on HDDs and the journals on SSDs. In addition to OSD data and journals, when using Object Gateway a bucket index needs to be stored on a device. In this case Ceph is often configured so that HDDs hold the OSD data, one SSD holds the journals, and another SSD holds the bucket indexes. This can create highly imbalanced situations where the SSD with all the journals becomes saturated while the SSD with bucket indexes is underutilized. The solution is to set osd_scenario to lvm and use Logical Volume Manager (LVM) to divide up single SSD devices for more than one purpose. This allows journals and bucket indexes to exist side by side on a single device. Most importantly, it allows journals to exist on more than one SSD, spreading the intense IO data transfer of the journals across more than one device. The normal Ansible playbooks provided by the ceph-ansible RPM used to install Ceph (site.yml, osds.yml, etc.) don't support using one device for more than one purpose. In the future the normal Ansible playbooks will support using one device for more than one purpose. In the meantime the playbooks lv-create.yml and lv-vars.yaml are being provided to facilitate creating the required Logical Volumes (LVs) for optimal SSD usage. After lv-create.yml is run, site.yml can be run normally and it will use the newly created LVs. Important These procedures only apply to the FileStore storage backend, not the newer BlueStore storage backend. 10.1. Using One NVMe Device Follow this procedure to deploy Ceph for Object Gateway usage with one NVMe device. 10.1.1. Purge Any Existing Ceph Cluster If Ceph is already configured, purge it in order to start over. The Ansible playbook, purge-cluster.yml , is provided for this purpose. For more information on how to use purge-cluster.yml see Purging a Ceph Cluster by Using Ansible in the Installation Guide . Important Purging the cluster may not be enough to prepare the servers for redeploying Ceph using the following procedures. Any file system, GPT, RAID, or other signatures on storage devices used by Ceph may cause problems. Instructions to remove any signatures using wipefs are provided under Run The lv-create.yml Ansible Playbook . 10.1.2. 
Configure The Cluster for Normal Installation Setting aside any NVMe and/or LVM considerations, configure the cluster as you would normally but stop before running the Ansible playbook. Afterwards, the cluster installation configuration will be adjusted specifically for optimal NVMe/LVM usage to support the Object Gateway. Only at that time should the Ansible playbook be run. To configure the storage cluster for normal installation consult the Red Hat Ceph Storage Installation Guide . In particular, complete the steps in Installing a Red Hat Ceph Storage Cluster through Step 9 creating an Ansible log directory. Stop before Step 10, when ansible-playbook site.yml -i hosts is run. Important Do not run ansible-playbook site.yml -i hosts until all the steps after this and before Install Ceph for NVMe and Verify Success have been completed. 10.1.3. Identify The NVMe and HDD Devices Use lsblk to identify the NVMe and HDD devices connected to the server. Example output from lsblk is listed below: In this example the following raw block devices will be used: NVMe devices /dev/nvme0n1 HDD devices /dev/sdc /dev/sdd /dev/sde /dev/sdf The file lv_vars.yaml configures logical volume creation on the chosen devices. It creates journals on NVMe, an NVMe based bucket index, and HDD based OSDs. The actual creation of logical volumes is initiated by lv-create.yml , which reads lv_vars.yaml . That file should only have one NVMe device referenced in it at a time. For information on using Ceph with two NVMe devices optimally see Using Two NVMe Devices . 10.1.4. Add The Devices to lv_vars.yaml As root , navigate to the /usr/share/ceph-ansible/ directory: Edit the file so it includes the following lines: 10.1.5. Run The lv-create.yml Ansible Playbook The purpose of the lv-create.yml playbook is to create logical volumes for the object gateway bucket index, and journals, on a single NVMe. It does this by using osd_scenario=lvm . The lv-create.yml Ansible playbook makes it easier to configure Ceph in this way by automating some of the complex LVM creation and configuration. Ensure the storage devices are raw Before running lv-create.yml to create the logical volumes on the NVMe devices and HDD devices, ensure there are no file system, GPT, RAID, or other signatures on them. If they are not raw, when you run lv-create.yml it might fail with the following error: Wipe storage device signatures (optional): If the devices have signatures you can use wipefs to erase them. An example of using wipefs to erase the devices is shown below: Run the lv-teardown.yml Ansible playbook: Always run lv-teardown.yml before running lv-create.yml : Run the lv-teardown.yml Ansible playbook: Warning Proceed with caution when running the lv-teardown.yml Ansible script. It destroys data. Ensure you have backups of any important data. Run the lv-create.yml Ansible playbook: Once lv-create.yml completes without error continue to the next section to verify it worked properly. 10.1.6. Verify LVM Configuration Review lv-created.log : Once the lv-create.yml Ansible playbook completes successfully, configuration information will be written to lv-created.log . Later this information will be copied into group_vars/osds.yml . 
Open lv-created.log and look for information similar to the below example: Review LVM configuration Based on the example of one NVMe device and four HDDs the following Logical Volumes (LVs) should be created: One journal LV per HDD placed on NVMe (four LVs on /dev/nvme0n1) One data LV per HDD placed on each HDD (one LV per HDD) One journal LV for bucket index placed on NVMe (one LV on /dev/nvme0n1) One data LV for bucket index placed on NVMe (one LV on /dev/nvme0n1) The LVs can be seen in lsblk and lvscan output. In the example explained above, there should be ten LVs for Ceph. As a rough sanity check you could count the Ceph LVs to make sure there are at least ten, but ideally you would make sure the appropriate LVs were created on the right storage devices (NVMe vs HDD). Example output from lsblk is shown below: Example lvscan output is below: 10.1.7. Edit The osds.yml and all.yml Ansible Playbooks Copy the previously mentioned configuration information from lv-created.log into group_vars/osds.yml under the lvm_volumes: line. Set osd_scenario to lvm : Set osd_objectstore: filestore in all.yml and osds.yml . The osds.yml file should look similar to this: 10.1.8. Install Ceph for NVMe and Verify Success After configuring Ceph for installation to use NVMe with LVM optimally, install it. Run the site.yml Ansible playbook to install Ceph Verify Ceph is running properly after install completes Example ceph -s output showing Ceph is running properly: Example ceph osd tree output showing Ceph is running properly: Ceph is now set up to use one NVMe device and LVM optimally for the Ceph Object Gateway. 10.2. Using Two NVMe Devices Follow this procedure to deploy Ceph for Object Gateway usage with two NVMe devices. 10.2.1. Purge Any Existing Ceph Cluster If Ceph is already configured, purge it in order to start over. An ansible playbook file named purge-cluster.yml is provided for this purpose. For more information on how to use purge-cluster.yml see Purging a Ceph Cluster by Using Ansible in the Installation Guide . Important Purging the cluster may not be enough to prepare the servers for redeploying Ceph using the following procedures. Any file system, GPT, RAID, or other signatures on storage devices used by Ceph may cause problems. Instructions to remove any signatures using wipefs are provided under Run The lv-create.yml Ansible Playbook . 10.2.2. Configure The Cluster for Normal Installation Setting aside any NVMe and/or LVM considerations, configure the cluster as you would normally but stop before running the Ansible playbook. Afterwards, the cluster installation configuration will be adjusted specifically for optimal NVMe/LVM usage to support the Object Gateway. Only at that time should the Ansible playbook be run. To configure the cluster for normal installation consult the Installation Guide . In particular, complete the steps in Installing a Red Hat Ceph Storage Cluster through Step 9 creating an Ansible log directory. Stop before Step 10 when ansible-playbook site.yml -i hosts is run. Important Do not run ansible-playbook site.yml -i hosts until all the steps after this and before Install Ceph for NVMe and Verify Success have been completed. 10.2.3. Identify The NVMe and HDD Devices Use lsblk to identify the NVMe and HDD devices connected to the server. 
Example output from lsblk is listed below: In this example the following raw block devices will be used: NVMe devices /dev/nvme0n1 /dev/nvme1n1 HDD devices /dev/sdc /dev/sdd /dev/sde /dev/sdf The file lv_vars.yaml configures logical volume creation on the chosen devices. It creates journals on NVMe, an NVMe based bucket index, and HDD based OSDs. The actual creation of logical volumes is initiated by lv-create.yml , which reads lv_vars.yaml . That file should only have one NVMe device referenced in it at a time. It should also only reference the HDD devices to be associated with that particular NVMe device. For OSDs that contain more than one NVMe device edit lv_vars.yaml for each NVMe and run lv-create.yml repeatedly for each NVMe. This is explained below. In the example this means lv-create.yml will first be run on /dev/nvme0n1 and then again on /dev/nvme1n1 . 10.2.4. Add The Devices to lv_vars.yaml As root , navigate to the /usr/share/ceph-ansible/ directory: As root , copy the lv_vars.yaml Ansible playbook to the current directory: For the first run edit the file so it includes the following lines: The journal size, number of bucket indexes, their sizes and names, and the bucket indexes' journal names can all be adjusted in lv_vars.yaml . See the comments within the file for more information. 10.2.5. Run The lv-create.yml Ansible Playbook The purpose of the lv-create.yml playbook is to create logical volumes for the object gateway bucket index, and journals, on a single NVMe. It does this by using osd_scenario=lvm as opposed to using osd_scenario=non-collocated . The lv-create.yml Ansible playbook makes it easier to configure Ceph in this way by automating some of the complex LVM creation and configuration. As root , copy the lv-create.yml Ansible playbook to the current directory: Ensure the storage devices are raw Before running lv-create.yml to create the logical volumes on the NVMe devices and HDD devices, ensure there are no file system, GPT, RAID, or other signatures on them. If they are not raw, when you run lv-create.yml it may fail with the following error: Wipe storage device signatures (optional) If the devices have signatures you can use wipefs to erase them. An example of using wipefs to erase the devices is shown below: Run the lv-teardown.yml Ansible playbook: Always run lv-teardown.yml before running lv-create.yml : As root , copy the lv-teardown.yml Ansible playbook to the current directory: Run the lv-teardown.yml Ansible playbook: Warning Proceed with caution when running the lv-teardown.yml Ansible script. It destroys data. Ensure you have backups of any important data. Run the lv-create.yml Ansible playbook: 10.2.6. Copy First NVMe LVM Configuration Review lv-created.log Once the lv-create.yml Ansible playbook completes successfully, configuration information will be written to lv-created.log . Open lv-created.log and look for information similar to the below example: Copy this information into group_vars/osds.yml under lvm_volumes: . 10.2.7. Run The lv-create.yml Playbook on NVMe device two The following instructions are abbreviated steps to set up a second NVMe device. Consult the related steps above for further context if needed. Modify lv-vars.yaml to use the second NVMe and associated HDDs. Following the example, lv-vars.yaml will now have the following devices set: Run lv-teardown.yml : Run lv-create.yml again 10.2.8. 
Copy Second NVMe LVM Configuration Review lv-created.log Once the lv-create.yml Ansible playbook completes successfully, configuration information will be written to lv-created.log . Open lv-created.log and look for information similar to the below example: Copy this information into group_vars/osds.yml under the already entered information under lvm_volumes: . 10.2.9. Verify LVM Configuration Review LVM Configuration Based on the example of two NVMe devices and four HDDs the following Logical Volumes (LVs) should be created: One journal LV per HDD placed on both NVMe devices (two LVs on /dev/nvme0n1, two on /dev/nvme1n1) One data LV per HDD placed on each HDD (one LV per HDD) One journal LV per bucket index placed on NVMe (one LV on /dev/nvme0n1, one LV on /dev/nvme1n1) One data LV per bucket index placed on both NVMe devices (one LV on /dev/nvme0n1, one LV on /dev/nvme1n1) The LVs can be seen in lsblk and lvscan output. In the example explained above, there should be twelve LVs for Ceph. As a rough sanity check you could count the Ceph LVs to make sure there are at least twelve, but ideally you would make sure the appropriate LVs were created on the right storage devices (NVMe vs HDD). Example output from lsblk is shown below: Example output from lvscan is shown below: 10.2.10. Edit The osds.yml and all.yml Ansible Playbooks Set osd_objectstore to filestore In addition to adding the second set of information from lv-created.log into osds.yml , osd_objectstore also needs to be set to filestore in both the osds.yml and all.yml files. The line should look like this in both osds.yml and all.yml : Set osd_scenario to lvm in osds.yml The osds.yml file should look similar to the following example: 10.2.11. Install Ceph for NVMe and Verify Success Run the site.yml Ansible playbook to install Ceph Verify Ceph is running properly after install completes Example ceph -s output showing Ceph is running properly: Example ceph osd tree output showing Ceph is running properly: Ceph is now set up to use two NVMe devices and LVM optimally for the Ceph Object Gateway.
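Following the device names used throughout this example, the second lv-create.yml run described in the abbreviated steps above would reference the remaining devices. A sketch of what lv_vars.yaml might contain for that second run (an assumption based on the example layout, not text taken from the original configuration):

nvme_device: /dev/nvme1n1
hdd_devices:
  - /dev/sde
  - /dev/sdf

As before, run lv-teardown.yml and then lv-create.yml after editing the file, and append the resulting entries from lv-created.log to lvm_volumes: in group_vars/osds.yml.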
[ "ansible-playbook infrastructure-playbooks/purge-cluster.yml -i hosts", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk ├─sda1 8:1 0 4G 0 part │ └─md1 9:1 0 4G 0 raid1 [SWAP] ├─sda2 8:2 0 512M 0 part │ └─md0 9:0 0 512M 0 raid1 /boot └─sda3 8:3 0 461.3G 0 part └─md2 9:2 0 461.1G 0 raid1 / sdb 8:16 0 465.8G 0 disk ├─sdb1 8:17 0 4G 0 part │ └─md1 9:1 0 4G 0 raid1 [SWAP] ├─sdb2 8:18 0 512M 0 part │ └─md0 9:0 0 512M 0 raid1 /boot └─sdb3 8:19 0 461.3G 0 part └─md2 9:2 0 461.1G 0 raid1 / sdc 8:32 0 1.8T 0 disk sdd 8:48 0 1.8T 0 disk sde 8:64 0 1.8T 0 disk sdf 8:80 0 1.8T 0 disk sdg 8:96 0 1.8T 0 disk sdh 8:112 0 1.8T 0 disk sdi 8:128 0 1.8T 0 disk sdj 8:144 0 1.8T 0 disk sdk 8:160 0 1.8T 0 disk sdl 8:176 0 1.8T 0 disk sdm 8:192 0 1.8T 0 disk sdn 8:208 0 1.8T 0 disk sdo 8:224 0 1.8T 0 disk sdp 8:240 0 1.8T 0 disk sdq 65:0 0 1.8T 0 disk sdr 65:16 0 1.8T 0 disk sds 65:32 0 1.8T 0 disk sdt 65:48 0 1.8T 0 disk sdu 65:64 0 1.8T 0 disk sdv 65:80 0 1.8T 0 disk sdw 65:96 0 1.8T 0 disk sdx 65:112 0 1.8T 0 disk sdy 65:128 0 1.8T 0 disk sdz 65:144 0 1.8T 0 disk sdaa 65:160 0 1.8T 0 disk sdab 65:176 0 1.8T 0 disk sdac 65:192 0 1.8T 0 disk sdad 65:208 0 1.8T 0 disk sdae 65:224 0 1.8T 0 disk sdaf 65:240 0 1.8T 0 disk sdag 66:0 0 1.8T 0 disk sdah 66:16 0 1.8T 0 disk sdai 66:32 0 1.8T 0 disk sdaj 66:48 0 1.8T 0 disk sdak 66:64 0 1.8T 0 disk sdal 66:80 0 1.8T 0 disk nvme0n1 259:0 0 745.2G 0 disk nvme1n1 259:1 0 745.2G 0 disk", "cd /usr/share/ceph-ansible", "nvme_device: /dev/nvme0n1 hdd_devices: - /dev/sdc - /dev/sdd - /dev/sde - /dev/sdf", "device /dev/sdc excluded by a filter", "wipefs -a /dev/sdc /dev/sdc: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/sdc: 8 bytes were erased at offset 0x1d19ffffe00 (gpt): 45 46 49 20 50 41 52 54 /dev/sdc: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa /dev/sdc: calling ioclt to re-read partition table: Success wipefs -a /dev/sdd /dev/sdd: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/sdd: 8 bytes were erased at offset 0x1d19ffffe00 (gpt): 45 46 49 20 50 41 52 54 /dev/sdd: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa /dev/sdd: calling ioclt to re-read partition table: Success wipefs -a /dev/sde /dev/sde: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/sde: 8 bytes were erased at offset 0x1d19ffffe00 (gpt): 45 46 49 20 50 41 52 54 /dev/sde: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa /dev/sde: calling ioclt to re-read partition table: Success wipefs -a /dev/sdf /dev/sdf: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/sdf: 8 bytes were erased at offset 0x1d19ffffe00 (gpt): 45 46 49 20 50 41 52 54 /dev/sdf: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa /dev/sdf: calling ioclt to re-read partition table: Success", "ansible-playbook infrastructure-playbooks/lv-teardown.yml -i hosts", "ansible-playbook infrastructure-playbooks/lv-create.yml -i hosts", "- data: ceph-bucket-index-1 data_vg: ceph-nvme-vg-nvme0n1 journal: ceph-journal-bucket-index-1-nvme0n1 journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sdc data_vg: ceph-hdd-vg-sdc journal: ceph-journal-sdc journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sdd data_vg: ceph-hdd-vg-sdd journal: ceph-journal-sdd journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sde data_vg: ceph-hdd-vg-sde journal: ceph-journal-sde journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sdf data_vg: ceph-hdd-vg-sdf journal: ceph-journal-sdf journal_vg: 
ceph-nvme-vg-nvme0n1", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk ├─sda1 8:1 0 4G 0 part │ └─md1 9:1 0 4G 0 raid1 [SWAP] ├─sda2 8:2 0 512M 0 part │ └─md0 9:0 0 512M 0 raid1 /boot └─sda3 8:3 0 461.3G 0 part └─md2 9:2 0 461.1G 0 raid1 / sdb 8:16 0 465.8G 0 disk ├─sdb1 8:17 0 4G 0 part │ └─md1 9:1 0 4G 0 raid1 [SWAP] ├─sdb2 8:18 0 512M 0 part │ └─md0 9:0 0 512M 0 raid1 /boot └─sdb3 8:19 0 461.3G 0 part └─md2 9:2 0 461.1G 0 raid1 / sdc 8:32 0 1.8T 0 disk └─ceph--hdd--vg--sdc-ceph--hdd--lv--sdc 253:6 0 1.8T 0 lvm sdd 8:48 0 1.8T 0 disk └─ceph--hdd--vg--sdd-ceph--hdd--lv--sdd 253:7 0 1.8T 0 lvm sde 8:64 0 1.8T 0 disk └─ceph--hdd--vg--sde-ceph--hdd--lv--sde 253:8 0 1.8T 0 lvm sdf 8:80 0 1.8T 0 disk └─ceph--hdd--vg--sdf-ceph--hdd--lv--sdf 253:9 0 1.8T 0 lvm sdg 8:96 0 1.8T 0 disk sdh 8:112 0 1.8T 0 disk sdi 8:128 0 1.8T 0 disk sdj 8:144 0 1.8T 0 disk sdk 8:160 0 1.8T 0 disk sdl 8:176 0 1.8T 0 disk sdm 8:192 0 1.8T 0 disk sdn 8:208 0 1.8T 0 disk sdo 8:224 0 1.8T 0 disk sdp 8:240 0 1.8T 0 disk sdq 65:0 0 1.8T 0 disk sdr 65:16 0 1.8T 0 disk sds 65:32 0 1.8T 0 disk sdt 65:48 0 1.8T 0 disk sdu 65:64 0 1.8T 0 disk sdv 65:80 0 1.8T 0 disk sdw 65:96 0 1.8T 0 disk sdx 65:112 0 1.8T 0 disk sdy 65:128 0 1.8T 0 disk sdz 65:144 0 1.8T 0 disk sdaa 65:160 0 1.8T 0 disk sdab 65:176 0 1.8T 0 disk sdac 65:192 0 1.8T 0 disk sdad 65:208 0 1.8T 0 disk sdae 65:224 0 1.8T 0 disk sdaf 65:240 0 1.8T 0 disk sdag 66:0 0 1.8T 0 disk sdah 66:16 0 1.8T 0 disk sdai 66:32 0 1.8T 0 disk sdaj 66:48 0 1.8T 0 disk sdak 66:64 0 1.8T 0 disk sdal 66:80 0 1.8T 0 disk nvme0n1 259:0 0 745.2G 0 disk ├─ceph--nvme--vg--nvme0n1-ceph--journal--bucket--index--1--nvme0n1 253:0 0 5.4G 0 lvm ├─ceph--nvme--vg--nvme0n1-ceph--journal--sdc 253:1 0 5.4G 0 lvm ├─ceph--nvme--vg--nvme0n1-ceph--journal--sdd 253:2 0 5.4G 0 lvm ├─ceph--nvme--vg--nvme0n1-ceph--journal--sde 253:3 0 5.4G 0 lvm ├─ceph--nvme--vg--nvme0n1-ceph--journal--sdf 253:4 0 5.4G 0 lvm └─ceph--nvme--vg--nvme0n1-ceph--bucket--index--1 253:5 0 718.4G 0 lvm nvme1n1 259:1 0 745.2G 0 disk", "lvscan ACTIVE '/dev/ceph-hdd-vg-sdf/ceph-hdd-lv-sdf' [<1.82 TiB] inherit ACTIVE '/dev/ceph-hdd-vg-sde/ceph-hdd-lv-sde' [<1.82 TiB] inherit ACTIVE '/dev/ceph-hdd-vg-sdd/ceph-hdd-lv-sdd' [<1.82 TiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme0n1/ceph-journal-bucket-index-1-nvme0n1' [5.37 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme0n1/ceph-journal-sdc' [5.37 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme0n1/ceph-journal-sdd' [5.37 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme0n1/ceph-journal-sde' [5.37 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme0n1/ceph-journal-sdf' [5.37 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme0n1/ceph-bucket-index-1' [<718.36 GiB] inherit ACTIVE '/dev/ceph-hdd-vg-sdc/ceph-hdd-lv-sdc' [<1.82 TiB] inherit", "osd_scenario: lvm", "Variables here are applicable to all host groups NOT roles osd_objectstore: filestore osd_scenario: lvm lvm_volumes: - data: ceph-bucket-index-1 data_vg: ceph-nvme-vg-nvme0n1 journal: ceph-journal-bucket-index-1-nvme0n1 journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sdc data_vg: ceph-hdd-vg-sdc journal: ceph-journal-sdc journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sdd data_vg: ceph-hdd-vg-sdd journal: ceph-journal-sdd journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sde data_vg: ceph-hdd-vg-sde journal: ceph-journal-sde journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sdf data_vg: ceph-hdd-vg-sdf journal: ceph-journal-sdf journal_vg: ceph-nvme-vg-nvme0n1", "ansible-playbook -v site.yml -i hosts", "ceph -s", "ceph osd tree", 
"ceph -s cluster: id: 15d31a8c-3152-4fa2-8c4e-809b750924cd health: HEALTH_WARN Reduced data availability: 32 pgs inactive services: mon: 3 daemons, quorum b08-h03-r620,b08-h05-r620,b08-h06-r620 mgr: b08-h03-r620(active), standbys: b08-h05-r620, b08-h06-r620 osd: 35 osds: 35 up, 35 in data: pools: 4 pools, 32 pgs objects: 0 objects, 0 bytes usage: 0 kB used, 0 kB / 0 kB avail pgs: 100.000% pgs unknown 32 unknown", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 55.81212 root default -15 7.97316 host c04-h01-6048r 13 hdd 1.81799 osd.13 up 1.00000 1.00000 20 hdd 1.81799 osd.20 up 1.00000 1.00000 26 hdd 1.81799 osd.26 up 1.00000 1.00000 32 hdd 1.81799 osd.32 up 1.00000 1.00000 6 ssd 0.70119 osd.6 up 1.00000 1.00000 -3 7.97316 host c04-h05-6048r 12 hdd 1.81799 osd.12 up 1.00000 1.00000 17 hdd 1.81799 osd.17 up 1.00000 1.00000 23 hdd 1.81799 osd.23 up 1.00000 1.00000 29 hdd 1.81799 osd.29 up 1.00000 1.00000 2 ssd 0.70119 osd.2 up 1.00000 1.00000 -13 7.97316 host c04-h09-6048r 11 hdd 1.81799 osd.11 up 1.00000 1.00000 16 hdd 1.81799 osd.16 up 1.00000 1.00000 22 hdd 1.81799 osd.22 up 1.00000 1.00000 27 hdd 1.81799 osd.27 up 1.00000 1.00000 4 ssd 0.70119 osd.4 up 1.00000 1.00000 -5 7.97316 host c04-h13-6048r 10 hdd 1.81799 osd.10 up 1.00000 1.00000 15 hdd 1.81799 osd.15 up 1.00000 1.00000 21 hdd 1.81799 osd.21 up 1.00000 1.00000 28 hdd 1.81799 osd.28 up 1.00000 1.00000 1 ssd 0.70119 osd.1 up 1.00000 1.00000 -9 7.97316 host c04-h21-6048r 8 hdd 1.81799 osd.8 up 1.00000 1.00000 18 hdd 1.81799 osd.18 up 1.00000 1.00000 25 hdd 1.81799 osd.25 up 1.00000 1.00000 30 hdd 1.81799 osd.30 up 1.00000 1.00000 5 ssd 0.70119 osd.5 up 1.00000 1.00000 -11 7.97316 host c04-h25-6048r 9 hdd 1.81799 osd.9 up 1.00000 1.00000 14 hdd 1.81799 osd.14 up 1.00000 1.00000 33 hdd 1.81799 osd.33 up 1.00000 1.00000 34 hdd 1.81799 osd.34 up 1.00000 1.00000 0 ssd 0.70119 osd.0 up 1.00000 1.00000 -7 7.97316 host c04-h29-6048r 7 hdd 1.81799 osd.7 up 1.00000 1.00000 19 hdd 1.81799 osd.19 up 1.00000 1.00000 24 hdd 1.81799 osd.24 up 1.00000 1.00000 31 hdd 1.81799 osd.31 up 1.00000 1.00000 3 ssd 0.70119 osd.3 up 1.00000 1.00000", "ansible-playbook purge-cluster.yml -i hosts", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk ├─sda1 8:1 0 512M 0 part /boot └─sda2 8:2 0 465.3G 0 part ├─vg_c04--h09--6048r-lv_root 253:0 0 464.8G 0 lvm / └─vg_c04--h09--6048r-lv_swap 253:1 0 512M 0 lvm [SWAP] sdb 8:16 0 465.8G 0 disk sdc 8:32 0 1.8T 0 disk sdd 8:48 0 1.8T 0 disk sde 8:64 0 1.8T 0 disk sdf 8:80 0 1.8T 0 disk sdg 8:96 0 1.8T 0 disk sdh 8:112 0 1.8T 0 disk sdi 8:128 0 1.8T 0 disk sdj 8:144 0 1.8T 0 disk sdk 8:160 0 1.8T 0 disk sdl 8:176 0 1.8T 0 disk sdm 8:192 0 1.8T 0 disk sdn 8:208 0 1.8T 0 disk sdo 8:224 0 1.8T 0 disk sdp 8:240 0 1.8T 0 disk sdq 65:0 0 1.8T 0 disk sdr 65:16 0 1.8T 0 disk sds 65:32 0 1.8T 0 disk sdt 65:48 0 1.8T 0 disk sdu 65:64 0 1.8T 0 disk sdv 65:80 0 1.8T 0 disk sdw 65:96 0 1.8T 0 disk sdx 65:112 0 1.8T 0 disk sdy 65:128 0 1.8T 0 disk sdz 65:144 0 1.8T 0 disk sdaa 65:160 0 1.8T 0 disk sdab 65:176 0 1.8T 0 disk sdac 65:192 0 1.8T 0 disk sdad 65:208 0 1.8T 0 disk sdae 65:224 0 1.8T 0 disk sdaf 65:240 0 1.8T 0 disk sdag 66:0 0 1.8T 0 disk sdah 66:16 0 1.8T 0 disk sdai 66:32 0 1.8T 0 disk sdaj 66:48 0 1.8T 0 disk sdak 66:64 0 1.8T 0 disk sdal 66:80 0 1.8T 0 disk nvme0n1 259:1 0 745.2G 0 disk nvme1n1 259:0 0 745.2G 0 disk", "cd /usr/share/ceph-ansible", "cp infrastructure-playbooks/vars/lv_vars.yaml .", "nvme_device: /dev/nvme0n1 hdd_devices: - /dev/sdc - /dev/sdd", "cp 
infrastructure-playbooks/lv-create.yml .", "device /dev/sdc excluded by a filter", "wipefs -a /dev/sdc /dev/sdc: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/sdc: 8 bytes were erased at offset 0x1d19ffffe00 (gpt): 45 46 49 20 50 41 52 54 /dev/sdc: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa /dev/sdc: calling ioclt to re-read partition table: Success wipefs -a /dev/sdd /dev/sdd: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/sdd: 8 bytes were erased at offset 0x1d19ffffe00 (gpt): 45 46 49 20 50 41 52 54 /dev/sdd: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa /dev/sdd: calling ioclt to re-read partition table: Success wipefs -a /dev/sde /dev/sde: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/sde: 8 bytes were erased at offset 0x1d19ffffe00 (gpt): 45 46 49 20 50 41 52 54 /dev/sde: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa /dev/sde: calling ioclt to re-read partition table: Success wipefs -a /dev/sdf /dev/sdf: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/sdf: 8 bytes were erased at offset 0x1d19ffffe00 (gpt): 45 46 49 20 50 41 52 54 /dev/sdf: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa /dev/sdf: calling ioclt to re-read partition table: Success", "cp infrastructure-playbooks/lv-teardown.yml .", "ansible-playbook lv-teardown.yml -i hosts", "ansible-playbook lv-create.yml -i hosts", "- data: ceph-bucket-index-1 data_vg: ceph-nvme-vg-nvme0n1 journal: ceph-journal-bucket-index-1-nvme0n1 journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sdc data_vg: ceph-hdd-vg-sdc journal: ceph-journal-sdc journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sdd data_vg: ceph-hdd-vg-sdd journal: ceph-journal-sdd journal_vg: ceph-nvme-vg-nvme0n1", "nvme_device: /dev/nvme1n1 hdd_devices: - /dev/sde - /dev/sdf", "ansible-playbook lv-teardown.yml -i hosts", "ansible-playbook lv-create.yml -i hosts", "- data: ceph-bucket-index-1 data_vg: ceph-nvme-vg-nvme1n1 journal: ceph-journal-bucket-index-1-nvme1n1 journal_vg: ceph-nvme-vg-nvme1n1 - data: ceph-hdd-lv-sde data_vg: ceph-hdd-vg-sde journal: ceph-journal-sde journal_vg: ceph-nvme-vg-nvme1n1 - data: ceph-hdd-lv-sdf data_vg: ceph-hdd-vg-sdf journal: ceph-journal-sdf journal_vg: ceph-nvme-vg-nvme1n1", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk ├─sda1 8:1 0 4G 0 part │ └─md1 9:1 0 4G 0 raid1 [SWAP] ├─sda2 8:2 0 512M 0 part │ └─md0 9:0 0 512M 0 raid1 /boot └─sda3 8:3 0 461.3G 0 part └─md2 9:2 0 461.1G 0 raid1 / sdb 8:16 0 465.8G 0 disk ├─sdb1 8:17 0 4G 0 part │ └─md1 9:1 0 4G 0 raid1 [SWAP] ├─sdb2 8:18 0 512M 0 part │ └─md0 9:0 0 512M 0 raid1 /boot └─sdb3 8:19 0 461.3G 0 part └─md2 9:2 0 461.1G 0 raid1 / sdc 8:32 0 1.8T 0 disk └─ceph--hdd--vg--sdc-ceph--hdd--lv--sdc 253:4 0 1.8T 0 lvm sdd 8:48 0 1.8T 0 disk └─ceph--hdd--vg--sdd-ceph--hdd--lv--sdd 253:5 0 1.8T 0 lvm sde 8:64 0 1.8T 0 disk └─ceph--hdd--vg--sde-ceph--hdd--lv--sde 253:10 0 1.8T 0 lvm sdf 8:80 0 1.8T 0 disk └─ceph--hdd--vg--sdf-ceph--hdd--lv--sdf 253:11 0 1.8T 0 lvm sdg 8:96 0 1.8T 0 disk sdh 8:112 0 1.8T 0 disk sdi 8:128 0 1.8T 0 disk sdj 8:144 0 1.8T 0 disk sdk 8:160 0 1.8T 0 disk sdl 8:176 0 1.8T 0 disk sdm 8:192 0 1.8T 0 disk sdn 8:208 0 1.8T 0 disk sdo 8:224 0 1.8T 0 disk sdp 8:240 0 1.8T 0 disk sdq 65:0 0 1.8T 0 disk sdr 65:16 0 1.8T 0 disk sds 65:32 0 1.8T 0 disk sdt 65:48 0 1.8T 0 disk sdu 65:64 0 1.8T 0 disk sdv 65:80 0 1.8T 0 disk sdw 65:96 0 1.8T 0 disk sdx 65:112 0 1.8T 0 disk sdy 65:128 0 1.8T 0 disk 
sdz 65:144 0 1.8T 0 disk sdaa 65:160 0 1.8T 0 disk sdab 65:176 0 1.8T 0 disk sdac 65:192 0 1.8T 0 disk sdad 65:208 0 1.8T 0 disk sdae 65:224 0 1.8T 0 disk sdaf 65:240 0 1.8T 0 disk sdag 66:0 0 1.8T 0 disk sdah 66:16 0 1.8T 0 disk sdai 66:32 0 1.8T 0 disk sdaj 66:48 0 1.8T 0 disk sdak 66:64 0 1.8T 0 disk sdal 66:80 0 1.8T 0 disk nvme0n1 259:0 0 745.2G 0 disk ├─ceph--nvme--vg--nvme0n1-ceph--journal--bucket--index--1--nvme0n1 253:0 0 5.4G 0 lvm ├─ceph--nvme--vg--nvme0n1-ceph--journal--sdc 253:1 0 5.4G 0 lvm ├─ceph--nvme--vg--nvme0n1-ceph--journal--sdd 253:2 0 5.4G 0 lvm └─ceph--nvme--vg--nvme0n1-ceph--bucket--index--1 253:3 0 729.1G 0 lvm nvme1n1 259:1 0 745.2G 0 disk ├─ceph--nvme--vg--nvme1n1-ceph--journal--bucket--index--1--nvme1n1 253:6 0 5.4G 0 lvm ├─ceph--nvme--vg--nvme1n1-ceph--journal--sde 253:7 0 5.4G 0 lvm ├─ceph--nvme--vg--nvme1n1-ceph--journal--sdf 253:8 0 5.4G 0 lvm └─ceph--nvme--vg--nvme1n1-ceph--bucket--index--1 253:9 0 729.1G 0 lvm", "lvscan ACTIVE '/dev/ceph-hdd-vg-sde/ceph-hdd-lv-sde' [<1.82 TiB] inherit ACTIVE '/dev/ceph-hdd-vg-sdc/ceph-hdd-lv-sdc' [<1.82 TiB] inherit ACTIVE '/dev/ceph-hdd-vg-sdf/ceph-hdd-lv-sdf' [<1.82 TiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme1n1/ceph-journal-bucket-index-1-nvme1n1' [5.37 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme1n1/ceph-journal-sde' [5.37 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme1n1/ceph-journal-sdf' [5.37 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme1n1/ceph-bucket-index-1' [<729.10 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme0n1/ceph-journal-bucket-index-1-nvme0n1' [5.37 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme0n1/ceph-journal-sdc' [5.37 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme0n1/ceph-journal-sdd' [5.37 GiB] inherit ACTIVE '/dev/ceph-nvme-vg-nvme0n1/ceph-bucket-index-1' [<729.10 GiB] inherit ACTIVE '/dev/ceph-hdd-vg-sdd/ceph-hdd-lv-sdd' [<1.82 TiB] inherit", "osd_objectstore: bluestore", "Variables here are applicable to all host groups NOT roles osd_objectstore: bluestore osd_scenario: lvm lvm_volumes: - data: ceph-bucket-index-1 data_vg: ceph-nvme-vg-nvme0n1 journal: ceph-journal-bucket-index-1-nvme0n1 journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sdc data_vg: ceph-hdd-vg-sdc journal: ceph-journal-sdc journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-hdd-lv-sdd data_vg: ceph-hdd-vg-sdd journal: ceph-journal-sdd journal_vg: ceph-nvme-vg-nvme0n1 - data: ceph-bucket-index-1 data_vg: ceph-nvme-vg-nvme1n1 journal: ceph-journal-bucket-index-1-nvme1n1 journal_vg: ceph-nvme-vg-nvme1n1 - data: ceph-hdd-lv-sde data_vg: ceph-hdd-vg-sde journal: ceph-journal-sde journal_vg: ceph-nvme-vg-nvme1n1 - data: ceph-hdd-lv-sdf data_vg: ceph-hdd-vg-sdf journal: ceph-journal-sdf journal_vg: ceph-nvme-vg-nvme1n1", "ansible-playbook -v site.yml -i hosts", "ceph -s", "ceph osd tree", "ceph -s cluster: id: 9ba22f4c-b53f-4c49-8c72-220aaf567c2b health: HEALTH_WARN Reduced data availability: 32 pgs inactive services: mon: 3 daemons, quorum b08-h03-r620,b08-h05-r620,b08-h06-r620 mgr: b08-h03-r620(active), standbys: b08-h05-r620, b08-h06-r620 osd: 42 osds: 42 up, 42 in data: pools: 4 pools, 32 pgs objects: 0 objects, 0 bytes usage: 0 kB used, 0 kB / 0 kB avail pgs: 100.000% pgs unknown 32 unknown", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 60.86740 root default -7 8.69534 host c04-h01-6048r 10 hdd 1.81799 osd.10 up 1.00000 1.00000 13 hdd 1.81799 osd.13 up 1.00000 1.00000 21 hdd 1.81799 osd.21 up 1.00000 1.00000 27 hdd 1.81799 osd.27 up 1.00000 1.00000 6 ssd 0.71169 osd.6 up 1.00000 1.00000 15 ssd 0.71169 osd.15 up 1.00000 
1.00000 -3 8.69534 host c04-h05-6048r 7 hdd 1.81799 osd.7 up 1.00000 1.00000 20 hdd 1.81799 osd.20 up 1.00000 1.00000 29 hdd 1.81799 osd.29 up 1.00000 1.00000 38 hdd 1.81799 osd.38 up 1.00000 1.00000 4 ssd 0.71169 osd.4 up 1.00000 1.00000 25 ssd 0.71169 osd.25 up 1.00000 1.00000 -22 8.69534 host c04-h09-6048r 17 hdd 1.81799 osd.17 up 1.00000 1.00000 31 hdd 1.81799 osd.31 up 1.00000 1.00000 35 hdd 1.81799 osd.35 up 1.00000 1.00000 39 hdd 1.81799 osd.39 up 1.00000 1.00000 5 ssd 0.71169 osd.5 up 1.00000 1.00000 34 ssd 0.71169 osd.34 up 1.00000 1.00000 -9 8.69534 host c04-h13-6048r 8 hdd 1.81799 osd.8 up 1.00000 1.00000 11 hdd 1.81799 osd.11 up 1.00000 1.00000 30 hdd 1.81799 osd.30 up 1.00000 1.00000 32 hdd 1.81799 osd.32 up 1.00000 1.00000 0 ssd 0.71169 osd.0 up 1.00000 1.00000 26 ssd 0.71169 osd.26 up 1.00000 1.00000 -19 8.69534 host c04-h21-6048r 18 hdd 1.81799 osd.18 up 1.00000 1.00000 23 hdd 1.81799 osd.23 up 1.00000 1.00000 36 hdd 1.81799 osd.36 up 1.00000 1.00000 40 hdd 1.81799 osd.40 up 1.00000 1.00000 3 ssd 0.71169 osd.3 up 1.00000 1.00000 33 ssd 0.71169 osd.33 up 1.00000 1.00000 -16 8.69534 host c04-h25-6048r 16 hdd 1.81799 osd.16 up 1.00000 1.00000 22 hdd 1.81799 osd.22 up 1.00000 1.00000 37 hdd 1.81799 osd.37 up 1.00000 1.00000 41 hdd 1.81799 osd.41 up 1.00000 1.00000 1 ssd 0.71169 osd.1 up 1.00000 1.00000 28 ssd 0.71169 osd.28 up 1.00000 1.00000 -5 8.69534 host c04-h29-6048r 9 hdd 1.81799 osd.9 up 1.00000 1.00000 12 hdd 1.81799 osd.12 up 1.00000 1.00000 19 hdd 1.81799 osd.19 up 1.00000 1.00000 24 hdd 1.81799 osd.24 up 1.00000 1.00000 2 ssd 0.71169 osd.2 up 1.00000 1.00000 14 ssd 0.71169 osd.14 up 1.00000 1.00000" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/object_gateway_for_production_guide/using-nvme-with-lvm-optimally
Chapter 14. Installing a cluster on AWS with compute nodes on AWS Local Zones
Chapter 14. Installing a cluster on AWS with compute nodes on AWS Local Zones You can quickly install an OpenShift Container Platform cluster on Amazon Web Services (AWS) Local Zones by setting the zone names in the edge compute pool of the install-config.yaml file, or install a cluster in an existing Amazon Virtual Private Cloud (VPC) with Local Zone subnets. AWS Local Zones is an infrastructure that places Cloud Resources close to metropolitan regions. For more information, see the AWS Local Zones Documentation . 14.1. Infrastructure prerequisites You reviewed details about OpenShift Container Platform installation and update processes. You are familiar with Selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Warning If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster must access. You noted the region and supported AWS Local Zones locations to create the network resources in. You read the AWS Local Zones features in the AWS documentation. You added permissions for creating network resources that support AWS Local Zones to the Identity and Access Management (IAM) user or role. The following example enables a zone group that can provide a user or role access for creating network resources that support AWS Local Zones. Example of an additional IAM policy with the ec2:ModifyAvailabilityZoneGroup permission attached to an IAM user or role. { "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:ModifyAvailabilityZoneGroup" ], "Effect": "Allow", "Resource": "*" } ] } 14.2. About AWS Local Zones and edge compute pool Read the following sections to understand infrastructure behaviors and cluster limitations in an AWS Local Zones environment. 14.2.1. Cluster limitations in AWS Local Zones Some limitations exist when you try to deploy a cluster with a default installation configuration in an Amazon Web Services (AWS) Local Zone. Important The following list details limitations when deploying a cluster in a pre-configured AWS zone: The maximum transmission unit (MTU) between an Amazon EC2 instance in a zone and an Amazon EC2 instance in the Region is 1300 . This causes the cluster-wide network MTU to change according to the network plugin that is used with the deployment. Network resources such as Network Load Balancer (NLB), Classic Load Balancer, and Network Address Translation (NAT) Gateways are not globally supported. For an OpenShift Container Platform cluster on AWS, the AWS Elastic Block Storage (EBS) gp3 type volume is the default for node volumes and the default for the storage class. This volume type is not globally available on zone locations. By default, the nodes running in zones are deployed with the gp2 EBS volume. The gp2-csi StorageClass parameter must be set when creating workloads on zone nodes.
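As an illustration of the last point, the following is a minimal sketch, not taken from the product documentation, of a workload that requests the gp2-csi storage class on a Local Zone node; the claim name, pod name, and image are placeholders, and the node label and toleration assume the edge compute pool defaults that are described later in this chapter:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-edge-claim          # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2-csi         # required for volumes bound on Local Zone nodes
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-edge-pod            # placeholder name
spec:
  nodeSelector:
    node-role.kubernetes.io/edge: ""        # schedule onto an edge (Local Zone) node
  tolerations:
  - key: node-role.kubernetes.io/edge       # assumed taint key applied by the edge machine sets
    operator: Exists
    effect: NoSchedule
  containers:
  - name: app
    image: registry.example.com/app:latest  # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-edge-claim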
If you want the installation program to automatically create Local Zone subnets for your OpenShift Container Platform cluster, specific configuration limitations apply with this method. Important The following configuration limitation applies when you set the installation program to automatically create subnets for your OpenShift Container Platform cluster: When the installation program creates private subnets in AWS Local Zones, the program associates each subnet with the route table of its parent zone. This operation ensures that each private subnet can route egress traffic to the internet by way of NAT Gateways in an AWS Region. If the parent-zone route table does not exist during cluster installation, the installation program associates any private subnet with the first available private route table in the Amazon Virtual Private Cloud (VPC). This approach is valid only for AWS Local Zones subnets in an OpenShift Container Platform cluster. 14.2.2. About edge compute pools Edge compute nodes are tainted compute nodes that run in AWS Local Zones locations. When deploying a cluster that uses Local Zones, consider the following points: Amazon EC2 instances in the Local Zones are more expensive than Amazon EC2 instances in the Availability Zones. The latency is lower between the applications running in AWS Local Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Local Zones and Availability Zones. Important Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Local Zones and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must be always less than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example: OVN-Kubernetes has an overhead of 100 bytes . The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing. For more information, see How Local Zones work in the AWS documentation. OpenShift Container Platform 4.12 introduced a new compute pool, edge , that is designed for use in remote zones. The edge compute pool configuration is common between AWS Local Zones locations. Because of the type and size limitations of resources like EC2 and EBS on Local Zones resources, the default instance type can vary from the traditional compute pool. The default Elastic Block Store (EBS) for Local Zones locations is gp2 , which differs from the non-edge compute pool. The instance type used for each Local Zones on an edge compute pool also might differ from other compute pools, depending on the instance offerings on the zone. The edge compute pool creates new labels that developers can use to deploy applications onto AWS Local Zones nodes. The new labels are: node-role.kubernetes.io/edge='' machine.openshift.io/zone-type=local-zone machine.openshift.io/zone-group=USDZONE_GROUP_NAME By default, the machine sets for the edge compute pool define the taint of NoSchedule to prevent other workloads from spreading on Local Zones instances. Users can only run user workloads if they define tolerations in the pod specification. Additional resources MTU value selection Changing the MTU for the cluster network Understanding taints and tolerations Storage classes Ingress Controller sharding 14.3. Installation prerequisites Before you install a cluster in an AWS Local Zones environment, you must configure your infrastructure so that it can adopt Local Zone capabilities. 14.3.1. 
Opting in to AWS Local Zones If you plan to create subnets in AWS Local Zones, you must opt in to each zone group separately. Prerequisites You have installed the AWS CLI. You have determined an AWS Region where you want to deploy your OpenShift Container Platform cluster. You have attached a permissive IAM policy to a user or role account that opts in to the zone group. Procedure List the zones that are available in your AWS Region by running the following command: Example command for listing available AWS Local Zones in an AWS Region USD aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=local-zone \ --all-availability-zones Depending on the AWS Region, the list of available zones might be long. The command returns the following fields: ZoneName The name of the Local Zones. GroupName The group that comprises the zone. To opt in to the Region, save the name. Status The status of the Local Zones group. If the status is not-opted-in , you must opt in to the GroupName as described in the next step. Opt in to the zone group on your AWS account by running the following command: USD aws ec2 modify-availability-zone-group \ --group-name "<value_of_GroupName>" \ 1 --opt-in-status opted-in 1 Replace <value_of_GroupName> with the name of the group of the Local Zones where you want to create subnets. For example, specify us-east-1-nyc-1 to use the zone us-east-1-nyc-1a (US East New York). 14.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 14.3.3. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster.
Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 14.3.4. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 14.3.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. 
Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 14.3.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 14.4. Preparing for the installation Before you extend nodes to Local Zones, you must prepare certain resources for the cluster installation environment. 14.4.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 14.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 14.4.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform for use with AWS Local Zones. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 14.1. Machine types based on 64-bit x86 architecture for AWS Local Zones c5.* c5d.* m6i.* m5.* r5.* t3.* Additional resources See AWS Local Zones features in the AWS documentation. 14.4.3. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. 
Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 14.4.4. Examples of installation configuration files with edge compute pools The following examples show install-config.yaml files that contain an edge machine pool configuration. Configuration that uses an edge pool with a custom instance type apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Instance types differ between locations. To verify availability in the Local Zones in which the cluster runs, see the AWS documentation. Configuration that uses an edge pool with a custom Amazon Elastic Block Store (EBS) type apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-phx-2a rootVolume: type: gp3 size: 120 platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Elastic Block Storage (EBS) types differ between locations. Check the AWS documentation to verify availability in the Local Zones in which the cluster runs. Configuration that uses an edge pool with custom security groups apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 Specify the name of the security group as it is displayed on the Amazon EC2 console. Ensure that you include the sg prefix. 14.4.5. Customizing the cluster network MTU Before you deploy a cluster on AWS, you can customize the cluster network maximum transmission unit (MTU) for your cluster network to meet the needs of your infrastructure. By default, when you install a cluster with supported Local Zones capabilities, the MTU value for the cluster network is automatically adjusted to the lowest value that the network plugin accepts. Important Setting an unsupported MTU value for EC2 instances that operate in the Local Zones infrastructure can cause issues for your OpenShift Container Platform cluster. If the Local Zone supports higher MTU values in between EC2 instances in the Local Zone and the AWS Region, you can manually configure the higher value to increase the network performance of the cluster network. You can customize the MTU for a cluster by specifying the networking.clusterNetworkMTU parameter in the install-config.yaml configuration file. Important All subnets in Local Zones must support the higher MTU value, so that each node in that zone can successfully communicate with services in the AWS Region and deploy your workloads. Example of overwriting the default MTU value apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: edge-zone networking: clusterNetworkMTU: 8901 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 
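To make the sizing rule concrete, the following is a short sketch of the arithmetic, assuming OVN-Kubernetes with its 100-byte overhead; the 9001 figure is an assumption about a zone that supports jumbo frames between its EC2 instances and is not guaranteed for every location:

# clusterNetworkMTU <= (EC2 MTU between the zone and the Region) - (network plugin overhead)
# Default Local Zones limit:  1300 - 100 = 1200
# Jumbo-frame capable zone:   9001 - 100 = 8901 (the value used in the example above)
networking:
  clusterNetworkMTU: 1200   # conservative value that always fits the default 1300-byte limit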
Additional resources For more information about the maximum supported maximum transmission unit (MTU) value, see AWS resources supported in Local Zones in the AWS documentation. 14.5. Cluster installation options for an AWS Local Zones environment Choose one of the following installation options to install an OpenShift Container Platform cluster on AWS with edge compute nodes defined in Local Zones: Fully automated option: Installing a cluster to quickly extend compute nodes to edge compute pools, where the installation program automatically creates infrastructure resources for the OpenShift Container Platform cluster. Existing VPC option: Installing a cluster on AWS into an existing VPC, where you supply Local Zones subnets to the install-config.yaml file. steps Choose one of the following options to install an OpenShift Container Platform cluster in an AWS Local Zones environment: Installing a cluster quickly in AWS Local Zones Installing a cluster in an existing VPC with defined Local Zone subnets 14.6. Install a cluster quickly in AWS Local Zones For OpenShift Container Platform 4.15, you can quickly install a cluster on Amazon Web Services (AWS) to extend compute nodes to Local Zones locations. By using this installation route, the installation program automatically creates network resources and Local Zones subnets for each zone that you defined in your configuration file. To customize the installation, you must modify parameters in the install-config.yaml file before you deploy the cluster. 14.6.1. Modifying an installation configuration file to use AWS Local Zones Modify an install-config.yaml file to include AWS Local Zones. Prerequisites You have configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You are familiar with the configuration limitations that apply when you specify the installation program to automatically create subnets for your OpenShift Container Platform cluster. You opted in to the Local Zones group for each zone. You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml file by specifying Local Zones names in the platform.aws.zones property of the edge compute pool. # ... platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <local_zone_name> #... 1 The AWS Region name. 2 The list of Local Zones names that you use must exist in the same AWS Region specified in the platform.aws.region field. Example of a configuration to install a cluster in the us-west-2 AWS Region that extends edge nodes to Local Zones in Los Angeles and Las Vegas locations apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-las-1a pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' #... Deploy your cluster. Additional resources Creating the installation configuration file Cluster limitations in AWS Local Zones steps Deploying the cluster 14.7. Installing a cluster in an existing VPC that has Local Zone subnets You can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster. 
Installing a cluster on AWS into an existing VPC requires extending compute nodes to the edge of the Cloud Infrastructure by using AWS Local Zones. Local Zone subnets extend regular compute nodes to edge networks. Each edge compute nodes runs a user workload. After you create an Amazon Web Service (AWS) Local Zone environment, and you deploy your cluster, you can use edge compute nodes to create user workloads in Local Zone subnets. Note If you want to create private subnets, you must either modify the provided CloudFormation template or create your own template. You can use a provided CloudFormation template to create network resources. Additionally, you can modify a template to customize your infrastructure or use the information that they contain to create AWS resources according to your company's policies. Important The steps for performing an installer-provisioned infrastructure installation are provided for example purposes only. Installing a cluster in an existing VPC requires that you have knowledge of the cloud provider and the installation process of OpenShift Container Platform. You can use a CloudFormation template to assist you with completing these steps or to help model your own cluster installation. Instead of using the CloudFormation template to create resources, you can decide to use other methods for generating these resources. 14.7.1. Creating a VPC in AWS You can create a Virtual Private Cloud (VPC), and subnets for all Local Zones locations, in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to extend compute nodes to edge locations. You can further customize your VPC to meet your requirements, including a VPN and route tables. You can also add new Local Zones subnets not included at initial deployment. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You opted in to the AWS Local Zones on your AWS account. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "3" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Go to the section of the documentation named "CloudFormation template for the VPC", and then copy the syntax from the provided template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path and the name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster. VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. PublicRouteTableId The ID of the new public route table ID. 14.7.2. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 14.2. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] 14.7.3. Creating subnets in Local Zones Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create the subnets in Local Zones. Complete the following procedure for each Local Zone that you want to deploy compute nodes to. You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You opted in to the Local Zones group. 
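Before running the create-stack command in the following procedure, it can help to export the values that the command references. The following is a minimal sketch with placeholder values only; the variable names mirror the ones used in the command, and every value must be replaced with the output of your own VPC stack and your own CIDR plan:

export CLUSTER_REGION="us-west-2"               # AWS Region of the cluster
export CLUSTER_NAME="cluster-name"              # prefix for the new AWS resource names
export VPC_ID="vpc-0123456789abcdef0"           # VpcId output of the VPC CloudFormation stack
export ZONE_NAME="us-west-2-lax-1a"             # Local Zone in which to create the subnets
export ROUTE_TABLE_PUB="rtb-0123456789abcdef0"  # PublicRouteTableId output of the VPC stack
export SUBNET_CIDR_PUB="10.0.128.0/20"          # public subnet CIDR, inside the VPC CIDR block
export ROUTE_TABLE_PVT="rtb-0fedcba9876543210"  # a PrivateRouteTableId value from the VPC stack output
export SUBNET_CIDR_PVT="10.0.144.0/20"          # private subnet CIDR, inside the VPC CIDR block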
Procedure Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the subnets that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the subnets: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="USD{VPC_ID}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{CLUSTER_NAME}" \ 4 ParameterKey=ZoneName,ParameterValue="USD{ZONE_NAME}" \ 5 ParameterKey=PublicRouteTableId,ParameterValue="USD{ROUTE_TABLE_PUB}" \ 6 ParameterKey=PublicSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PUB}" \ 7 ParameterKey=PrivateRouteTableId,ParameterValue="USD{ROUTE_TABLE_PVT}" \ 8 ParameterKey=PrivateSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PVT}" 9 1 <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<local_zone_shortname> . You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 USD{VPC_ID} is the VPC ID, which is the value VpcID in the output of the CloudFormation template for the VPC. 4 USD{CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names. 5 USD{ZONE_NAME} is the name of the Local Zone in which to create the subnets, such as us-west-2-lax-1a. 6 USD{ROUTE_TABLE_PUB} is the PublicRouteTableId extracted from the output of the VPC's CloudFormation stack. 7 USD{SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr . 8 USD{ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack. 9 USD{SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr . Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f Verification Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create resources for your cluster. PublicSubnetId The ID of the public subnet created by the CloudFormation stack. PrivateSubnetId The ID of the private subnet created by the CloudFormation stack. 14.7.4. CloudFormation template for the VPC subnet You can use the following CloudFormation template to deploy the private and public subnets in a zone on Local Zones infrastructure. Example 14.3. CloudFormation template for VPC subnets AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a.
Type: String AllowedPattern: ".+" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join ["", [!Ref PrivateSubnet]] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 14.7.5. Modifying an installation configuration file to use AWS Local Zones subnets Modify your install-config.yaml file to include Local Zones subnets. Prerequisites You created subnets by using the procedure "Creating subnets in Local Zones". You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml configuration file by specifying Local Zones subnets in the platform.aws.subnets parameter. Example installation configuration file with Local Zones subnets # ... platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicSubnetId-LocalZone-1 # ... 1 List of subnet IDs created in the zones: Availability and Local Zones. Additional resources For more information about viewing the CloudFormation stacks that you created, see AWS CloudFormation console . For more information about AWS profile and credential configuration, see Configuration and credential file settings in the AWS documentation. steps Deploying the cluster 14.8. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. 
The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Edge compute pools and AWS Local Zones". 14.9. Optional: Assign public IP addresses to edge compute nodes If your workload requires deploying the edge compute nodes in public subnets on Local Zones infrastructure, you can configure the machine set manifests when installing a cluster. AWS Local Zones infrastructure handles network traffic within a specified zone, so applications can take advantage of lower latency when serving end users that are closer to that zone. The default setting that deploys compute nodes in private subnets might not meet your needs, so consider creating edge compute nodes in public subnets when you want to apply more customization to your infrastructure. Important By default, OpenShift Container Platform deploys the compute nodes in private subnets. For best performance, consider placing compute nodes in subnets that have public IP addresses attached. You must create additional security groups, but ensure that you open the groups' rules to the internet only when you really need to. Procedure Change to the directory that contains the installation program and generate the manifest files. Ensure that the installation manifests are created at the openshift and manifests directory level. USD ./openshift-install create manifests --dir <installation_directory> Edit the machine set manifest that the installation program generates for the Local Zones so that the machines are deployed in public subnets. Specify true for the spec.template.spec.providerSpec.value.publicIp parameter. Example machine set manifest configuration for installing a cluster quickly in Local Zones spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME} Example machine set manifest configuration for installing a cluster in an existing VPC that has Local Zones subnets apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true 14.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
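As a quick, non-exhaustive sanity check of the AWS credentials and region that the installation program will pick up (this does not validate the full permission set that the installer requires), you can run the following standard AWS CLI commands.
# Show the account and principal behind the active credentials.
aws sts get-caller-identity
# Show the default region configured for the active CLI profile.
aws configure get region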
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 14.11. Verifying the status of the deployed cluster Verify that your OpenShift Container Platform successfully deployed on AWS Local Zones. 14.11.1. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.11.2. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. 
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources For more information about accessing and understanding the OpenShift Container Platform web console, see Accessing the web console . 14.11.3. Verifying nodes that were created with edge compute pool After you install a cluster that uses AWS Local Zones infrastructure, check the status of the machines that were created by the machine set manifests created during installation. To check the machine sets created from the subnet you added to the install-config.yaml file, run the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-nyc-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m To check the machines that were created from the machine sets, run the following command: USD oc get machines -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-nyc-1a-wbclh Running c5d.2xlarge us-east-1 us-east-1-nyc-1a 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h To check nodes with edge roles, run the following command: USD oc get nodes -l node-role.kubernetes.io/edge Example output NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f 14.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources For more information about the Telemetry service, see About remote health monitoring . steps Validating an installation . If necessary, you can opt out of remote health reporting .
[ "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }", "aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=local-zone --all-availability-zones", "aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-phx-2a rootVolume: type: gp3 size: 120 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: edge-zone networking: clusterNetworkMTU: 8901 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <local_zone_name> #", "apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-las-1a pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...' 
#", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"3\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: 
\"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]", "aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" 9", "arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f", "aws cloudformation describe-stacks --stack-name <stack_name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. 
Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public\", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"private\", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]", "platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicSubnetId-LocalZone-1", "./openshift-install create manifests --dir <installation_directory>", "spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME}", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-nyc-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m", "oc get machines -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-nyc-1a-wbclh Running c5d.2xlarge us-east-1 us-east-1-nyc-1a 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h", "oc get nodes -l node-role.kubernetes.io/edge", "NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_aws/installing-aws-localzone
Chapter 24. Providing a custom class to your business application in Business Central
Chapter 24. Providing a custom class to your business application in Business Central To interact with Red Hat AMQ Streams, your business application requires a custom class in the following cases: You want to use a custom message format for sending or receiving messages using message events. You want to use a custom serializer class for the KafkaPublishMessages custom task. To use a custom class in your business application, use Business Central to upload the source code and configure the class. Alternatively, if you deploy your application on SpringBoot, you can compile the classes separately and include them in the class path. In this case, do not complete this procedure. Prerequisites You are logged in to Business Central and have permission to edit business processes. You created a project for your business process. Procedure Prepare Java source files with the required custom classes, for example, MyCustomSerializer . Use the package name for your space and project, for example, com.myspace.test . In Business Central, enter your project and click the Settings Dependencies tab. In the Dependencies field, add dependencies that your custom classes require, for example, org.apache.kafka.kafka-clients , as a comma-separated list. Click the Assets tab. For each of the class source files, complete the following steps: Click Import Asset . In the Please select a file to upload field, select the location of the Java source file for the custom serializer class. Click Ok .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/custom-class-provide-proc_integrating-amq-streams
Chapter 1. Prerequisites
Chapter 1. Prerequisites Red Hat Enterprise Linux (RHEL) 9 To obtain the latest version of Red Hat Enterprise Linux (RHEL) 9, see Download Red Hat Enterprise Linux . For installation instructions, see the Product Documentation for Red Hat Enterprise Linux 9 . An active subscription to Red Hat Two or more virtual CPUs 4 GB or more of RAM Approximately 30 GB of disk space on your test system, which can be broken down as follows: Approximately 10 GB of disk space for the Red Hat Enterprise Linux (RHEL) operating system. Approximately 10 GB of disk space for Docker storage for running three containers. Approximately 10 GB of disk space for Red Hat Quay local storage. Note CEPH or other local storage might require more memory. More information on sizing can be found at Quay 3.x Sizing Guidelines . The following architectures are supported for Red Hat Quay: amd64/x86_64 s390x ppc64le 1.1. Installing Podman This document uses Podman for creating and deploying containers. For more information on Podman and related technologies, see Building, running, and managing Linux containers on Red Hat Enterprise Linux 9 . Important If you do not have Podman installed on your system, the use of equivalent Docker commands might be possible, however this is not recommended. Docker has not been tested with Red Hat Quay 3, and will be deprecated in a future release. Podman is recommended for highly available, production quality deployments of Red Hat Quay 3. Use the following procedure to install Podman. Procedure Enter the following command to install Podman: USD sudo yum install -y podman Alternatively, you can install the container-tools module, which pulls in the full set of container software packages: USD sudo yum module install -y container-tools
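Before moving on, it can help to confirm that the test system actually meets the sizing prerequisites listed at the start of this chapter. The following is a quick, informal check using standard RHEL utilities; the thresholds in the comments restate the minimums above.
nproc                      # number of virtual CPUs (two or more)
free -h                    # total RAM (4 GB or more)
df -h /                    # free disk space (roughly 30 GB for OS, container storage, and Quay local storage)
cat /etc/redhat-release    # confirm the RHEL release in use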
[ "sudo yum install -y podman", "sudo yum module install -y container-tools" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/proof_of_concept_-_deploying_red_hat_quay/poc-prerequisites
Chapter 3. Targeted Policy
Chapter 3. Targeted Policy Targeted policy is the default SELinux policy used in Red Hat Enterprise Linux. When using targeted policy, processes that are targeted run in a confined domain, and processes that are not targeted run in an unconfined domain. For example, by default, logged-in users run in the unconfined_t domain, and system processes started by init run in the unconfined_service_t domain; both of these domains are unconfined. Executable and writable memory checks may apply to both confined and unconfined domains. However, by default, subjects running in an unconfined domain can allocate writable memory and execute it. These memory checks can be enabled by setting Booleans, which allow the SELinux policy to be modified at runtime. Boolean configuration is discussed later. 3.1. Confined Processes Almost every service that listens on a network, such as sshd or httpd , is confined in Red Hat Enterprise Linux. Also, most processes that run as the root user and perform tasks for users, such as the passwd utility, are confined. When a process is confined, it runs in its own domain, such as the httpd process running in the httpd_t domain. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do is limited. Complete this procedure to ensure that SELinux is enabled and the system is prepared to perform the following example: Procedure 3.1. How to Verify SELinux Status Confirm that SELinux is enabled, is running in enforcing mode, and that targeted policy is being used. The correct output should look similar to the output below: See Section 4.4, "Permanent Changes in SELinux States and Modes" for detailed information about changing SELinux modes. As root, create a file in the /var/www/html/ directory: Enter the following command to view the SELinux context of the newly created file: By default, Linux users run unconfined in Red Hat Enterprise Linux, which is why the testfile file is labeled with the SELinux unconfined_u user. RBAC is used for processes, not files. Roles do not have a meaning for files; the object_r role is a generic role used for files (on persistent storage and network file systems). Under the /proc directory, files related to processes may use the system_r role. The httpd_sys_content_t type allows the httpd process to access this file. The following example demonstrates how SELinux prevents the Apache HTTP Server ( httpd ) from reading files that are not correctly labeled, such as files intended for use by Samba. This is an example, and should not be used in production. It assumes that the httpd and wget packages are installed, the SELinux targeted policy is used, and that SELinux is running in enforcing mode. Procedure 3.2. An Example of Confined Process As root, start the httpd daemon: Confirm that the service is running. The output should include the information below (only the time stamp will differ): Change into a directory where your Linux user has write access to, and enter the following command. Unless there are changes to the default configuration, this command succeeds: The chcon command relabels files; however, such label changes do not survive when the file system is relabeled. For permanent changes that survive a file system relabel, use the semanage utility, which is discussed later. 
As root, enter the following command to change the type to a type used by Samba: Enter the following command to view the changes: Note that the current DAC permissions allow the httpd process access to testfile . Change into a directory where your user has write access to, and enter the following command. Unless there are changes to the default configuration, this command fails: As root, remove testfile : If you do not require httpd to be running, as root, enter the following command to stop it: This example demonstrates the additional security added by SELinux. Although DAC rules allowed the httpd process access to testfile in step 2, because the file was labeled with a type that the httpd process does not have access to, SELinux denied access. If the auditd daemon is running, an error similar to the following is logged to /var/log/audit/audit.log : Also, an error similar to the following is logged to /var/log/httpd/error_log :
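Beyond the example above, two follow-up tasks often come up: reviewing the logged denials and undoing or persisting a label change. The following sketch shows common commands for both; it assumes the auditd daemon is running and that the semanage utility (provided by the policycoreutils-python package on Red Hat Enterprise Linux 7) is installed.
# Search recent AVC denials recorded by auditd.
ausearch -m AVC -ts recent
# Restore the default context defined by the policy (httpd_sys_content_t under /var/www/html).
restorecon -v /var/www/html/testfile
# Alternatively, record a persistent rule that survives a full relabel, then apply it.
semanage fcontext -a -t samba_share_t "/var/www/html/testfile"
restorecon -v /var/www/html/testfile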
[ "~]USD sestatus SELinux status: enabled SELinuxfs mount: /sys/fs/selinux SELinux root directory: /etc/selinux Loaded policy name: targeted Current mode: enforcing Mode from config file: enforcing Policy MLS status: enabled Policy deny_unknown status: allowed Max kernel policy version: 30", "~]# touch /var/www/html/testfile", "~]USD ls -Z /var/www/html/testfile -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/testfile", "~]# systemctl start httpd.service", "~]USD systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: active (running) since Mon 2013-08-05 14:00:55 CEST; 8s ago", "~]USD wget http://localhost/testfile --2009-11-06 17:43:01-- http://localhost/testfile Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 0 [text/plain] Saving to: `testfile' [ <=> ] 0 --.-K/s in 0s 2009-11-06 17:43:01 (0.00 B/s) - `testfile' saved [0/0]", "~]# chcon -t samba_share_t /var/www/html/testfile", "~]USD ls -Z /var/www/html/testfile -rw-r--r-- root root unconfined_u:object_r:samba_share_t:s0 /var/www/html/testfile", "~]USD wget http://localhost/testfile --2009-11-06 14:11:23-- http://localhost/testfile Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 403 Forbidden 2009-11-06 14:11:23 ERROR 403: Forbidden.", "~]# rm -i /var/www/html/testfile", "~]# systemctl stop httpd.service", "type=AVC msg=audit(1220706212.937:70): avc: denied { getattr } for pid=1904 comm=\"httpd\" path=\"/var/www/html/testfile\" dev=sda5 ino=247576 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:samba_share_t:s0 tclass=file type=SYSCALL msg=audit(1220706212.937:70): arch=40000003 syscall=196 success=no exit=-13 a0=b9e21da0 a1=bf9581dc a2=555ff4 a3=2008171 items=0 ppid=1902 pid=1904 auid=500 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=1 comm=\"httpd\" exe=\"/usr/sbin/httpd\" subj=unconfined_u:system_r:httpd_t:s0 key=(null)", "[Wed May 06 23:00:54 2009] [error] [client 127.0.0.1 ] (13)Permission denied: access to /testfile denied" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-Security-Enhanced_Linux-Targeted_Policy
Chapter 5. Encryption token is deleted or expired
Chapter 5. Encryption token is deleted or expired Use this procedure to update the token if the encryption token for your key management system gets deleted or expires. Prerequisites Ensure that you have a new token with the same policy as the deleted or expired token Procedure Log in to OpenShift Container Platform Web Console. Click Workloads Secrets To update the ocs-kms-token used for cluster wide encryption: Set the Project to openshift-storage . Click ocs-kms-token Actions Edit Secret . Drag and drop or upload your encryption token file in the Value field. The token can either be a file or text that can be copied and pasted. Click Save . To update the ceph-csi-kms-token for a given project or namespace with encrypted persistent volumes: Select the required Project . Click ceph-csi-kms-token Actions Edit Secret . Drag and drop or upload your encryption token file in the Value field. The token can either be a file or text that can be copied and pasted. Click Save . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/troubleshooting_openshift_data_foundation/encryption-token-is-deleted-or-expired_rhodf
7.101. lftp
7.101. lftp 7.101.1. RHBA-2015:0793 - lftp bug fix update Updated lftp packages that fix several bugs are now available for Red Hat Enterprise Linux 6. LFTP is a file transfer utility for File Transfer Protocol (FTP), Secure Shell File Transfer Protocol (SFTP), Hypertext Transfer Protocol (HTTP), and other commonly used protocols. It uses the readline library for input, and provides support for bookmarks, built-in monitoring, job control, and parallel transfer of multiple files at the same time. Bug Fixes BZ# 619777 Previously, downloaded files with duplicated names were not renamed even when the "xfer:auto-rename" and "xfer:clobber" options were enabled. To fix this bug, the condition for renaming downloaded files has been modified and they are now renamed as expected. BZ# 674875 Prior to this update, the lftp manual page did not contain information on the "xfer:auto-rename" option. The option has been documented and added to the page, where it is now available to users. BZ# 732863 Due to a bug in error checking code, lftp could fail to connect to a remote host with an IPv6 address if the local host had only IPv4 connectivity, but the remote host domain name was resolved also to IPv6 addresses. With this update, the code has been amended, and the connectivity problems no longer occur in this situation. BZ# 842322 Due to an incorrect evaluation of the length of an uploaded file, the lftp tool became unresponsive after a file transfer in ASCII mode. With this update, the volume of transferred data is recognized correctly and the lftp program no longer hangs in this scenario. BZ# 928307 When running lftp in mirror mode on a website, lftp terminated with an error in cases of HTTP 302 redirection. To fix this bug, lftp has been amended and now successfully proceeds to the new location in such situations. BZ# 1193617 With the "cmd:fail-exit" option enabled, lftp could terminate unexpectedly when any command was executed after the "help" command. With this update, the "help" command has been amended to return correct return code, and lftp no longer exits in this scenario. Users of lftp are advised to upgrade to these updated packages, which fix these bugs.
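For readers who want to try the transfer settings mentioned in the first two fixes, a minimal per-user configuration sketch is shown below; the file path and values are illustrative only.
cat > ~/.lftprc << 'EOF'
# Transfer settings discussed in the bug fixes above.
set xfer:auto-rename yes
set xfer:clobber yes
EOF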
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-lftp
4.271. rhnlib
4.271. rhnlib 4.271.1. RHBA-2011:1665 - rhnlib bug fix update An updated rhnlib package that fixes various bugs is now available for Red Hat Enterprise Linux 6. The rhnlib package consists of a collection of Python modules used by the Red Hat Network (RHN) software. Bug Fixes BZ# 688095 Due to an error in the rhnlib code, network operations would have become unresponsive when an HTTP connection to Red Hat Network (RHN) or RHN Satellite became idle. The code has been modified to use timeout for HTTP connections. Network operations are now terminated after predefined time interval and can be restarted. BZ# 730744 Prior to this update, programs that used rhnlib were not able to connect to RHN or RHN Satellite using an IPv6 address. The code has been modified to correct this issue, and rhnlib-based applications are now able to connect to RHN or RHN Satellite without any problems with IPv6 address resolution. All users of rhnlib are advised to upgrade to this updated package, which resolves these issues.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/rhnlib
Generating a custom LLM using RHEL AI
Generating a custom LLM using RHEL AI Red Hat Enterprise Linux AI 1.3 Using SDG, training, and evaluation to create a custom LLM Red Hat RHEL AI Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/generating_a_custom_llm_using_rhel_ai/index
Chapter 25. PHP (DEPRECATED)
Chapter 25. PHP (DEPRECATED) Overview PHP is a widely-used general-purpose scripting language that is especially suited for Web development. The PHP support is part of the camel-script module. Important PHP in Apache Camel is deprecated and will be removed in a future release. Adding the script module To use PHP in your routes, you need to add a dependency on camel-script to your project as shown in Example 25.1, "Adding the camel-script dependency" . Example 25.1. Adding the camel-script dependency Static import To use the php() static method in your application code, include the following import statement in your Java source files: Built-in attributes Table 25.1, "PHP attributes" lists the built-in attributes that are accessible when using PHP. Table 25.1. PHP attributes Attribute Type Value context org.apache.camel.CamelContext The Camel Context exchange org.apache.camel.Exchange The current Exchange request org.apache.camel.Message The IN message response org.apache.camel.Message The OUT message properties org.apache.camel.builder.script.PropertiesFunction Function with a resolve method to make it easier to use the properties component inside scripts. The attributes are all set at ENGINE_SCOPE . Example Example 25.2, "Route using PHP" shows a route that uses PHP. Example 25.2. Route using PHP Using the properties component To access a property value from the properties component, invoke the resolve method on the built-in properties attribute, as follows: Where PropKey is the key of the property that you want to resolve; the key value is of String type. For more details about the properties component, see Properties in the Apache Camel Component Reference Guide .
[ "<!-- Maven POM File --> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-script</artifactId> <version>USD{camel-version}</version> </dependency> </dependencies>", "import static org.apache.camel.builder.script.ScriptBuilder.*;", "<camelContext> <route> <from uri=\"direct:start\"/> <choice> <when> <language language=\"php\">strpos(request.headers.get('user'), 'admin')!== FALSE</language> <to uri=\"seda:adminQueue\"/> </when> <otherwise> <to uri=\"seda:regularQueue\"/> </otherwise> </choice> </route> </camelContext>", ".setHeader(\"myHeader\").php(\"properties.resolve( PropKey )\")" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/php
19.3.3. Fetchmail
19.3.3. Fetchmail Fetchmail is an MTA which retrieves email from remote servers and delivers it to the local MTA. Many users appreciate the ability to separate the process of downloading their messages located on a remote server from the process of reading and organizing their email in an MUA. Designed with the needs of dial-up users in mind, Fetchmail connects and quickly downloads all of the email messages to the mail spool file using any number of protocols, including POP3 and IMAP . It can even forward email messages to an SMTP server, if necessary. Note In order to use Fetchmail , first ensure the fetchmail package is installed on your system by running, as root : For more information on installing packages with Yum, see Section 8.2.4, "Installing Packages" . Fetchmail is configured for each user through the use of a .fetchmailrc file in the user's home directory. If it does not already exist, create the .fetchmailrc file in your home directory Using preferences in the .fetchmailrc file, Fetchmail checks for email on a remote server and downloads it. It then delivers it to port 25 on the local machine, using the local MTA to place the email in the correct user's spool file. If Procmail is available, it is launched to filter the email and place it in a mailbox so that it can be read by an MUA. 19.3.3.1. Fetchmail Configuration Options Although it is possible to pass all necessary options on the command line to check for email on a remote server when executing Fetchmail, using a .fetchmailrc file is much easier. Place any desired configuration options in the .fetchmailrc file for those options to be used each time the fetchmail command is issued. It is possible to override these at the time Fetchmail is run by specifying that option on the command line. A user's .fetchmailrc file contains three classes of configuration options: global options - Gives Fetchmail instructions that control the operation of the program or provide settings for every connection that checks for email. server options - Specifies necessary information about the server being polled, such as the host name, as well as preferences for specific email servers, such as the port to check or number of seconds to wait before timing out. These options affect every user using that server. user options - Contains information, such as user name and password, necessary to authenticate and check for email using a specified email server. Global options appear at the top of the .fetchmailrc file, followed by one or more server options, each of which designate a different email server that Fetchmail should check. User options follow server options for each user account checking that email server. Like server options, multiple user options may be specified for use with a particular server as well as to check multiple email accounts on the same server. Server options are called into service in the .fetchmailrc file by the use of a special option verb, poll or skip , that precedes any of the server information. The poll action tells Fetchmail to use this server option when it is run, which checks for email using the specified user options. Any server options after a skip action, however, are not checked unless this server's host name is specified when Fetchmail is invoked. The skip option is useful when testing configurations in the .fetchmailrc file because it only checks skipped servers when specifically invoked, and does not affect any currently working configurations. 
The following is an example of a .fetchmailrc file: In this example, the global options specify that the user is sent email as a last resort ( postmaster option) and all email errors are sent to the postmaster instead of the sender ( bouncemail option). The set action tells Fetchmail that this line contains a global option. Then, two email servers are specified, one set to check using POP3 , the other for trying various protocols to find one that works. Two users are checked using the second server option, but all email found for any user is sent to user1 's mail spool. This allows multiple mailboxes to be checked on multiple servers, while appearing in a single MUA inbox. Each user's specific information begins with the user action. Note Users are not required to place their password in the .fetchmailrc file. Omitting the with password '<password>' section causes Fetchmail to ask for a password when it is launched. Fetchmail has numerous global, server, and local options. Many of these options are rarely used or only apply to very specific situations. The fetchmail man page explains each option in detail, but the most common ones are listed in the following three sections. 19.3.3.2. Global Options Each global option should be placed on a single line after a set action. daemon seconds - Specifies daemon-mode, where Fetchmail stays in the background. Replace seconds with the number of seconds Fetchmail is to wait before polling the server. postmaster - Specifies a local user to send mail to in case of delivery problems. syslog - Specifies the log file for errors and status messages. By default, this is /var/log/maillog . 19.3.3.3. Server Options Server options must be placed on their own line in .fetchmailrc after a poll or skip action. auth auth-type - Replace auth-type with the type of authentication to be used. By default, password authentication is used, but some protocols support other types of authentication, including kerberos_v5 , kerberos_v4 , and ssh . If the any authentication type is used, Fetchmail first tries methods that do not require a password, then methods that mask the password, and finally attempts to send the password unencrypted to authenticate to the server. interval number - Polls the specified server every number of times that it checks for email on all configured servers. This option is generally used for email servers where the user rarely receives messages. port port-number - Replace port-number with the port number. This value overrides the default port number for the specified protocol. proto protocol - Replace protocol with the protocol, such as pop3 or imap , to use when checking for messages on the server. timeout seconds - Replace seconds with the number of seconds of server inactivity after which Fetchmail gives up on a connection attempt. If this value is not set, a default of 300 seconds is used. 19.3.3.4. User Options User options may be placed on their own lines beneath a server option or on the same line as the server option. In either case, the defined options must follow the user option (defined below). fetchall - Orders Fetchmail to download all messages in the queue, including messages that have already been viewed. By default, Fetchmail only pulls down new messages. fetchlimit number - Replace number with the number of messages to be retrieved before stopping. flush - Deletes all previously viewed messages in the queue before retrieving new messages. 
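The .fetchmailrc example referred to at the beginning of this section is not reproduced in this text. A minimal sketch consistent with that description is shown below; the host names, user names, and passwords are placeholders, and the comments restate the intended behavior rather than quote the original file. It is also common practice to restrict the file's permissions, because Fetchmail typically refuses to run if the file is readable by other users.
cat > ~/.fetchmailrc << 'EOF'
# Global options: deliver otherwise-undeliverable mail to user1 and send error
# summaries to the postmaster instead of bouncing them to the sender.
set postmaster "user1"
set no bouncemail

# First server: checked with POP3 only.
poll pop.example.com proto pop3
    user 'user1' there with password 'secret1' is user1 here

# Second server: probe for a working protocol; both remote accounts deliver to user1's local spool.
poll mail.example2.com proto auto
    user 'user5' there with password 'secret2' is user1 here
    user 'user7' there with password 'secret3' is user1 here
EOF
chmod 600 ~/.fetchmailrc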
limit max-number-bytes - Replace max-number-bytes with the maximum size in bytes that messages are allowed to be when retrieved by Fetchmail. This option is useful with slow network links, when a large message takes too long to download. password ' password ' - Replace password with the user's password. preconnect " command " - Replace command with a command to be executed before retrieving messages for the user. postconnect " command " - Replace command with a command to be executed after retrieving messages for the user. ssl - Activates SSL encryption. At the time of writing, the default action is to use the best available from SSL2 , SSL3 , SSL23 , TLS1 , TLS1.1 and TLS1.2 . Note that SSL2 is considered obsolete and due to the POODLE: SSLv3 vulnerability (CVE-2014-3566) , SSLv3 should not be used. However, there is no way to force the use of TLS1 or newer; therefore, ensure the mail server being connected to is configured not to use SSLv2 and SSLv3 . Use stunnel where the server cannot be configured not to use SSLv2 and SSLv3 . sslproto - Defines allowed SSL or TLS protocols. Possible values are SSL2 , SSL3 , SSL23 , and TLS1 . The default value, if sslproto is omitted, unset, or set to an invalid value, is SSL23 . The default action is to use the best from SSLv3 , TLSv1 , TLS1.1 and TLS1.2 . Note that setting any other value for SSL or TLS will disable all the other protocols. Due to the POODLE: SSLv3 vulnerability (CVE-2014-3566) , it is recommended to omit this option, or set it to SSL23 , and configure the corresponding mail server not to use SSLv2 and SSLv3 . Use stunnel where the server cannot be configured not to use SSLv2 and SSLv3 . user " username " - Replace username with the username used by Fetchmail to retrieve messages. This option must precede all other user options. 19.3.3.5. Fetchmail Command Options Most Fetchmail options used on the command line when executing the fetchmail command mirror the .fetchmailrc configuration options. In this way, Fetchmail may be used with or without a configuration file. These options are not used on the command line by most users because it is easier to leave them in the .fetchmailrc file. There may be times when it is desirable to run the fetchmail command with other options for a particular purpose. It is possible to issue command options to temporarily override a .fetchmailrc setting that is causing an error, as any options specified at the command line override configuration file options. 19.3.3.6. Informational or Debugging Options Certain options used after the fetchmail command can supply important information. --configdump - Displays every possible option based on information from .fetchmailrc and Fetchmail defaults. No email is retrieved for any users when using this option. -s - Executes Fetchmail in silent mode, preventing any messages, other than errors, from appearing after the fetchmail command. -v - Executes Fetchmail in verbose mode, displaying every communication between Fetchmail and remote email servers. -V - Displays detailed version information, lists its global options, and shows settings to be used with each user, including the email protocol and authentication method. No email is retrieved for any users when using this option. 19.3.3.7. Special Options These options are occasionally useful for overriding defaults often found in the .fetchmailrc file. -a - Fetchmail downloads all messages from the remote email server, whether new or previously viewed. By default, Fetchmail only downloads new messages.
-k - Fetchmail leaves the messages on the remote email server after downloading them. This option overrides the default behavior of deleting messages after downloading them. -l max-number-bytes - Fetchmail does not download any messages over a particular size and leaves them on the remote email server. --quit - Quits the Fetchmail daemon process. More commands and .fetchmailrc options can be found in the fetchmail man page.
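As an illustration of how these command-line options are typically combined, a few representative invocations are shown below; the host name is a placeholder and the flags are the ones described in the preceding sections:

# check every server marked with poll, showing each exchange with the remote servers
fetchmail -v
# print the effective configuration without retrieving any email
fetchmail --configdump
# download all messages from one server, new or previously viewed, and leave copies on the server
fetchmail -a -k mail.example.com
# stop a running Fetchmail daemon
fetchmail --quit

Because options given on the command line override the corresponding .fetchmailrc settings, invocations such as these are convenient for testing a configuration before committing it to the file.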
[ "~]# yum install fetchmail", "set postmaster \"user1\" set bouncemail poll pop.domain.com proto pop3 user 'user1' there with password 'secret' is user1 here poll mail.domain2.com user 'user5' there with password 'secret2' is user1 here user 'user7' there with password 'secret3' is user1 here" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-email-mta-fetchmail
Chapter 4. Accessing the registry
Chapter 4. Accessing the registry Use the following sections for instructions on accessing the registry, including viewing logs and metrics, as well as securing and exposing the registry. You can access the registry directly to invoke podman commands. This allows you to push images to or pull them from the integrated registry directly using operations like podman push or podman pull . To do so, you must be logged in to the registry using the podman login command. The operations you can perform depend on your user permissions, as described in the following sections. 4.1. Prerequisites You must have configured an identity provider (IDP). For pulling images, for example when using the podman pull command, the user must have the registry-viewer role. To add this role, run the following command: USD oc policy add-role-to-user registry-viewer <user_name> For writing or pushing images, for example when using the podman push command: The user must have the registry-editor role. To add this role, run the following command: USD oc policy add-role-to-user registry-editor <user_name> Your cluster must have an existing project where the images can be pushed to. 4.2. Accessing registry directly from the cluster You can access the registry from inside the cluster. Procedure Access the registry from the cluster by using internal routes: Access the node by getting the node's name: USD oc get nodes USD oc debug nodes/<node_name> To enable access to tools such as oc and podman on the node, change your root directory to /host : sh-4.2# chroot /host Log in to the container image registry by using your access token: sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443 sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000 You should see a message confirming login, such as: Login Succeeded! Note You can pass any value for the user name; the token contains all necessary information. Passing a user name that contains colons will result in a login failure. Since the Image Registry Operator creates the route, it will likely be similar to default-route-openshift-image-registry.<cluster_name> . Perform podman pull and podman push operations against your registry: Important You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project. In the following examples, use: Component Value <registry_ip> 172.30.124.220 <port> 5000 <project> openshift <image> image <tag> omitted (defaults to latest ) Pull an arbitrary image: sh-4.2# podman pull <name.io>/<image> Tag the new image with the form <registry_ip>:<port>/<project>/<image> . The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry: sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image> Note You must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the podman push in the step will fail. To test, you can create a new project to push the image. Push the newly tagged image to your registry: sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image> 4.3. Checking the status of the registry pods As a cluster administrator, you can list the image registry pods running in the openshift-image-registry project and check their status. 
Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure List the pods in the openshift-image-registry project and view their status: USD oc get pods -n openshift-image-registry Example output NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m 4.4. Viewing registry logs You can view the logs for the registry by using the oc logs command. Procedure Use the oc logs command with deployments to view the logs for the container image registry: USD oc logs deployments/image-registry -n openshift-image-registry Example output 2015-05-01T19:48:36.300593110Z time="2015-05-01T19:48:36Z" level=info msg="version=v2.0.0+unknown" 2015-05-01T19:48:36.303294724Z time="2015-05-01T19:48:36Z" level=info msg="redis not configured" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time="2015-05-01T19:48:36Z" level=info msg="using inmemory layerinfo cache" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time="2015-05-01T19:48:36Z" level=info msg="Using OpenShift Auth handler" 2015-05-01T19:48:36.303439084Z time="2015-05-01T19:48:36Z" level=info msg="listening on :5000" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 4.5. Accessing registry metrics The OpenShift Container Registry provides an endpoint for Prometheus metrics . Prometheus is a stand-alone, open source systems monitoring and alerting toolkit. The metrics are exposed at the /extensions/v2/metrics path of the registry endpoint. Procedure You can access the metrics by running a metrics query using a cluster role. Cluster role Create a cluster role if you do not already have one to access the metrics: USD cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF Add this role to a user, run the following command: USD oc adm policy add-cluster-role-to-user prometheus-scraper <username> Metrics query Get the user token. openshift: USD oc whoami -t Run a metrics query in node or inside a pod, for example: USD curl --insecure -s -u <user>:<secret> \ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20 Example output # HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. # TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit="9f72191",gitVersion="v3.11.0+9f72191-135-dirty",major="3",minor="11+"} 1 # HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. # TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type="Hit"} 5 imageregistry_digest_cache_requests_total{type="Miss"} 24 # HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. # TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type="Hit"} 33 imageregistry_digest_cache_scoped_requests_total{type="Miss"} 44 # HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. 
# TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 # HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. # TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method="get",quantile="0.5"} 0.01296087 imageregistry_http_request_duration_seconds{method="get",quantile="0.9"} 0.014847248 imageregistry_http_request_duration_seconds{method="get",quantile="0.99"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method="get"} 12.260727916000022 1 The <user> object can be arbitrary, but <secret> tag must use the user token. 4.6. Additional resources For more information on allowing pods in a project to reference images in another project, see Allowing pods to reference images across projects . A kubeadmin can access the registry until deleted. See Removing the kubeadmin user for more information. For more information on configuring an identity provider, see Understanding identity provider configuration .
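As a convenience, the metrics steps from Section 4.5 can be strung together into a single check. The following is a minimal sketch rather than part of the documented procedure: it assumes the prometheus-scraper cluster role shown earlier has already been created and granted to the currently logged-in user, and it uses the default internal registry service host name:

# the user name portion of -u is arbitrary; the token must belong to a user with the prometheus-scraper role
TOKEN=$(oc whoami -t)
curl --insecure -s -u metrics:"$TOKEN" \
  https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics \
  | grep imageregistry_http_in_flight_requests

A non-empty imageregistry_http_in_flight_requests result indicates that the registry is exposing the metrics endpoint and that the token was accepted.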
[ "oc policy add-role-to-user registry-viewer <user_name>", "oc policy add-role-to-user registry-editor <user_name>", "oc get nodes", "oc debug nodes/<node_name>", "sh-4.2# chroot /host", "sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443", "sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000", "Login Succeeded!", "sh-4.2# podman pull <name.io>/<image>", "sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image>", "sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image>", "oc get pods -n openshift-image-registry", "NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m", "oc logs deployments/image-registry -n openshift-image-registry", "2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002", "cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF", "oc adm policy add-cluster-role-to-user prometheus-scraper <username>", "openshift: oc whoami -t", "curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20", "HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. 
TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/registry/accessing-the-registry
Chapter 19. Managing cloud provider credentials
Chapter 19. Managing cloud provider credentials 19.1. About the Cloud Credential Operator The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. 19.1.1. Modes By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in mint , passthrough , or manual mode. These options provide transparency and flexibility in how the CCO uses cloud credentials to process CredentialsRequest CRs in the cluster, and allow the CCO to be configured to suit the security requirements of your organization. Not all CCO modes are supported for all cloud providers. Mint : In mint mode, the CCO uses the provided admin-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. Passthrough : In passthrough mode, the CCO passes the provided cloud credential to the components that request cloud credentials. Manual : In manual mode, a user manages cloud credentials instead of the CCO. Manual with AWS Security Token Service : In manual mode, you can configure an AWS cluster to use Amazon Web Services Security Token Service (AWS STS). With this configuration, the CCO uses temporary credentials for different components. Manual with GCP Workload Identity : In manual mode, you can configure a GCP cluster to use GCP Workload Identity. With this configuration, the CCO uses temporary credentials for different components. Table 19.1. CCO mode support matrix (supported modes per cloud provider)
Alibaba Cloud: Manual
Amazon Web Services (AWS): Mint, Passthrough, Manual
Microsoft Azure: Passthrough [1], Manual
Google Cloud Platform (GCP): Mint, Passthrough, Manual
IBM Cloud: Manual
Nutanix: Manual
Red Hat OpenStack Platform (RHOSP): Passthrough
Red Hat Virtualization (RHV): Passthrough
VMware vSphere: Passthrough
[1] Manual mode is the only supported CCO configuration for Microsoft Azure Stack Hub.
19.1.2. Determining the Cloud Credential Operator mode For platforms that support using the CCO in multiple modes, you can determine what mode the CCO is configured to use by using the web console or the CLI. Figure 19.1. Determining the CCO configuration 19.1.2.1. Determining the Cloud Credential Operator mode by using the web console You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select CloudCredential . On the CloudCredential details page, select the YAML tab. In the YAML block, check the value of spec.credentialsMode.
The following values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use the default ( '' ) only: To determine whether the cluster is operating in mint or passthrough mode, inspect the annotations on the cluster root secret: Navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials To view the CCO mode that the cluster is using, click 1 annotation under Annotations , and check the value field. The following values are possible: Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials If you see one of these values, your cluster is using mint or passthrough mode with the root secret present. If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select Authentication . On the Authentication details page, select the YAML tab. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter. A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility. An empty value ( '' ) indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. 19.1.2.2. Determining the Cloud Credential Operator mode by using the CLI You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. 
Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. To determine the mode that the CCO is configured to use, enter the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use the default ( '' ) only: To determine whether the cluster is operating in mint or passthrough mode, run the following command: USD oc get secret <secret_name> \ -n kube-system \ -o jsonpath \ --template '{ .metadata.annotations }' where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. This command displays the value of the .metadata.annotations parameter in the cluster root secret object. The following output values are possible: Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command: USD oc get secret <secret_name> \ -n=kube-system where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility. An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. 19.1.3. Default behavior For platforms on which multiple modes are supported (AWS, Azure, and GCP), when the CCO operates in its default mode, it checks the provided credentials dynamically to determine for which mode they are sufficient to process CredentialsRequest CRs. 
By default, the CCO determines whether the credentials are sufficient for mint mode, which is the preferred mode of operation, and uses those credentials to create appropriate credentials for components in the cluster. If the credentials are not sufficient for mint mode, it determines whether they are sufficient for passthrough mode. If the credentials are not sufficient for passthrough mode, the CCO cannot adequately process CredentialsRequest CRs. If the provided credentials are determined to be insufficient during installation, the installation fails. For AWS, the installer fails early in the process and indicates which required permissions are missing. Other providers might not provide specific information about the cause of the error until errors are encountered. If the credentials are changed after a successful installation and the CCO determines that the new credentials are insufficient, the CCO puts conditions on any new CredentialsRequest CRs to indicate that it cannot process them because of the insufficient credentials. To resolve insufficient credentials issues, provide a credential with sufficient permissions. If an error occurred during installation, try installing again. For issues with new CredentialsRequest CRs, wait for the CCO to try to process the CR again. As an alternative, you can manually create IAM for AWS , Azure , and GCP . 19.1.4. Additional resources Cluster Operators reference page for the Cloud Credential Operator 19.2. Using mint mode Mint mode is supported for Amazon Web Services (AWS) and Google Cloud Platform (GCP). Mint mode is the default mode on the platforms for which it is supported. In this mode, the Cloud Credential Operator (CCO) uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. If the credential is not removed after installation, it is stored and used by the CCO to process CredentialsRequest CRs for components in the cluster and create new credentials for each with only the specific permissions that are required. The continuous reconciliation of cloud credentials in mint mode allows actions that require additional credentials or permissions, such as upgrading, to proceed. Mint mode stores the administrator-level credential in the cluster kube-system namespace. If this approach does not meet the security requirements of your organization, see Alternatives to storing administrator-level secrets in the kube-system project for AWS or GCP . 19.2.1. Mint mode permissions requirements When using the CCO in mint mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials are not sufficient for mint mode, the CCO cannot create an IAM user. 19.2.1.1. Amazon Web Services (AWS) permissions The credential you provide for mint mode in AWS must have the following permissions: iam:CreateAccessKey iam:CreateUser iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:GetUser iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser iam:SimulatePrincipalPolicy 19.2.1.2. 
Google Cloud Platform (GCP) permissions The credential you provide for mint mode in GCP must have the following permissions: resourcemanager.projects.get serviceusage.services.list iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.roles.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy 19.2.2. Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> 19.2.3. Mint mode with removal or rotation of the administrator-level credential Currently, this mode is only supported on AWS and GCP. In this mode, a user installs OpenShift Container Platform with an administrator-level credential just like the normal mint mode. However, this process removes the administrator-level credential secret from the cluster post-installation. The administrator can have the Cloud Credential Operator make its own request for a read-only credential that allows it to verify if all CredentialsRequest objects have their required permissions, thus the administrator-level credential is not required unless something needs to be changed. After the associated credential is removed, it can be deleted or deactivated on the underlying cloud, if desired. Note Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked. The administrator-level credential is not stored in the cluster permanently. Following these steps still requires the administrator-level credential in the cluster for brief periods of time. It also requires manually re-instating the secret with administrator-level credentials for each upgrade. 19.2.3.1. Rotating cloud provider credentials manually If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. 
Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . Delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Get the names and namespaces of all referenced component secrets: USD oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec GCP: GCPProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } Delete each of the referenced component secrets: USD oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret. 2 Specify the namespace that contains the secret. Example deletion of an AWS secret USD oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. 19.2.3.2. Removing cloud provider credentials After installing an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The administrator-level credential is required only during changes that require its elevated permissions, such as upgrades. Note Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked. Prerequisites Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Delete Secret . 19.2.4. Additional resources Alternatives to storing administrator-level secrets in the kube-system project for AWS Alternatives to storing administrator-level secrets in the kube-system project for GCP 19.3. Using passthrough mode Passthrough mode is supported for Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere. 
In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. The credential must have permissions to perform the installation and complete the operations that are required by components in the cluster, but does not need to be able to create new credentials. The CCO does not attempt to create additional limited-scoped credentials in passthrough mode. Note Manual mode is the only supported CCO configuration for Microsoft Azure Stack Hub. 19.3.1. Passthrough mode permissions requirements When using the CCO in passthrough mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials the CCO passes to a component that creates a CredentialsRequest CR are not sufficient, that component will report an error when it tries to call an API that it does not have permissions for. 19.3.1.1. Amazon Web Services (AWS) permissions The credential you provide for passthrough mode in AWS must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating IAM for AWS . 19.3.1.2. Microsoft Azure permissions The credential you provide for passthrough mode in Azure must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating IAM for Azure . 19.3.1.3. Google Cloud Platform (GCP) permissions The credential you provide for passthrough mode in GCP must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating IAM for GCP . 19.3.1.4. Red Hat OpenStack Platform (RHOSP) permissions To install an OpenShift Container Platform cluster on RHOSP, the CCO requires a credential with the permissions of a member user role. 19.3.1.5. Red Hat Virtualization (RHV) permissions To install an OpenShift Container Platform cluster on RHV, the CCO requires a credential with the following privileges: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the specific cluster that is targeted for OpenShift Container Platform deployment 19.3.1.6. VMware vSphere permissions To install an OpenShift Container Platform cluster on VMware vSphere, the CCO requires a credential with the following vSphere privileges: Table 19.2. Required vSphere privileges Category Privileges Datastore Allocate space Folder Create folder , Delete folder vSphere Tagging All privileges Network Assign network Resource Assign virtual machine to resource pool Profile-driven storage All privileges vApp All privileges Virtual machine All privileges 19.3.2. Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. 
Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Microsoft Azure secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region> On Microsoft Azure, the credentials secret format includes two properties that must contain the cluster's infrastructure ID, generated randomly for each cluster installation. This value can be found after running create manifests: USD cat .openshift_install_state.json | jq '."*installconfig.ClusterID".InfraID' -r Example output mycluster-2mpcn This value would be used in the secret data as follows: azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> Red Hat OpenStack Platform (RHOSP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init> Red Hat Virtualization (RHV) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: ovirt-credentials data: ovirt_url: <base64-encoded_url> ovirt_username: <base64-encoded_username> ovirt_password: <base64-encoded_password> ovirt_insecure: <base64-encoded_insecure> ovirt_ca_bundle: <base64-encoded_ca_bundle> VMware vSphere secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password> 19.3.3. Passthrough mode credential maintenance If CredentialsRequest CRs change over time as the cluster is upgraded, you must manually update the passthrough mode credential to meet the requirements. To avoid credentials issues during an upgrade, check the CredentialsRequest CRs in the release image for the new version of OpenShift Container Platform before upgrading. To locate the CredentialsRequest CRs that are required for your cloud provider, see Manually creating IAM for AWS , Azure , or GCP . 19.3.3.1. Rotating cloud provider credentials manually If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere are supported. 
You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds Azure azure-credentials GCP gcp-credentials RHOSP openstack-credentials RHV ovirt-credentials VMware vSphere vsphere-creds Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials. Note If the vSphere CSI Driver Operator is enabled, this step is not required. To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command: USD oc patch kubecontrollermanager cluster \ -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date )"'"}}' \ --type=merge While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true . To view the status, run the following command: USD oc get co kube-controller-manager Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. Additional resources vSphere CSI Driver Operator 19.3.4. Reducing permissions after installation When using passthrough mode, each component has the same permissions used by all other components. If you do not reduce the permissions after installing, all components have the broad permissions that are required to run the installer. After installation, you can reduce the permissions on your credential to only those that are required to run the cluster, as defined by the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are using. To locate the CredentialsRequest CRs that are required for AWS, Azure, or GCP and learn how to change the permissions the CCO uses, see Manually creating IAM for AWS , Azure , or GCP . 19.3.5. Additional resources Manually creating IAM for AWS Manually creating IAM for Azure Manually creating IAM for GCP 19.4. Using manual mode Manual mode is supported for Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, IBM Cloud, and Google Cloud Platform (GCP). In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). To use this mode, you must examine the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are running or installing, create corresponding credentials in the underlying cloud provider, and create Kubernetes Secrets in the correct namespaces to satisfy all CredentialsRequest CRs for the cluster's cloud provider. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. 
This mode also does not require connectivity to the AWS public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. For information about configuring your cloud provider to use manual mode, see the manual credentials management options for your cloud provider: Manually creating RAM resources for Alibaba Cloud Manually creating IAM for AWS Manually creating IAM for Azure Manually creating IAM for GCP Configuring IAM for IBM Cloud Configuring IAM for Nutanix 19.4.1. Manual mode with cloud credentials created and managed outside of the cluster An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. With this configuration, the CCO uses temporary credentials for different components. For more information, see Using manual mode with Amazon Web Services Security Token Service or Using manual mode with GCP Workload Identity . 19.4.2. Updating cloud provider resources with manually maintained credentials Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components. Procedure Extract and examine the CredentialsRequest custom resource for the new release. The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud. Update the manually maintained credentials on your cluster: Create new secrets for any CredentialsRequest custom resources that are added by the new release image. If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. 19.4.2.1. Indicating that the cluster is ready to upgrade The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. Prerequisites For the release image that you are upgrading to, you have processed any new credentials manually or by using the Cloud Credential Operator utility ( ccoctl ). You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field by running the following command: USD oc edit cloudcredential cluster Text to add ... metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number> ... Where <version_number> is the version that you are upgrading to, in the format x.y.z . For example, use 4.12.2 for OpenShift Container Platform 4.12.2. It may take several minutes after adding the annotation for the upgradeable status to change. Verification In the Administrator perspective of the web console, navigate to Administration Cluster Settings . To view the CCO status details, click cloud-credential in the Cluster Operators list. If the Upgradeable status in the Conditions section is False , verify that the upgradeable-to annotation is free of typographical errors. When the Upgradeable status in the Conditions section is True , begin the OpenShift Container Platform upgrade. 19.4.3. 
Additional resources Manually creating RAM resources for Alibaba Cloud Manually creating IAM for AWS Using manual mode with Amazon Web Services Security Token Service Manually creating IAM for Azure Manually creating IAM for GCP Using manual mode with GCP Workload Identity Configuring IAM for IBM Cloud Configuring IAM for Nutanix 19.5. Using manual mode with Amazon Web Services Security Token Service Manual mode with STS is supported for Amazon Web Services (AWS). Note This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature. 19.5.1. About manual mode with AWS Security Token Service In manual mode with STS, the individual OpenShift Container Platform cluster components use AWS Security Token Service (STS) to assign components IAM roles that provide short-term, limited-privilege security credentials. These credentials are associated with IAM roles that are specific to each component that makes AWS API calls. 19.5.1.1. AWS Security Token Service authentication process The AWS Security Token Service (STS) and the AssumeRole API action allow pods to retrieve access keys that are defined by an IAM role policy. The OpenShift Container Platform cluster includes a Kubernetes service account signing service. This service uses a private key to sign service account JSON web tokens (JWT). A pod that requires a service account token requests one through the pod specification. When the pod is created and assigned to a node, the node retrieves a signed service account from the service account signing service and mounts it onto the pod. Clusters that use STS contain an IAM role ID in their Kubernetes configuration secrets. Workloads assume the identity of this IAM role ID. The signed service account token issued to the workload aligns with the configuration in AWS, which allows AWS STS to grant access keys for the specified IAM role to the workload. AWS STS grants access keys only for requests that include service account tokens that meet the following conditions: The token name and namespace match the service account name and namespace. The token is signed by a key that matches the public key. The public key pair for the service account signing key used by the cluster is stored in an AWS S3 bucket. AWS STS federation validates that the service account token signature aligns with the public key stored in the S3 bucket. 19.5.1.2. Authentication flow for AWS STS The following diagram illustrates the authentication flow between AWS and the OpenShift Container Platform cluster when using AWS STS. Token signing is the Kubernetes service account signing service on the OpenShift Container Platform cluster. The Kubernetes service account in the pod is the signed service account token. Figure 19.2. AWS Security Token Service authentication flow Requests for new and refreshed credentials are automated by using an appropriately configured AWS IAM OpenID Connect (OIDC) identity provider combined with AWS IAM roles. Service account tokens that are trusted by AWS IAM are signed by OpenShift Container Platform and can be projected into a pod and used for authentication. 19.5.1.3. Token refreshing for AWS STS The signed service account token that a pod uses expires after a period of time. For clusters that use AWS STS, this time period is 3600 seconds, or one hour. 
The kubelet on the node that the pod is assigned to ensures that the token is refreshed. The kubelet attempts to rotate a token when it is older than 80 percent of its time to live. 19.5.1.4. OpenID Connect requirements for AWS STS You can store the public portion of the encryption keys for your OIDC configuration in a public or private S3 bucket. The OIDC spec requires the use of HTTPS. AWS services require a public endpoint to expose the OIDC documents in the form of JSON web key set (JWKS) public keys. This allows AWS services to validate the bound tokens signed by Kubernetes and determine whether to trust certificates. As a result, both S3 bucket options require a public HTTPS endpoint and private endpoints are not supported. To use AWS STS, the public AWS backbone for the AWS STS service must be able to communicate with a public S3 bucket or a private S3 bucket with a public CloudFront endpoint. You can choose which type of bucket to use when you process CredentialsRequest objects during installation: By default, the CCO utility ( ccoctl ) stores the OIDC configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. As an alternative, you can have the ccoctl utility store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL. 19.5.1.5. AWS component secret formats Using manual mode with STS changes the content of the AWS credentials that are provided to individual OpenShift Container Platform components. Compare the following secret formats: AWS secret format using long-lived credentials apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 data: aws_access_key_id: <base64-encoded-access-key-id> aws_secret_access_key: <base64-encoded-secret-access-key> 1 The namespace for the component. 2 The name of the component secret. AWS secret format with STS apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator-role-name> 3 web_identity_token_file: <path-to-token> 4 1 The namespace for the component. 2 The name of the component secret. 3 The IAM role for the component. 4 The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components. 19.5.2. Installing an OpenShift Container Platform cluster configured for manual mode with STS To install a cluster that is configured to use the Cloud Credential Operator (CCO) in manual mode with STS: Configure the Cloud Credential Operator utility . Create the required AWS resources individually , or with a single command . Run the OpenShift Container Platform installer . Verify that the cluster is using short-lived credentials . Note Because the cluster is operating in manual mode when using STS, it is not able to create new credentials for components with the permissions that they require. When upgrading to a different minor version of OpenShift Container Platform, there are often new AWS permission requirements. Before upgrading a cluster that is using STS, the cluster administrator must manually ensure that the AWS permissions are sufficient for existing components and available to any new components. Additional resources Configuring the Cloud Credential Operator utility for a cluster update 19.5.2.1. 
Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Table 19.3. Required AWS permissions Permission type Required permissions iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file by running the following command: USD ccoctl --help Output of ccoctl --help OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 19.5.2.2. Creating AWS resources with the Cloud Credential Operator utility You can use the CCO utility ( ccoctl ) to create the required AWS resources individually , or with a single command . 19.5.2.2.1. Creating AWS resources individually If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. For example, this option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster: USD ccoctl aws create-key-pair Example output: 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS: USD ccoctl aws create-identity-provider \ --name=<name> \ --region=<aws_region> \ --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public where: <name> is the name used to tag any cloud resources that are created for tracking. <aws-region> is the AWS region in which cloud resources will be created. 
<path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output: 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster. Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract --credentials-requests \ --cloud=aws \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1 --from=quay.io/<path_to>/ocp-release:<version> 1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ll <path_to_ccoctl_output_dir>/manifests Example output: total 24 -rw-------. 1 <user> <user> 161 Apr 13 11:42 cluster-authentication-02-config.yaml -rw-------. 1 <user> <user> 379 Apr 13 11:59 openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml -rw-------. 1 <user> <user> 353 Apr 13 11:59 openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml -rw-------. 1 <user> <user> 355 Apr 13 11:59 openshift-image-registry-installer-cloud-credentials-credentials.yaml -rw-------. 1 <user> <user> 339 Apr 13 11:59 openshift-ingress-operator-cloud-credentials-credentials.yaml -rw-------. 1 <user> <user> 337 Apr 13 11:59 openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 19.5.2.2.2. 
Creating AWS resources with a single command If you do not need to review the JSON files that the ccoctl tool creates before modifying AWS resources, and if the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --credentials-requests \ --cloud=aws \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1 --from=quay.io/<path_to>/ocp-release:<version> 1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. Note This command can take a few moments to run. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.12 on AWS 0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6 1 The Machine API Operator CR is required. 2 The Cloud Credential Operator CR is required. 3 The Image Registry Operator CR is required. 4 The Ingress Operator CR is required. 5 The Network Operator CR is required. 6 The Storage Operator CR is an optional component and might be disabled in your cluster. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 
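For reference, the permissions policy for each role is taken from the corresponding CredentialsRequest object, while the trust policy follows the standard AWS web identity pattern and is tied to the OIDC identity provider that ccoctl created. The following trust policy is an illustrative sketch only; the account ID, issuer host, namespace, and service account name are placeholders, and the policy that ccoctl actually generates may differ in detail:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<name>-oidc.s3.<aws_region>.amazonaws.com:sub": "system:serviceaccount:<component_namespace>:<component_service_account>"
        }
      }
    }
  ]
}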
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output: cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 19.5.2.3. Running the installer Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform release image. Procedure Change to the directory that contains the installation program and create the install-config.yaml file: USD openshift-install create install-config --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . Create the required OpenShift Container Platform installation manifests: USD openshift-install create manifests Copy the manifests that ccoctl generated to the manifests directory that the installation program created: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the private key that the ccoctl generated in the tls directory to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . Run the OpenShift Container Platform installer: USD ./openshift-install create cluster 19.5.2.4. Verifying the installation Connect to the OpenShift Container Platform cluster. Verify that the cluster does not have root credentials: USD oc get secrets -n kube-system aws-creds The output should look similar to: Error from server (NotFound): secrets "aws-creds" not found Verify that the components are assuming the IAM roles that are specified in the secret manifests, instead of using credentials that are created by the CCO: Example command with the Image Registry Operator USD oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode The output should show the role and web identity token that are used by the component and look similar to: Example output with the Image Registry Operator [default] role_arn = arn:aws:iam::123456789:role/openshift-image-registry-installer-cloud-credentials web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token 19.5.3. Additional resources Preparing to update a cluster with manually maintained credentials 19.6. Using manual mode with GCP Workload Identity Manual mode with GCP Workload Identity is supported for Google Cloud Platform (GCP). Note This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature. 19.6.1. 
About manual mode with GCP Workload Identity In manual mode with GCP Workload Identity, the individual OpenShift Container Platform cluster components can impersonate IAM service accounts using short-term, limited-privilege credentials. Requests for new and refreshed credentials are automated by using an appropriately configured OpenID Connect (OIDC) identity provider combined with IAM service accounts. Service account tokens that are trusted by GCP are signed by OpenShift Container Platform and can be projected into a pod and used for authentication. Tokens are refreshed after one hour. Figure 19.3. Workload Identity authentication flow Using manual mode with GCP Workload Identity changes the content of the GCP credentials that are provided to individual OpenShift Container Platform components. GCP secret format apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3 1 The namespace for the component. 2 The name of the component secret. 3 The Base64 encoded service account. Content of the Base64 encoded service_account.json file using long-lived credentials { "type": "service_account", 1 "project_id": "<project_id>", "private_key_id": "<private_key_id>", "private_key": "<private_key>", 2 "client_email": "<client_email_address>", "client_id": "<client_id>", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>" } 1 The credential type is service_account . 2 The private RSA key that is used to authenticate to GCP. This key must be kept secure and is not rotated. Content of the Base64 encoded service_account.json file using GCP Workload Identity { "type": "external_account", 1 "audience": "//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider", 2 "subject_token_type": "urn:ietf:params:oauth:token-type:jwt", "token_url": "https://sts.googleapis.com/v1/token", "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken", 3 "credential_source": { "file": "<path_to_token>", 4 "format": { "type": "text" } } } 1 The credential type is external_account . 2 The target audience is the GCP Workload Identity provider. 3 The resource URL of the service account that can be impersonated with these credentials. 4 The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components. 19.6.2. Installing an OpenShift Container Platform cluster configured for manual mode with GCP Workload Identity To install a cluster that is configured to use the Cloud Credential Operator (CCO) in manual mode with GCP Workload Identity: Configure the Cloud Credential Operator utility . Create the required GCP resources . Run the OpenShift Container Platform installer . Verify that the cluster is using short-lived credentials . Note Because the cluster is operating in manual mode when using GCP Workload Identity, it is not able to create new credentials for components with the permissions that they require. When upgrading to a different minor version of OpenShift Container Platform, there are often new GCP permission requirements. 
Before upgrading a cluster that is using GCP Workload Identity, the cluster administrator must manually ensure that the GCP permissions are sufficient for existing components and available to any new components. Additional resources Configuring the Cloud Credential Operator utility for a cluster update 19.6.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file by running the following command: USD ccoctl --help Output of ccoctl --help OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 19.6.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --credentials-requests \ --cloud=gcp \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1 quay.io/<path_to>/ocp-release:<version> 1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. Note This command can take a few moments to run. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. 
Example credrequests directory contents for OpenShift Container Platform 4.12 on GCP 0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7 1 The Cloud Controller Manager Operator CR is required. 2 The Machine API Operator CR is required. 3 The Cloud Credential Operator CR is required. 4 The Image Registry Operator CR is required. 5 The Ingress Operator CR is required. 6 The Network Operator CR is required. 7 The Storage Operator CR is an optional component and might be disabled in your cluster. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory: USD ccoctl gcp create-all \ --name=<name> \ --region=<gcp_region> \ --project=<gcp_project_id> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests where: <name> is the user-defined name for all created GCP resources used for tracking. <gcp_region> is the GCP region in which cloud resources will be created. <gcp_project_id> is the GCP project ID in which cloud resources will be created. <path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 19.6.2.3. Running the installer Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform release image. Procedure Change to the directory that contains the installation program and create the install-config.yaml file: USD openshift-install create install-config --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . Create the required OpenShift Container Platform installation manifests: USD openshift-install create manifests Copy the manifests that ccoctl generated to the manifests directory that the installation program created: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the private key that the ccoctl generated in the tls directory to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . Run the OpenShift Container Platform installer: USD ./openshift-install create cluster 19.6.2.4. 
Verifying the installation Connect to the OpenShift Container Platform cluster. Verify that the cluster does not have root credentials: USD oc get secrets -n kube-system gcp-credentials The output should look similar to: Error from server (NotFound): secrets "gcp-credentials" not found Verify that the components are assuming the service accounts that are specified in the secret manifests, instead of using credentials that are created by the CCO: Example command with the Image Registry Operator USD oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r '.data."service_account.json"' | base64 -d The output should show the role and web identity token that are used by the component and look similar to: Example output with the Image Registry Operator { "type": "external_account", 1 "audience": "//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider", "subject_token_type": "urn:ietf:params:oauth:token-type:jwt", "token_url": "https://sts.googleapis.com/v1/token", "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client-email-address>:generateAccessToken", 2 "credential_source": { "file": "/var/run/secrets/openshift/serviceaccount/token", "format": { "type": "text" } } } 1 The credential type is external_account . 2 The resource URL of the service account used by the Image Registry Operator. 19.6.3. Additional resources Preparing to update a cluster with manually maintained credentials
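As a final check on either platform, you can confirm that the cluster issues its service account tokens against the identity provider that ccoctl configured by inspecting the issuer URL in the cluster Authentication resource; the value should match the OIDC endpoint created earlier:

oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'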
[ "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "oc get secret <secret_name> -n kube-system -o jsonpath --template '{ .metadata.annotations }'", "oc get secret <secret_name> -n=kube-system", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region>", "cat .openshift_install_state.json | jq '.\"*installconfig.ClusterID\".InfraID' -r", "mycluster-2mpcn", "azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: ovirt-credentials data: ovirt_url: <base64-encoded_url> ovirt_username: <base64-encoded_username> ovirt_password: <base64-encoded_password> ovirt_insecure: <base64-encoded_insecure> ovirt_ca_bundle: <base64-encoded_ca_bundle>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password>", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", "apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 data: aws_access_key_id: <base64-encoded-access-key-id> aws_secret_access_key: <base64-encoded-secret-access-key>", "apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: 
<operator-role-name> 3 web_identity_token_file: <path-to-token> 4", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "ccoctl --help", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> --region=<aws_region> --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "oc adm release extract --credentials-requests --cloud=aws --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1 --from=quay.io/<path_to>/ocp-release:<version>", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ll <path_to_ccoctl_output_dir>/manifests", "total 24 -rw-------. 1 <user> <user> 161 Apr 13 11:42 cluster-authentication-02-config.yaml -rw-------. 1 <user> <user> 379 Apr 13 11:59 openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml -rw-------. 1 <user> <user> 353 Apr 13 11:59 openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml -rw-------. 1 <user> <user> 355 Apr 13 11:59 openshift-image-registry-installer-cloud-credentials-credentials.yaml -rw-------. 1 <user> <user> 339 Apr 13 11:59 openshift-ingress-operator-cloud-credentials-credentials.yaml -rw-------. 
1 <user> <user> 337 Apr 13 11:59 openshift-machine-api-aws-cloud-credentials-credentials.yaml", "oc adm release extract --credentials-requests --cloud=aws --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 --from=quay.io/<path_to>/ocp-release:<version>", "0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster", "oc get secrets -n kube-system aws-creds", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode", "[default] role_arn = arn:aws:iam::123456789:role/openshift-image-registry-installer-cloud-credentials web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token", "apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3", "{ \"type\": \"service_account\", 1 \"project_id\": \"<project_id>\", \"private_key_id\": \"<private_key_id>\", \"private_key\": \"<private_key>\", 2 \"client_email\": \"<client_email_address>\", \"client_id\": \"<client_id>\", \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\", \"token_uri\": \"https://oauth2.googleapis.com/token\", \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\", \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>\" }", "{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", 2 \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken\", 3 \"credential_source\": { \"file\": \"<path_to_token>\", 4 \"format\": { \"type\": \"text\" } } }", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info 
--image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "ccoctl --help", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "oc adm release extract --credentials-requests --cloud=gcp --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 quay.io/<path_to>/ocp-release:<version>", "0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7", "ccoctl gcp create-all --name=<name> --region=<gcp_region> --project=<gcp_project_id> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests", "ls <path_to_ccoctl_output_dir>/manifests", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster", "oc get secrets -n kube-system gcp-credentials", "Error from server (NotFound): secrets \"gcp-credentials\" not found", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r '.data.\"service_account.json\"' | base64 -d", "{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client-email-address>:generateAccessToken\", 2 \"credential_source\": { \"file\": \"/var/run/secrets/openshift/serviceaccount/token\", \"format\": { \"type\": \"text\" } } }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/authentication_and_authorization/managing-cloud-provider-credentials
Chapter 13. Customizing GNOME Desktop Features
Chapter 13. Customizing GNOME Desktop Features This chapter covers three key desktop features. After reading it, you will know how to enable quick termination of the X server by default for all users, how to enable the Compose key, and how to disable command line access for users. To make sure the changes you have made take effect, you need to update the system databases with the dconf utility. The users will experience the difference when they log out and log in again. 13.1. Allowing and Disallowing Online Accounts GNOME Online Accounts (GOA) is used for setting up personal network accounts, which are then automatically integrated with the GNOME Desktop and applications. The user can add their online accounts, such as Google, Facebook, Flickr, ownCloud, and others, using the Online Accounts application. As a system administrator, you can enable all online accounts; selectively enable a few online accounts; or disable all online accounts. Procedure 13.1. Configuring Online Accounts If you do not have the gnome-online-accounts package on your system, install it by running the following command as root: Create a keyfile for the local database in /etc/dconf/db/local.d/goa , which contains the following configuration: For selectively enabling a few providers only: For disabling all providers: For allowing all available providers: Lock down the settings to prevent users from overriding them. If it does not exist, create a new directory named /etc/dconf/db/local.d/locks/ . Create a new file in /etc/dconf/db/local.d/locks/goa with the following contents: Update the system databases for the changes to take effect: Users must log out and back in again before the system-wide settings take effect.
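The following consolidated example shows the keyfile, the lock file, and the update command together, using Google and Facebook as sample providers (any of the variants above can be substituted in the keyfile):

/etc/dconf/db/local.d/goa :

[org/gnome/online-accounts]
whitelisted-providers= ['google', 'facebook']

/etc/dconf/db/local.d/locks/goa :

# Prevent users from changing values for the following key:
/org/gnome/online-accounts

Then, as root, update the system databases:

dconf update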
[ "yum install gnome-online-accounts", "[org/gnome/online-accounts] whitelisted-providers= ['google', 'facebook']", "[org/gnome/online-accounts] whitelisted-providers= ['']", "[org/gnome/online-accounts] whitelisted-providers= ['all']", "Prevent users from changing values for the following key: /org/gnome/online-accounts", "dconf update" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/customize-gnome-desktop-features
Appendix B. Cluster Creation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7
Appendix B. Cluster Creation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 Configuring a Red Hat High Availability Cluster in Red Hat Enterprise Linux 7 with Pacemaker requires a different set of configuration tools with a different administrative interface than configuring a cluster in Red Hat Enterprise Linux 6 with rgmanager . Section B.1, "Cluster Creation with rgmanager and with Pacemaker" summarizes the configuration differences between the various cluster components. Red Hat Enterprise Linux 6.5 and later releases support cluster configuration with Pacemaker, using the pcs configuration tool. Section B.2, "Pacemaker Installation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7" summarizes the Pacemaker installation differences between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. B.1. Cluster Creation with rgmanager and with Pacemaker Table B.1, "Comparison of Cluster Configuration with rgmanager and with Pacemaker" provides a comparative summary of how you configure the components of a cluster with rgmanager in Red Hat Enterprise Linux 6 and with Pacemaker in Red Hat Enterprise Linux 7. Table B.1. Comparison of Cluster Configuration with rgmanager and with Pacemaker Configuration Component rgmanager Pacemaker Cluster configuration file The cluster configuration file on each node is the cluster.conf file, which can be edited directly. Otherwise, use the luci or ccs interface to define the cluster configuration. The cluster and Pacemaker configuration files are corosync.conf and cib.xml . Do not edit the cib.xml file directly; use the pcs or pcsd interface instead. Network setup Configure IP addresses and SSH before configuring the cluster. Configure IP addresses and SSH before configuring the cluster. Cluster Configuration Tools luci , ccs command, manual editing of the cluster.conf file. pcs or pcsd . Installation Install rgmanager (which pulls in all dependencies, including ricci , luci , and the resource and fencing agents). If needed, install lvm2-cluster and gfs2-utils . Install pcs , and the fencing agents you require. If needed, install lvm2-cluster and gfs2-utils . Starting cluster services Start and enable cluster services with the following procedure: Start rgmanager , cman , and, if needed, clvmd and gfs2 . Start ricci , and start luci if using the luci interface. Run chkconfig on for the needed services so that they start at each system boot. Alternately, you can enter ccs --start to start and enable the cluster services. Start and enable cluster services with the following procedure: On every node, execute systemctl start pcsd.service , then systemctl enable pcsd.service to enable pcsd to start at runtime. On one node in the cluster, enter pcs cluster start --all to start corosync and pacemaker . Controlling access to configuration tools For luci , the root user or a user with luci permissions can access luci . All access requires the ricci password for the node. The pcsd GUI requires that you authenticate as the user hacluster , which is the common system user. The root user can set the password for hacluster . Cluster creation Name the cluster and define which nodes to include in the cluster with luci or ccs , or directly edit the cluster.conf file. Name the cluster and include nodes with the pcs cluster setup command or with the pcsd Web UI. You can add nodes to an existing cluster with the pcs cluster node add command or with the pcsd Web UI.
Propagating cluster configuration to all nodes When configuring a cluster with luci , propagation is automatic. With ccs , use the --sync option. You can also use the cman_tool version -r command. Propagation of the cluster and Pacemaker configuration files, corosync.conf and cib.xml , is automatic on cluster setup or when adding a node or resource. Global cluster properties The following features are supported with rgmanager in Red Hat Enterprise Linux 6: * You can configure the system so that the system chooses which multicast address to use for IP multicasting in the cluster network. * If IP multicasting is not available, you can use the UDP Unicast transport mechanism. * You can configure a cluster to use RRP protocol. Pacemaker in Red Hat Enterprise Linux 7 supports the following features for a cluster: * You can set no-quorum-policy for the cluster to specify what the system should do when the cluster does not have quorum. * For additional cluster properties you can set, see Table 12.1, "Cluster Properties" . Logging You can set global and daemon-specific logging configuration. See the file /etc/sysconfig/pacemaker for information on how to configure logging manually. Validating the cluster Cluster validation is automatic with luci and with ccs , using the cluster schema. The cluster is automatically validated on startup. The cluster is automatically validated on startup, or you can validate the cluster with pcs cluster verify . Quorum in two-node clusters With a two-node cluster, you can configure how the system determines quorum: * Configure a quorum disk * Use ccs or edit the cluster.conf file to set two_node=1 and expected_votes=1 to allow a single node to maintain quorum. pcs automatically adds the necessary options for a two-node cluster to corosync . Cluster status On luci , the current status of the cluster is visible in the various components of the interface, which can be refreshed. You can use the --getconf option of the ccs command to see the current configuration file. You can use the clustat command to display cluster status. You can display the current cluster status with the pcs status command. Resources You add resources of defined types and configure resource-specific properties with luci or the ccs command, or by editing the cluster.conf configuration file. You add resources of defined types and configure resource-specific properties with the pcs resource create command or with the pcsd Web UI. For general information on configuring cluster resources with Pacemaker see Chapter 6, Configuring Cluster Resources . Resource behavior, grouping, and start/stop order Define cluster services to configure how resources interact. With Pacemaker, you use resource groups as a shorthand method of defining a set of resources that need to be located together and started and stopped sequentially. In addition, you define how resources behave and interact in the following ways: * You set some aspects of resource behavior as resource options. * You use location constraints to determine which nodes a resource can run on. * You use order constraints to determine the order in which resources run. * You use colocation constraints to determine that the location of one resource depends on the location of another resource. For more complete information on these topics, see Chapter 6, Configuring Cluster Resources and Chapter 7, Resource Constraints .
Resource administration: Moving, starting, stopping resources With luci , you can manage clusters, individual cluster nodes, and cluster services. With the ccs command, you can manage clusters. You can use the clusvcadm command to manage cluster services. You can temporarily disable a node so that it cannot host resources with the pcs cluster standby command, which causes the resources to migrate. You can stop a resource with the pcs resource disable command. Removing a cluster configuration completely With luci , you can select all nodes in a cluster for deletion to delete a cluster entirely. You can also remove the cluster.conf file from each node in the cluster. You can remove a cluster configuration with the pcs cluster destroy command. Resources active on multiple nodes, resources active on multiple nodes in multiple modes No equivalent. With Pacemaker, you can clone resources so that they can run on multiple nodes, and you can define cloned resources as master and slave resources so that they can run in multiple modes. For information on cloned resources and master/slave resources, see Chapter 9, Advanced Configuration . Fencing -- single fence device per node Create fencing devices globally or locally and add them to nodes. You can define post-fail delay and post-join delay values for the cluster as a whole. Create a fencing device for each node with the pcs stonith create command or with the pcsd Web UI. For devices that can fence multiple nodes, you need to define them only once rather than separately for each node. You can also define pcmk_host_map to configure fencing devices for all nodes with a single command; for information on pcmk_host_map see Table 5.1, "General Properties of Fencing Devices" . You can define the stonith-timeout value for the cluster as a whole. Multiple (backup) fencing devices per node Define backup devices with luci or the ccs command, or by editing the cluster.conf file directly. Configure fencing levels.
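As a concrete illustration of the Pacemaker column of this table, the following minimal command sequence creates and starts a two-node Red Hat Enterprise Linux 7 cluster and defines a single fence device with pcs . The host names, fence agent, and agent parameters are examples only; substitute values appropriate for your environment:

On every node:

systemctl start pcsd.service
systemctl enable pcsd.service
passwd hacluster

On one node only:

pcs cluster auth z1.example.com z2.example.com -u hacluster
pcs cluster setup --start --name my_cluster z1.example.com z2.example.com
pcs cluster enable --all
pcs stonith create myapc fence_apc_snmp ipaddr="zapc.example.com" pcmk_host_map="z1.example.com:1;z2.example.com:2" login="apc" passwd="apc"
pcs status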
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ap-ha-rhel6-rhel7-haar
16.17.4. Create LVM Logical Volume
16.17.4. Create LVM Logical Volume Important LVM initial setup is not available during text-mode installation. If you need to create an LVM configuration from scratch, press Alt + F2 to use a different virtual console, and run the lvm command. To return to the text-mode installation, press Alt + F1 . Logical Volume Management (LVM) presents a simple logical view of underlying physical storage space, such as hard drives or LUNs. Partitions on physical storage are represented as physical volumes that can be grouped together into volume groups . Each volume group can be divided into multiple logical volumes , each of which is analogous to a standard disk partition. Therefore, LVM logical volumes function as partitions that can span multiple physical disks. To read more about LVM, refer to the Red Hat Enterprise Linux Deployment Guide . Note that LVM is only available in the graphical installation program. LVM Physical Volume Choose this option to configure a partition or device as an LVM physical volume. This option is the only choice available if your storage does not already contain LVM Volume Groups. This is the same dialog that appears when you add a standard partition - refer to Section 16.17.2, "Adding Partitions" for a description of the available options. Note, however, that File System Type must be set to physical volume (LVM) . Figure 16.43. Create an LVM Physical Volume Make LVM Volume Group Choose this option to create LVM volume groups from the available LVM physical volumes, or to add existing logical volumes to a volume group. Figure 16.44. Make LVM Volume Group To assign one or more physical volumes to a volume group, first name the volume group. Then select the physical volumes to be used in the volume group. Finally, configure logical volumes on any volume groups using the Add , Edit and Delete options. You may not remove a physical volume from a volume group if doing so would leave insufficient space for that group's logical volumes. Take for example a volume group made up of two 5 GB LVM physical volume partitions, which contains an 8 GB logical volume. The installer would not allow you to remove either of the component physical volumes, since that would leave only 5 GB in the group for an 8 GB logical volume. If you reduce the total size of any logical volumes appropriately, you may then remove a physical volume from the volume group. In the example, reducing the size of the logical volume to 4 GB would allow you to remove one of the 5 GB physical volumes. Make Logical Volume Choose this option to create an LVM logical volume. Select a mount point, file system type, and size (in MB) just as if it were a standard disk partition. You can also choose a name for the logical volume and specify the volume group to which it will belong. Figure 16.45. Make Logical Volume
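The same physical volume, volume group, and logical volume layering can also be created with the command-line LVM tools (for example, from the lvm shell mentioned in the Important note above). The following sketch is illustrative only and assumes two example partitions, /dev/sda2 and /dev/sdb1 , mirroring the two 5 GB physical volumes discussed above:

pvcreate /dev/sda2 /dev/sdb1              # initialize the partitions as physical volumes
vgcreate vg_example /dev/sda2 /dev/sdb1   # group them into a volume group
lvcreate -L 8G -n lv_home vg_example      # create an 8 GB logical volume
lvreduce -L 4G /dev/vg_example/lv_home    # shrink it to 4 GB (shrink the file system on it first)
pvmove /dev/sdb1                          # move any remaining extents off one physical volume
vgreduce vg_example /dev/sdb1             # the freed physical volume can now be removed from the group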
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/Create_LVM-ppc
9.14. Choosing a Disk Encryption Passphrase
9.14. Choosing a Disk Encryption Passphrase If you selected the Encrypt System option, the installer prompts you for a passphrase with which to encrypt the partitions on the system. Partitions are encrypted using the Linux Unified Key Setup - refer to Appendix C, Disk Encryption for more information. Figure 9.38. Enter passphrase for encrypted partition Choose a passphrase and type it into each of the two fields in the dialog box. You must provide this passphrase every time that the system boots. Warning If you lose this passphrase, any encrypted partitions and the data on them will become completely inaccessible. There is no way to recover a lost passphrase. Note that if you perform a kickstart installation of Red Hat Enterprise Linux, you can save encryption passphrases and create backup encryption passphrases during installation. Refer to Section C.3.2, "Saving Passphrases" and Section C.3.3, "Creating and Saving Backup Passphrases" .
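If you automate the installation with Kickstart, the passphrase, escrow certificate, and backup passphrase handling referenced above is driven by options on the partitioning commands. The following fragment is a hedged example only; the URL, sizes, and names are placeholders:

part /boot --fstype=ext4 --size=500
part pv.01 --size=10000 --encrypted --passphrase=PASSPHRASE --escrowcert=https://escrow.example.com/escrow.crt --backuppassphrase
volgroup vg_example pv.01
logvol / --vgname=vg_example --size=9000 --name=lv_root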
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/encrypt-x86
5. Developing Installer Add-ons
5. Developing Installer Add-ons 5.1. Introduction to Anaconda and Add-ons 5.1.1. Introduction to Anaconda Anaconda is the operating system installer used in Fedora, Red Hat Enterprise Linux, and their derivatives. It is a set of Python modules and scripts together with some additional files like Gtk widgets (written in C), systemd units, and dracut libraries. Together, they form a tool that allows users to set parameters of the resulting (target) system and then set such a system up on a machine. The installation process has four major steps: installation destination preparation (usually disk partitioning) package and data installation boot loader installation and configuration configuration of the newly installed system There are three ways you can control the installer and specify installation options. The most common approach is to use the graphical user interface (GUI). This interface is meant to allow users to install the system interactively with little or no configuration required before beginning the installation, and it should cover all common use cases, including setting up complicated partitioning layouts. The graphical interface also supports remote access over VNC , which allows you to use the GUI even on systems with no graphics cards or even attached monitor. However, there are still cases where this is not desired, but at the same time, you may want to perform an interactive installation. For these cases, a text mode (TUI) is available. The TUI works in a way similar to a monochrome line printer, which allows it to work even on serial consoles which do not support cursor movement, colors and other advanced features. The text mode is limited in that it only allows you to customize most common options, such as network settings, language options or installation (package) source; advanced features such as manual partitioning are not available in this interface. The third way to install a system using Anaconda is by using a Kickstart file - a plain text file with shell-like syntax which can contain data to drive the installation process. A Kickstart file allows you to partially or completely automate the installation. A certain set of commands which configures all required areas is necessary to completely automate the installation; if one or more of the required commands is missing, the installation will require interaction. If all required commands are present, the installation will be performed in a completely automatic way, without any need for interaction. Kickstart provides the highest amount of options, covering use cases where neither the TUI nor the GUI is sufficient. Every feature in Anaconda must always be supported in Kickstart; other interfaces follow only subsets of all available options, which allows them to remain clear. 5.1.2. Firstboot and Initial Setup The first boot of the newly installed system is traditionally considered a part of the installation process as well, because some parts of configuration such as user creation are often performed at this point. Previously, the Firstboot tool has been used for this purpose, allowing you to register your newly installer Red Hat Enterprise Linux system or configure Kdump . However, Firstboot relies on no longer maintained tools such as Gtk2 and the pygtk2 module. [1] For this reason, a new tool called Initial Setup was developed, which reuses code from Anaconda . This allows add-ons developed for Anaconda to be easily reused in Initial Setup . 
This topic is further discussed in Section 5.6, "Writing an Anaconda add-on" . 5.1.3. Anaconda and Initial Setup Add-ons Installing a new operating system is a vastly complicated task - each user may want to do something slightly different. Designing an installer for every corner case would cause it to be cluttered with rarely-used functionality. For this reason, when the installer was being rewritten into its current form, it gained support for add-ons. Anaconda add-ons can be used to add your own Kickstart commands and options as well as new configuration screens in the graphical and text-based user interface, depending on your specific use case. Each add-on must have Kickstart support; the GUI and TUI are optional, but can be very helpful. In current releases of Red Hat Enterprise Linux (7.1 and later) and Fedora [2] (21 and later), one add-on is included by default: The Kdump add-on, which adds support for configuring kernel crash dumping during the installation. This add-on has full support in Kickstart (using the %addon com_redhat_kdump command and its options) and is fully integrated as an additional screen in the text-based and graphical interfaces. You can develop other add-ons in the same way and add them to the default installer using procedures described further in this guide. 5.1.4. Additional Information The following links contain additional information about Anaconda and Initial Setup : The Anaconda page on Fedora Project Wiki provides more information about the installer. Information about development of Anaconda into its current version is available at the Anaconda/NewInstaller Wiki page . The Kickstart Installations chapter of the Red Hat Enterprise Linux 7 Installation Guide provides full documentation of Kickstart, including a list of all supported commands and options. The Installing Using Anaconda chapter of the Red Hat Enterprise Linux 7 Installation Guide describes the installation process in the graphical and text user interfaces. For information about tools used for after-installation configuration, see Initial Setup and Firstboot . 5.2. Architecture of Anaconda Anaconda is a set of Python modules and scripts. It also uses several external packages and libraries, some of which were created specifically for the installer. Major components of this toolset include the following packages: pykickstart - used to parse and validate Kickstart files and also to provide a data structure which stores values which drive the installation; yum - the package manager which handles installation of packages and resolving dependencies; blivet - originally split from the anaconda package as pyanaconda.storage ; used to handle all activities related to storage management; pyanaconda - package containing the core of the user interface and modules for functionality unique to Anaconda , such as keyboard and timezone selection, network configuration, and user creation, as well as a number of utilities and system-oriented functions; python-meh - contains an exception handler which gathers and stores additional system information in case of a crash and passes this information to the libreport library, which itself is a part of the ABRT Project . The life cycle of data during the installation process is straightforward. If a Kickstart file is provided, it is processed by the pykickstart module and imported into memory as a tree-like structure. If no Kickstart file is provided, an empty tree-like structure is created instead. 
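As a rough, standalone sketch of this first phase (this is not code from the installer; the file path is hypothetical and the exact pykickstart API can differ between versions), the same kind of tree-like structure can be built and inspected outside of Anaconda:

from pykickstart.parser import KickstartParser
from pykickstart.version import makeVersion

# makeVersion() returns an empty handler object - the tree-like structure
# that stores values for all Kickstart commands known to this version
handler = makeVersion()

# KickstartParser fills the handler with values read from a Kickstart file
parser = KickstartParser(handler)
parser.readKickstart("/tmp/example-ks.cfg")  # hypothetical path

# individual commands are available as attributes of the handler;
# this prints the configured time zone, if the file contains a timezone command
print(handler.timezone.timezone)

Anaconda performs essentially the same parsing step internally before the resulting structure is handed over to the user interface and, later, to the installation itself.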
If the installation is interactive (not all required Kickstart commands have been used), the structure is then updated with choices made by the user in the interactive interface. Once all required choices are made, the installation process begins and values stored in the structure are used to determine parameters of the installation. The values are also written as a Kickstart file which is saved in the /root/ directory on the installed system; therefore the installation can be replicated automatically by reusing this automatically generated Kickstart file. Elements of the tree-like structure are defined by the pykickstart package, but some of them can be overridden by modified versions from the pyanaconda.kickstart module. An important rule which governs this behavior is that there is no place other than this structure to store configuration data, and the installation process is data-driven and relies on transactions as much as possible. This enforces the following features: every feature of the installer must be supported in Kickstart; there is a single, obvious point in the installation process where changes are written to the target system - before this point, no lasting changes (e.g. formatting storage) are made; every change made manually in the user interface is reflected in the resulting Kickstart file and can be replicated. The fact that the installation is data-driven means that installation and configuration logic lies within the methods of the items in the tree-like structure. Every item is set up (the setup method) to modify the runtime environment of the installation if necessary, and then executed (the execute method) to perform the changes on the target system. These methods are further described in Section 5.6, "Writing an Anaconda add-on" . 5.3. The Hub & Spoke model One of the notable differences between Anaconda and most other operating system installers is its non-linear nature, also known as the hub and spoke model. The hub and spoke model of Anaconda has several advantages, including: users are not forced to go through the screens in some strictly defined order; users are not forced to visit every screen, whether or not they understand what the options configured in it mean; it is good for the transactional mode where all desired values can be set while nothing is actually happening to the underlying machine until a special button is clicked; it provides a way to show an overview of the configured values; it has great support for extensibility, because additional spokes can be put on hubs without the need to reorder anything or resolve complex ordering dependencies; it can be used for both the graphical and text mode of the installer. The diagram below shows the installer layout as well as possible interactions between hubs and spokes (screens): Figure 2. Diagram of the hub and spoke model In the diagram, screens 2-13 are called normal spokes , and screens 1 and 14 are standalone spokes . Standalone spokes are a type of screen that should be used only when it has to be visited before (or after) the following (or previous) standalone spoke or hub. This may be, for example, the Welcome screen at the beginning of the installation which prompts you to choose your language for the rest of the installation. Note Screens mentioned in the rest of this section are screens from the installer's graphical interface (GUI). Central points of the hub and spoke model are hubs. 
There are two hubs by default: The Installation Summary hub which shows a summary of configured options before the installation begins; and The Configuration and Progress hub which appears after you click Begin Installation in Installation Summary , and which displays the progress of the installation process and allows you to configure additional options (set the root password and create a user account). Each spoke has several predefined properties which are reflected on the hub. These are: ready - states whether the spoke can be visited or not; for example, when the installer is configuring a package source, that spoke is not ready, is colored gray, and cannot be accessed until configuration is complete; completed - marks the spoke as completed (all required values are set) or not; mandatory - determines whether the spoke must be visited and confirmed by the user before continuing the installation; for example, the Installation Destination spoke must always be visited, even if you want to use automatic disk partitioning; status - provides a short summary of values configured within the spoke (displayed under the spoke name in the hub). To make the user interface clearer, spokes are grouped together into categories . For example, the Localization category groups together spokes for keyboard layout selection, language support and time zone settings. Each spoke contains UI controls which display and allow you to modify values from one or more sub-trees of the in-memory tree-like structure which was discussed in Section 5.2, "Architecture of Anaconda" . As Section 5.6, "Writing an Anaconda add-on" explains, the same applies to spokes provided by add-ons. 5.4. Threads and Communication Some of the actions which need to be performed during the installation process, such as scanning disks for existing partitions or downloading package metadata, can take a long time. To avoid making you wait and to keep the interface responsive where possible, Anaconda runs these actions in separate threads. The Gtk toolkit does not support changing UI elements from multiple threads. The main event loop of Gtk runs in the main thread of the Anaconda process itself, and all code performing actions which involve the GUI must make sure that these actions are run in the main thread as well. The only supported way to do so is by using GLib.idle_add , which is not always easy or desirable. To alleviate this problem, several helper functions and decorators are defined in the pyanaconda.ui.gui.utils module. The most useful of those are the @gtk_action_wait and @gtk_action_nowait decorators. They change the decorated function or method in such a way that when this function or method is called, it is automatically queued into Gtk's main loop, run in the main thread, and the return value is either returned to the caller or dropped, respectively. As mentioned previously, one of the main reasons for using multiple threads is to allow the user to configure some screens while other screens which are currently busy (such as Installation Source when it downloads package metadata) configure themselves. Once the configuration is finished, the spoke which was previously busy needs to announce that it is now ready and not blocked; this is handled by a message queue called hubQ , which is periodically checked in the main event loop. When a spoke becomes accessible, it sends a message to this queue announcing this change and that it should no longer be blocked. The same applies in a situation where a spoke needs to refresh its status or completion flag. 
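As a minimal sketch of how the @gtk_action_wait decorator mentioned above might be used (the function and the label widget here are hypothetical; only the module and decorator names are taken from the text, and the exact behavior can differ between Anaconda versions):

from pyanaconda.ui.gui.utils import gtk_action_wait

@gtk_action_wait
def _set_status_text(label, text):
    # The decorated body is queued into Gtk's main loop and executed in the
    # main thread, so it is safe to touch GUI elements here even when the
    # function is called from a background worker thread.
    label.set_text(text)

A background thread can then simply call _set_status_text(some_label, "Done") without dealing with Gtk's threading rules directly.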
The Configuration and Progress hub has a different queue called progressQ which serves as a medium to transfer installation progress updates. These mechanisms are also needed for the text-based interface, where the situation is more complicated; there is no main loop in text mode; instead, the majority of time in this mode is spent waiting for keyboard input. 5.5. Anaconda Add-on Structure An Anaconda add-on is a Python package containing a directory with an __init__.py and other source directories (subpackages) inside. Because Python allows importing each package name only once, the package top-level directory name must be unique. At the same time, the name can be arbitrary, because add-ons are loaded regardless of their name - the only requirement is that they must be placed in a specific directory. The suggested naming convention for add-ons is therefore similar to Java packages or D-Bus service names: prefix the add-on name with the reversed domain name of your organization, using underscores ( _ ) instead of dots so that the directory name is a valid identifier for a Python package. An example add-on name following these suggestions would therefore be com_example_hello_world . This convention follows the recommended naming scheme for Python package and module names. Important Make sure to create an __init__.py file in each directory. Directories missing this file are not considered valid Python packages. When writing an add-on, keep in mind that every feature supported in the installer must be supported in Kickstart; GUI and TUI support is optional. Support for each interface (Kickstart, graphical interface and text interface) must be in a separate subpackage and these subpackages must be named ks for Kickstart, gui for the graphical interface and tui for the text-based interface. The gui and tui packages must also contain a spokes subpackage. [3] Names of modules inside these packages are arbitrary; the ks/ , gui/ and tui/ directories can contain Python modules with any name. A sample directory structure for an add-on which supports every interface (Kickstart, GUI and TUI) will look similar to the following: Example 2. Sample Add-on Structure (the layout is reproduced at the beginning of Section 5.6 below) Each package must contain at least one module with an arbitrary name defining classes inherited from one or more classes defined in the API. This is further discussed in Section 5.6, "Writing an Anaconda add-on" . All add-ons should follow Python's PEP 8 style guidelines and the PEP 257 docstring conventions. There is no consensus on the format of the actual content of docstrings in Anaconda ; the only requirement is that they are human-readable. If you plan to use automatically generated documentation for your add-on, docstrings should follow the guidelines for the toolkit you use to accomplish this. 5.6. Writing an Anaconda add-on The sections below will demonstrate the process of writing and testing a sample add-on called Hello World. This sample add-on will support all interfaces (Kickstart, GUI and TUI). Sources for this sample add-on are available on GitHub in the rhinstaller/hello-world-anaconda-addon repository; it is recommended to clone this repository or at least open the sources in the web interface. Another repository to review is rhinstaller/anaconda , which contains the installer source code; it will be referred to in several parts of this section as well. Before you begin developing the add-on itself, start by creating its directory structure as described in Section 5.5, "Anaconda Add-on Structure" . 
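The directory structure for the Hello World add-on, mirroring the sample layout from Example 2 in Section 5.5, looks like this:

com_example_hello_world
├─ ks
│  └─ __init__.py
├─ gui
│  ├─ __init__.py
│  └─ spokes
│     └─ __init__.py
└─ tui
   ├─ __init__.py
   └─ spokes
      └─ __init__.py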
Then, continue with Section 5.6.1, "Kickstart Support" , as Kickstart support is mandatory for all add-ons. After that, you can optionally continue with Section 5.6.2, "Graphical user interface" and Section 5.6.3, "Text User Interface" if needed. 5.6.1. Kickstart Support Kickstart support is always the first part of any add-on that should be developed. Other packages - support for the graphical and text-based interface - will depend on it. To begin, navigate to the com_example_hello_world/ks/ directory you have created previously, make sure it contains an __init__.py file, and add another Python script named hello_world.py . Unlike built-in Kickstart commands, add-ons are used in their own sections . Each use of an add-on in a Kickstart file begins with an %addon statement and is closed by %end . The %addon line also contains the name of the add-on (such as %addon com_example_hello_world ) and optionally a list of arguments, if the add-on supports them. An example use of an add-on in a Kickstart file looks like the example below: Example 3. Using an Add-on in a Kickstart File The key class for Kickstart support in add-ons is called AddonData . This class is defined in pyanaconda.addons and represents an object for parsing and storing data from a Kickstart file. Arguments are passed as a list to an instance of the add-on class inherited from the AddonData class. Anything between the first and last line is passed to the add-on's class one line at a time. To keep the example Hello World add-on simple, it will merge all lines in this block into a single line and separate the original lines with a space. The example add-on requires a class inherited from AddonData with a method for handling the list of arguments from the %addon line, and a method for handling lines inside the section. The pyanaconda/addons.py module contains two methods which can be used for this: handle_header - takes a list of arguments from the %addon line (and line numbers for error reporting) handle_line - takes a single line of content from between the %addon and %end statements The example below demonstrates a Hello World add-on which uses the methods described above: Example 4. Using handle_header and handle_line from pyanaconda.addons import AddonData from pykickstart.options import KSOptionParser # export HelloWorldData class to prevent Anaconda's collect method from taking # AddonData class instead of the HelloWorldData class # :see: pyanaconda.kickstart.AnacondaKSHandler.__init__ __all__ = ["HelloWorldData"] HELLO_FILE_PATH = "/root/hello_world_addon_output.txt" class HelloWorldData(AddonData): """ Class parsing and storing data for the Hello world addon. :see: pyanaconda.addons.AddonData """ def __init__(self, name): """ :param name: name of the addon :type name: str """ AddonData.__init__(self, name) self.text = "" self.reverse = False def handle_header(self, lineno, args): """ The handle_header method is called to parse additional arguments in the %addon section line. :param lineno: the current linenumber in the kickstart file :type lineno: int :param args: any additional arguments after %addon <name> :type args: list """ op = KSOptionParser() op.add_option("--reverse", action="store_true", default=False, dest="reverse", help="Reverse the display of the addon text") (opts, extra) = op.parse_args(args=args, lineno=lineno) # Reject any additoinal arguments. Since AddonData.handle_header # rejects any arguments, we can use it to create an error message # and raise an exception. 
if extra: AddonData.handle_header(self, lineno, extra) # Store the result of the option parsing self.reverse = opts.reverse def handle_line(self, line): """ The handle_line method that is called with every line from this addon's %addon section of the kickstart file. :param line: a single line from the %addon section :type line: str """ # simple example, we just append lines to the text attribute if not self.text: self.text = line.strip() else: self.text += " " + line.strip() The example begins by importing necessary methods and defining an __all__ variable which is necessary to prevent Anaconda 's collect method from taking the AddonData class instead of the add-on specific HelloWorldData . Then, the example shows a definition of the HelloWorldData class inherited from AddonData with its __init__ method calling the parent's __init__ and initializing the attributes self.text to an empty string and self.reverse to False . The self.reverse attribute is populated in the handle_header method, and self.text is populated in handle_line . The handle_header method uses an instance of the KSOptionParser provided by pykickstart to parse additional options used on the %addon line, and handle_line strips each content line of leading and trailing white space and appends it to self.text . The code above covers the first phase of the data life cycle in the installation process: it reads data from the Kickstart file. The next step is to use this data to drive the installation process. Two predefined methods are available for this purpose: setup - called before the installation transaction starts and used to make changes to the installation runtime environment; execute - called at the end of the transaction and used to make changes to the target system. To use these two methods, you must add some new imports and a constant to your module, as shown in the following example: Example 5. Importing the setup and execute Methods import os.path from pyanaconda.addons import AddonData from pyanaconda.constants import ROOT_PATH HELLO_FILE_PATH = "/root/hello_world_addon_output.txt" An updated example of the Hello World add-on with the setup and execute methods included is below: Example 6. Using the setup and execute Methods def setup(self, storage, ksdata, instclass, payload): """ The setup method that should make changes to the runtime environment according to the data stored in this object. :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.) :type storage: blivet.Blivet instance :param ksdata: data parsed from the kickstart file and set in the installation process :type ksdata: pykickstart.base.BaseHandler instance :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass :param payload: object managing packages and environment groups for the installation :type payload: any class inherited from the pyanaconda.packaging.Payload class """ # no actions needed in this addon pass def execute(self, storage, ksdata, instclass, users, payload): """ The execute method that should make changes to the installed system. It is called only once in the post-install setup phase. 
:see: setup :param users: information about created users :type users: pyanaconda.users.Users instance """ hello_file_path = os.path.normpath(ROOT_PATH + HELLO_FILE_PATH) with open(hello_file_path, "w") as fobj: fobj.write("%s\n" % self.text) In the above example, the setup method does nothing; the Hello World add-on does not make any changes to the installation runtime environment. The execute method writes stored text into a file created in the target system's root ( / ) directory. The most important information in the above example is the number and meaning of the arguments passed to the two new methods; these are described in docstrings within the example. The final phase of the data life cycle, as well as the last part of the code needed in a module providing Kickstart support, is generating a new Kickstart file, which includes values set at installation time, at the end of the installation process as described in Section 5.2, "Architecture of Anaconda" . This is performed by calling the __str__ method recursively on the tree-like structure storing installation data, which means that the class inherited from AddonData must define its own __str__ method which returns its stored data in valid Kickstart syntax. It must be possible to parse the returned data again using pykickstart . In the Hello World example, the __str__ method will be similar to the following example: Example 7. Defining a __str__ Method def __str__(self): """ What should end up in the resulting kickstart file, i.e. the %addon section containing string representation of the stored data. """ addon_str = "%%addon %s" % self.name if self.reverse: addon_str += " --reverse" addon_str += "\n%s\n%%end" % self.text return addon_str Once your Kickstart support module contains all necessary methods ( handle_header , handle_line , setup , execute and __str__ ), it becomes a valid Anaconda add-on. You can continue with the following sections to add support for the graphical and text-based user interfaces, or you can continue with Section 5.7, "Deploying and testing an Anaconda add-on" and test the add-on. 5.6.2. Graphical user interface This section will describe adding support for the graphical user interface (GUI) to your add-on. Before you begin, make sure that your add-on already includes support for Kickstart as described in the previous section. Note Before you start developing add-ons with support for the graphical interface, make sure to install the anaconda-widgets and anaconda-widgets-devel packages, which contain Gtk widgets specific for Anaconda such as SpokeWindow . 5.6.2.1. Basic features Similarly to Kickstart support in add-ons, GUI support requires every part of the add-on to contain at least one module with a definition of a class inherited from a particular class defined by the API. In the case of graphical support, the only recommended class is NormalSpoke , which is defined in pyanaconda.ui.gui.spokes . As the class name suggests, it is a class for the normal spoke type of screen as described in Section 5.3, "The Hub & Spoke model" . 
To implement a new class inherited from NormalSpoke , you must define the following class attributes which are required by the API: builderObjects - lists all top-level objects from the spoke's .glade file that should be, with their children objects (recursively), exposed to the spoke - or should be an empty list if everything should be exposed to the spoke (not recommended) mainWidgetName - contains the id of the main window widget [4] as defined in the .glade file uiFile - contains the name of the .glade file category - contains the class of the category the spoke belongs to icon - contains the identifier of the icon that will be used for the spoke on the hub title defines the title that will be used for the spoke on the hub Example module with all required definitions is shown in the following example: Example 8. Defining Attributes Required for the Normalspoke Class # will never be translated _ = lambda x: x N_ = lambda x: x # the path to addons is in sys.path so we can import things from org_fedora_hello_world from org_fedora_hello_world.gui.categories.hello_world import HelloWorldCategory from pyanaconda.ui.gui.spokes import NormalSpoke # export only the spoke, no helper functions, classes or constants __all__ = ["HelloWorldSpoke"] class HelloWorldSpoke(NormalSpoke): """ Class for the Hello world spoke. This spoke will be in the Hello world category and thus on the Summary hub. It is a very simple example of a unit for the Anaconda's graphical user interface. :see: pyanaconda.ui.common.UIObject :see: pyanaconda.ui.common.Spoke :see: pyanaconda.ui.gui.GUIObject """ ### class attributes defined by API ### # list all top-level objects from the .glade file that should be exposed # to the spoke or leave empty to extract everything builderObjects = ["helloWorldSpokeWindow", "buttonImage"] # the name of the main window widget mainWidgetName = "helloWorldSpokeWindow" # name of the .glade file in the same directory as this source uiFile = "hello_world.glade" # category this spoke belongs to category = HelloWorldCategory # spoke icon (will be displayed on the hub) # preferred are the -symbolic icons as these are used in Anaconda's spokes icon = "face-cool-symbolic" # title of the spoke (will be displayed on the hub) title = N_("_HELLO WORLD") The __all__ attribute is used to export the spoke class, followed by the first lines of its definition including definitions of attributes mentioned above. The values of these attributes are referencing widgets defined in com_example_hello_world/gui/spokes/hello.glade file. Two other notable attributes are present. The first is category , which has its value imported from the HelloWorldCategory class from the com_example_hello_world.gui.categories module. The HelloWorldCategory class will be discussed later, but for now, note that the path to add-ons is in sys.path so that things can be imported from the com_example_hello_world package. The second notable attribute in the example is title , which contains two underscores in its definition. The first one is part of the N_ function name which marks the string for translation, but returns the non-translated version of the string (translation is done later). The second underscore marks the beginning of the title itself and makes the spoke reachable using the Alt + H keyboard shortcut. What usually follows the header of the class definition and the class attributes definitions is the constructor that initializes an instance of the class. 
In the case of the Anaconda graphical interface objects, there are two methods for initializing a new instance: the __init__ method and the initialize method. The reason for having two such methods is that the GUI objects may be created in memory at one time and fully initialized (which can take a longer time) at a different time. Therefore, the __init__ method should only call the parent's __init__ method and (for example) initialize non-GUI attributes. On the other hand, the initialize method that is called when the installer's graphical user interface initializes should finish the full initialization of the spoke. In the sample Hello World add-on, these two methods are defined as follows (note the number and description of the arguments passed to the __init__ method): Example 9. Defining the __init__ and initialize Methods def __init__(self, data, storage, payload, instclass): """ :see: pyanaconda.ui.common.Spoke.__init__ :param data: data object passed to every spoke to load/store data from/to it :type data: pykickstart.base.BaseHandler :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.) :type storage: blivet.Blivet :param payload: object storing packaging-related information :type payload: pyanaconda.packaging.Payload :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass """ NormalSpoke.__init__(self, data, storage, payload, instclass) def initialize(self): """ The initialize method that is called after the instance is created. The difference between __init__ and this method is that this may take a long time and thus could be called in a separated thread. :see: pyanaconda.ui.common.UIObject.initialize """ NormalSpoke.initialize(self) self._entry = self.builder.get_object("textEntry") Note the data parameter passed to the __init__ method. This is the in-memory tree-like representation of the Kickstart file where all data is stored. In one of the ancestors' __init__ methods it is stored in the self.data attribute, which allows all other methods in the class to read and modify the structure. Because the HelloWorldData class has already been defined in Section 5.6.1, "Kickstart Support" , there already is a subtree in self.data for this add-on, and its root (an instance of the class) is available as self.data.addons.com_example_hello_world . One of the other things an ancestor's __init__ does is initializing an instance of the GtkBuilder with the spoke's .glade file and storing it as self.builder . This is used in the initialize method to get the Gtk entry widget ( textEntry ) used to show and modify the text from the kickstart file's %addon section. The __init__ and initialize methods are both important when the spoke is created. However, the main role of the spoke is to be visited by a user who wants to change or review the values this spoke shows and sets. To enable this, three other methods are available: refresh - called when the spoke is about to be visited; this method refreshes the state of the spoke (mainly its UI elements) to make sure that current values stored in the self.data structure are displayed; apply - called when the spoke is left and used to store values from UI elements back into the self.data structure; execute - called when the spoke is left and used to perform any runtime changes based on the new state of the spoke. These functions are implemented in the sample Hello World add-on in the following way: Example 10. 
Defining the refresh, apply and execute Methods def refresh(self): """ The refresh method that is called every time the spoke is displayed. It should update the UI elements according to the contents of self.data. :see: pyanaconda.ui.common.UIObject.refresh """ self._entry.set_text(self.data.addons.org_fedora_hello_world.text) def apply(self): """ The apply method that is called when the spoke is left. It should update the contents of self.data with values set in the GUI elements. """ self.data.addons.org_fedora_hello_world.text = self._entry.get_text() def execute(self): """ The excecute method that is called when the spoke is left. It is supposed to do all changes to the runtime environment according to the values set in the GUI elements. """ # nothing to do here pass You can use several additional methods to control the spoke's state: ready - determines whether the spoke is ready to be visited; if the value is false, the spoke is not accessible (e.g. the Package Selection spoke before a package source is configured) completed - determines if the spoke has been completed mandatory - determines if the spoke is mandatory or not (e.g. the Installation Destination spoke, which must be always visited, even if you want to use automatic partitioning) All of these attributes need to be dynamically determined based on the current state of the installation process. Below is a sample implementation of these methods in the Hello World add-on, which requires some value to be set in the text attribute of the HelloWorldData class: Example 11. Defining the ready, completed and mandatory Methods @property def ready(self): """ The ready property that tells whether the spoke is ready (can be visited) or not. The spoke is made (in)sensitive based on the returned value. :rtype: bool """ # this spoke is always ready return True @property def completed(self): """ The completed property that tells whether all mandatory items on the spoke are set, or not. The spoke will be marked on the hub as completed or uncompleted acording to the returned value. :rtype: bool """ return bool(self.data.addons.org_fedora_hello_world.text) @property def mandatory(self): """ The mandatory property that tells whether the spoke is mandatory to be completed to continue in the installation process. :rtype: bool """ # this is an optional spoke that is not mandatory to be completed return False After defining these properties, the spoke can control its accessibility and completeness, but it cannot provide a summary of the values configured within - you must visit the spoke to see how it is configured, which may not be desired. For this reason, an additional property called status exists; this property contains a single line of text with a short summary of configured values, which can then be displayed in the hub under the spoke title. The status property is defined in the Hello World example add-on as follows: Example 12. Defining the status Property @property def status(self): """ The status property that is a brief string describing the state of the spoke. It should describe whether all values are set and if possible also the values themselves. The returned value will appear on the hub below the spoke's title. 
:rtype: str """ text = self.data.addons.org_fedora_hello_world.text # If --reverse was specified in the kickstart, reverse the text if self.data.addons.org_fedora_hello_world.reverse: text = text[::-1] if text: return _("Text set: %s") % text else: return _("Text not set") After defining all properties described in this chapter, the add-on has full support for the graphical user interface as well as Kickstart. Note that the example demonstrated here is very simple and does not contain any controls; knowledge of Python Gtk programming is required to develop a functional, interactive spoke in the GUI. One notable restriction is that each spoke must have its own main window - an instance of the SpokeWindow widget. This widget, along with some other widgets specific to Anaconda , is found in the anaconda-widgets package. Other files required for development of add-ons with GUI support (such as Glade definitions) can be found in the anaconda-widgets-devel package. Once your graphical interface support module contains all necessary methods, you can continue with the following section to add support for the text-based user interface, or you can continue with Section 5.7, "Deploying and testing an Anaconda add-on" and test the add-on. 5.6.2.2. Advanced features The pyanaconda package contains several helper and utility functions and constructs which may be used by hubs and spokes and which have not been covered in the previous section. Most of them are located in pyanaconda.ui.gui.utils . The sample Hello World add-on demonstrates usage of the enlightbox context manager which is also used in Anaconda . This manager can put a window into a lightbox to increase its visibility and focus it and to prevent users from interacting with the underlying window. To demonstrate this function, the sample add-on contains a button which opens a new dialog window; the dialog itself is a special HelloWorldDialog inheriting from the GUIObject class, which is defined in pyanaconda.ui.gui.__init__ . The dialog class defines the run method which runs and destroys an internal Gtk dialog accessible through the self.window attribute, which is populated using a mainWidgetName class attribute with the same meaning. Therefore, the code defining the dialog is very simple, as demonstrated in the following example: Example 13. Defining an enlightbox Dialog # every GUIObject gets ksdata in __init__ dialog = HelloWorldDialog(self.data) # show dialog above the lightbox with enlightbox(self.window, dialog.window): dialog.run() The code above creates an instance of the dialog and then uses the enlightbox context manager to run the dialog within a lightbox. The context manager needs a reference to the window of the spoke and to the dialog's window to instantiate the lightbox for them. Another useful feature provided by Anaconda is the ability to define a spoke which will appear both during the installation and after the first reboot (in the Initial Setup utility described in Section 5.1.2, "Firstboot and Initial Setup" ). To make a spoke available in both Anaconda and Initial Setup , you must inherit the special FirstbootSpokeMixIn class (or, more precisely, mixin), defined in the pyanaconda.ui.common module, as the first inherited class. If you want to make a certain spoke available only in Initial Setup , you should instead inherit the FirstbootOnlySpokeMixIn class. There are many more advanced features provided by the pyanaconda package (like the @gtk_action_wait and @gtk_action_nowait decorators), but they are out of the scope of this guide. 
Readers are recommended to go through the installer's sources for examples. 5.6.3. Text User Interface In addition to Kickstart and the GUI, which have been discussed in the previous sections, Anaconda also supports a third interface: a text-based interface. This interface is more limited in its capabilities, but on some systems it may be the only choice for an interactive installation. For more information about differences between the text-based and graphical interface and about limitations of the TUI, see Section 5.1.1, "Introduction to Anaconda" . To add support for the text interface into your add-on, create a new set of subpackages under the tui directory as described in Section 5.5, "Anaconda Add-on Structure" . Text mode support in the installer is based on the simpleline utility, which only allows very simple user interaction. It does not support cursor movement (instead acting like a line printer) or any visual enhancements like using different colors or fonts. Internally, there are three main classes in the simpleline toolkit: App , UIScreen and Widget . Widgets, which are units containing information to be shown (printed) on the screen, are placed on UIScreens which are switched by a single instance of the App class. On top of the basic elements, there are hubs , spokes and dialogs , all containing various widgets in a way similar to the graphical interface. For an add-on, the most important classes are NormalTUISpoke and various other classes defined in the pyanaconda.ui.tui.spokes package. All of those classes are based on the TUIObject class, which itself is an equivalent of the GUIObject class discussed earlier in this chapter. Each TUI spoke is a Python class inheriting from the NormalTUISpoke class, overriding special arguments and methods defined by the API. Because the text interface is simpler than the GUI, there are only two such arguments: title - determines the title of the spoke, same as the title argument in the GUI; category - determines the category of the spoke as a string; the category name is not displayed anywhere, it is only used for grouping. Note Categories are handled differently than in the GUI. [5] It is recommended to assign a pre-existing category to your new spoke. Creating a new category would require patching Anaconda , and brings little benefit. Each spoke is also expected to override several methods, namely __init__ , initialize , refresh , apply , execute , input , and prompt , and several properties ( ready , completed , mandatory , and status ). All of these have already been described in Section 5.6.2, "Graphical user interface" . The example below shows the implementation of a simple TUI spoke in the Hello World sample add-on: Example 14. Defining a Simple TUI Spoke def __init__(self, app, data, storage, payload, instclass): """ :see: pyanaconda.ui.tui.base.UIScreen :see: pyanaconda.ui.tui.base.App :param app: reference to application which is a main class for TUI screen handling, it is responsible for mainloop control and keeping track of the stack where all TUI screens are scheduled :type app: instance of pyanaconda.ui.tui.base.App :param data: data object passed to every spoke to load/store data from/to it :type data: pykickstart.base.BaseHandler :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.) 
:type storage: blivet.Blivet :param payload: object storing packaging-related information :type payload: pyanaconda.packaging.Payload :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass """ NormalTUISpoke.__init__(self, app, data, storage, payload, instclass) self._entered_text = "" def initialize(self): """ The initialize method that is called after the instance is created. The difference between __init__ and this method is that this may take a long time and thus could be called in a separated thread. :see: pyanaconda.ui.common.UIObject.initialize """ NormalTUISpoke.initialize(self) def refresh(self, args=None): """ The refresh method that is called every time the spoke is displayed. It should update the UI elements according to the contents of self.data. :see: pyanaconda.ui.common.UIObject.refresh :see: pyanaconda.ui.tui.base.UIScreen.refresh :param args: optional argument that may be used when the screen is scheduled (passed to App.switch_screen* methods) :type args: anything :return: whether this screen requests input or not :rtype: bool """ self._entered_text = self.data.addons.org_fedora_hello_world.text return True def apply(self): """ The apply method that is called when the spoke is left. It should update the contents of self.data with values set in the spoke. """ self.data.addons.org_fedora_hello_world.text = self._entered_text def execute(self): """ The excecute method that is called when the spoke is left. It is supposed to do all changes to the runtime environment according to the values set in the spoke. """ # nothing to do here pass def input(self, args, key): """ The input method that is called by the main loop on user's input. :param args: optional argument that may be used when the screen is scheduled (passed to App.switch_screen* methods) :type args: anything :param key: user's input :type key: unicode :return: if the input should not be handled here, return it, otherwise return True or False if the input was processed succesfully or not respectively :rtype: bool|unicode """ if key: self._entered_text = key # no other actions scheduled, apply changes self.apply() # close the current screen (remove it from the stack) self.close() return True def prompt(self, args=None): """ The prompt method that is called by the main loop to get the prompt for this screen. :param args: optional argument that can be passed to App.switch_screen* methods :type args: anything :return: text that should be used in the prompt for the input :rtype: unicode|None """ return _("Enter a new text or leave empty to use the old one: ") It is not necessary to override the __init__ method if it only calls the ancestor's __init__ , but the comments in the example describe the arguments passed to constructors of spoke classes in an understandable way. The initialize method sets up a default value for the internal attribute of the spoke, which is then updated by the refresh method and used by the apply method to update Kickstart data. The only differences in these two methods from their equivalents in the GUI is the return type of the refresh method ( bool instead of None ) and an additional args argument they take. The meaning of the returned value is explained in the comments - it tells the application (the App class instance) whether this spoke requires user input or not. The additional args argument is used for passing extra information to the spoke when scheduled. 
The execute method has the same purpose as the equivalent method in the GUI; in this case, the method does nothing. Methods input and prompt are specific to the text interface; there are no equivalents in Kickstart or GUI. These two methods are responsible for user interaction. The prompt method should return a prompt which will be displayed after the content of the spoke is printed. After a string is entered in reaction to the prompt, this string is passed to the input method for processing. The input method then processes the entered string and takes action depending on its type and value. The above example asks for any value and then stores it as an internal attribute ( key ). In more complicated add-ons, you typically need to perform some non-trivial actions, such as parsing c as "continue" or r as "refresh", converting numbers into integers, showing additional screens or toggling boolean values. The return value of the input method must be either the INPUT_PROCESSED or INPUT_DISCARDED constant (both of these are defined in the pyanaconda.constants_text module), or the input string itself (in case this input should be processed by a different screen). In contrast to the graphical mode, the apply method is not called automatically when leaving the spoke; it must be called explicitly from the input method. The same applies to closing (hiding) the spoke's screen, which is done by calling the close method. To show another screen (for example, if you need additional information which was entered in a different spoke), you can instantiate another TUIObject and call one of the self.app.switch_screen* methods of the App . Due to restrictions of the text-based interface, TUI spokes tend to have a very similar structure: a list of checkboxes or entries which should be checked or unchecked and populated by the user. The previous paragraphs show a way to implement a TUI spoke where its methods handle printing and processing of the available and provided data. However, there is a different way to accomplish this using the EditTUISpoke class from the pyanaconda.ui.tui.spokes package. By inheriting this class, you can implement a typical TUI spoke by only specifying fields and attributes which should be set in it. The example below demonstrates this: Example 15. Using EditTUISpoke to Define a Text Interface Spoke class _EditData(object): """Auxiliary class for storing data from the example EditSpoke""" def __init__(self): """Trivial constructor just defining the fields that will store data""" self.checked = False self.shown_input = "" self.hidden_input = "" class HelloWorldEditSpoke(EditTUISpoke): """Example class demonstrating usage of EditTUISpoke inheritance""" title = _("Hello World Edit") category = "localization" # simple RE used to specify we only accept a single word as a valid input _valid_input = re.compile(r'\w+') # special class attribute defining spoke's entries as: # Entry(TITLE, ATTRIBUTE, CHECKING_RE or TYPE, SHOW_FUNC or SHOW) # where: # TITLE specifies descriptive title of the entry # ATTRIBUTE specifies attribute of self.args that should be set to the # value entered by the user (may contain dots, i.e. 
may specify # a deep attribute) # CHECKING_RE specifies compiled RE used for deciding about # accepting/rejecting user's input # TYPE may be one of EditTUISpoke.CHECK or EditTUISpoke.PASSWORD used # instead of CHECKING_RE for simple checkboxes or password entries, # respectively # SHOW_FUNC is a function taking self and self.args and returning True or # False indicating whether the entry should be shown or not # SHOW is a boolean value that may be used instead of the SHOW_FUNC # # :see: pyanaconda.ui.tui.spokes.EditTUISpoke edit_fields = [ Entry("Simple checkbox", "checked", EditTUISpoke.CHECK, True), Entry("Always shown input", "shown_input", _valid_input, True), Entry("Conditioned input", "hidden_input", _valid_input, lambda self, args: bool(args.shown_input)), ] def __init__(self, app, data, storage, payload, instclass): EditTUISpoke.__init__(self, app, data, storage, payload, instclass) # just populate the self.args attribute to have a store for data # typically self.data or a subtree of self.data is used as self.args self.args = _EditData() @property def completed(self): # completed if user entered something non-empty to the Conditioned input return bool(self.args.hidden_input) @property def status(self): return "Hidden input %s" % ("entered" if self.args.hidden_input else "not entered") def apply(self): # nothing needed here, values are set in the self.args tree pass The auxiliary class _EditData serves as a data container which is used to store values entered by the user. The HelloWorldEditSpoke class defines a simple spoke with one checkbox and two entries, all of which are instances of the EditTUISpokeEntry class (imported as the Entry class). The first input is shown every time the spoke is displayed; the second input is only shown if the first one contains a non-empty value. For more information about the EditTUISpoke class, see the comments in the above example. 5.7. Deploying and testing an Anaconda add-on To test a new add-on, you must load it into the installation environment. Add-ons are collected from the /usr/share/anaconda/addons/ directory in the installation runtime environment; to add your own add-on into that directory, you must create a product.img file with the same directory structure and place it on your boot media. For specific instructions on unpacking an existing boot image, creating a product.img file and repackaging the image, see Section 2, "Working with ISO Images" . [1] While Firstboot is a legacy tool, it is still supported because of third-party modules written for it. [2] In Fedora, the add-on is disabled by default. You can enable it using the inst.kdump_addon=on option in the boot menu. [3] The gui package may also contain a categories subpackage if the add-on needs to define a new category, but this is not recommended. [4] an instance of the SpokeWindow widget which is a custom widget created for the Anaconda installer [5] which is likely to change in the future to stick to the better (GUI) way
[ "com_example_hello_world ├─ ks │ └─ __init__.py ├─ gui │ ├─ __init__.py │ └─ spokes │ └─ __init__.py └─ tui ├─ __init__.py └─ spokes └─ __init__.py", "%addon ADDON_NAME [arguments] first line second line %end", "from pyanaconda.addons import AddonData from pykickstart.options import KSOptionParser export HelloWorldData class to prevent Anaconda's collect method from taking AddonData class instead of the HelloWorldData class :see: pyanaconda.kickstart.AnacondaKSHandler.__init__ __all__ = [\"HelloWorldData\"] HELLO_FILE_PATH = \"/root/hello_world_addon_output.txt\" class HelloWorldData(AddonData): \"\"\" Class parsing and storing data for the Hello world addon. :see: pyanaconda.addons.AddonData \"\"\" def __init__(self, name): \"\"\" :param name: name of the addon :type name: str \"\"\" AddonData.__init__(self, name) self.text = \"\" self.reverse = False def handle_header(self, lineno, args): \"\"\" The handle_header method is called to parse additional arguments in the %addon section line. :param lineno: the current linenumber in the kickstart file :type lineno: int :param args: any additional arguments after %addon <name> :type args: list \"\"\" op = KSOptionParser() op.add_option(\"--reverse\", action=\"store_true\", default=False, dest=\"reverse\", help=\"Reverse the display of the addon text\") (opts, extra) = op.parse_args(args=args, lineno=lineno) # Reject any additoinal arguments. Since AddonData.handle_header # rejects any arguments, we can use it to create an error message # and raise an exception. if extra: AddonData.handle_header(self, lineno, extra) # Store the result of the option parsing self.reverse = opts.reverse def handle_line(self, line): \"\"\" The handle_line method that is called with every line from this addon's %addon section of the kickstart file. :param line: a single line from the %addon section :type line: str \"\"\" # simple example, we just append lines to the text attribute if self.text is \"\": self.text = line.strip() else: self.text += \" \" + line.strip()", "import os.path from pyanaconda.addons import AddonData from pyanaconda.constants import ROOT_PATH HELLO_FILE_PATH = \"/root/hello_world_addon_output.txt\"", "def setup(self, storage, ksdata, instclass, payload): \"\"\" The setup method that should make changes to the runtime environment according to the data stored in this object. :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.) :type storage: blivet.Blivet instance :param ksdata: data parsed from the kickstart file and set in the installation process :type ksdata: pykickstart.base.BaseHandler instance :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass :param payload: object managing packages and environment groups for the installation :type payload: any class inherited from the pyanaconda.packaging.Payload class \"\"\" # no actions needed in this addon pass def execute(self, storage, ksdata, instclass, users, payload): \"\"\" The execute method that should make changes to the installed system. It is called only once in the post-install setup phase. :see: setup :param users: information about created users :type users: pyanaconda.users.Users instance \"\"\" hello_file_path = os.path.normpath(ROOT_PATH + HELLO_FILE_PATH) with open(hello_file_path, \"w\") as fobj: fobj.write(\"%s\\n\" % self.text)", "def __str__(self): \"\"\" What should end up in the resulting kickstart file, i.e. 
the %addon section containing string representation of the stored data. \"\"\" addon_str = \"%%addon %s\" % self.name if self.reverse: addon_str += \"--reverse\" addon_str += \"\\n%s\\n%%end\" % self.text return addon_str", "will never be translated _ = lambda x: x N_ = lambda x: x the path to addons is in sys.path so we can import things from org_fedora_hello_world from org_fedora_hello_world.gui.categories.hello_world import HelloWorldCategory from pyanaconda.ui.gui.spokes import NormalSpoke export only the spoke, no helper functions, classes or constants __all__ = [\"HelloWorldSpoke\"] class HelloWorldSpoke(NormalSpoke): \"\"\" Class for the Hello world spoke. This spoke will be in the Hello world category and thus on the Summary hub. It is a very simple example of a unit for the Anaconda's graphical user interface. :see: pyanaconda.ui.common.UIObject :see: pyanaconda.ui.common.Spoke :see: pyanaconda.ui.gui.GUIObject \"\"\" ### class attributes defined by API ### # list all top-level objects from the .glade file that should be exposed # to the spoke or leave empty to extract everything builderObjects = [\"helloWorldSpokeWindow\", \"buttonImage\"] # the name of the main window widget mainWidgetName = \"helloWorldSpokeWindow\" # name of the .glade file in the same directory as this source uiFile = \"hello_world.glade\" # category this spoke belongs to category = HelloWorldCategory # spoke icon (will be displayed on the hub) # preferred are the -symbolic icons as these are used in Anaconda's spokes icon = \"face-cool-symbolic\" # title of the spoke (will be displayed on the hub) title = N_(\"_HELLO WORLD\")", "def __init__(self, data, storage, payload, instclass): \"\"\" :see: pyanaconda.ui.common.Spoke.__init__ :param data: data object passed to every spoke to load/store data from/to it :type data: pykickstart.base.BaseHandler :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.) :type storage: blivet.Blivet :param payload: object storing packaging-related information :type payload: pyanaconda.packaging.Payload :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass \"\"\" NormalSpoke.__init__(self, data, storage, payload, instclass) def initialize(self): \"\"\" The initialize method that is called after the instance is created. The difference between __init__ and this method is that this may take a long time and thus could be called in a separated thread. :see: pyanaconda.ui.common.UIObject.initialize \"\"\" NormalSpoke.initialize(self) self._entry = self.builder.get_object(\"textEntry\")", "def refresh(self): \"\"\" The refresh method that is called every time the spoke is displayed. It should update the UI elements according to the contents of self.data. :see: pyanaconda.ui.common.UIObject.refresh \"\"\" self._entry.set_text(self.data.addons.org_fedora_hello_world.text) def apply(self): \"\"\" The apply method that is called when the spoke is left. It should update the contents of self.data with values set in the GUI elements. \"\"\" self.data.addons.org_fedora_hello_world.text = self._entry.get_text() def execute(self): \"\"\" The excecute method that is called when the spoke is left. It is supposed to do all changes to the runtime environment according to the values set in the GUI elements. \"\"\" # nothing to do here pass", "@property def ready(self): \"\"\" The ready property that tells whether the spoke is ready (can be visited) or not. 
The spoke is made (in)sensitive based on the returned value. :rtype: bool \"\"\" # this spoke is always ready return True @property def completed(self): \"\"\" The completed property that tells whether all mandatory items on the spoke are set, or not. The spoke will be marked on the hub as completed or uncompleted acording to the returned value. :rtype: bool \"\"\" return bool(self.data.addons.org_fedora_hello_world.text) @property def mandatory(self): \"\"\" The mandatory property that tells whether the spoke is mandatory to be completed to continue in the installation process. :rtype: bool \"\"\" # this is an optional spoke that is not mandatory to be completed return False", "@property def status(self): \"\"\" The status property that is a brief string describing the state of the spoke. It should describe whether all values are set and if possible also the values themselves. The returned value will appear on the hub below the spoke's title. :rtype: str \"\"\" text = self.data.addons.org_fedora_hello_world.text # If --reverse was specified in the kickstart, reverse the text if self.data.addons.org_fedora_hello_world.reverse: text = text[::-1] if text: return _(\"Text set: %s\") % text else: return _(\"Text not set\")", "every GUIObject gets ksdata in __init__ dialog = HelloWorldDialog(self.data) show dialog above the lightbox with enlightbox(self.window, dialog.window): dialog.run()", "def __init__(self, app, data, storage, payload, instclass): \"\"\" :see: pyanaconda.ui.tui.base.UIScreen :see: pyanaconda.ui.tui.base.App :param app: reference to application which is a main class for TUI screen handling, it is responsible for mainloop control and keeping track of the stack where all TUI screens are scheduled :type app: instance of pyanaconda.ui.tui.base.App :param data: data object passed to every spoke to load/store data from/to it :type data: pykickstart.base.BaseHandler :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.) :type storage: blivet.Blivet :param payload: object storing packaging-related information :type payload: pyanaconda.packaging.Payload :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass \"\"\" NormalTUISpoke.__init__(self, app, data, storage, payload, instclass) self._entered_text = \"\" def initialize(self): \"\"\" The initialize method that is called after the instance is created. The difference between __init__ and this method is that this may take a long time and thus could be called in a separated thread. :see: pyanaconda.ui.common.UIObject.initialize \"\"\" NormalTUISpoke.initialize(self) def refresh(self, args=None): \"\"\" The refresh method that is called every time the spoke is displayed. It should update the UI elements according to the contents of self.data. :see: pyanaconda.ui.common.UIObject.refresh :see: pyanaconda.ui.tui.base.UIScreen.refresh :param args: optional argument that may be used when the screen is scheduled (passed to App.switch_screen* methods) :type args: anything :return: whether this screen requests input or not :rtype: bool \"\"\" self._entered_text = self.data.addons.org_fedora_hello_world.text return True def apply(self): \"\"\" The apply method that is called when the spoke is left. It should update the contents of self.data with values set in the spoke. \"\"\" self.data.addons.org_fedora_hello_world.text = self._entered_text def execute(self): \"\"\" The excecute method that is called when the spoke is left. 
It is supposed to do all changes to the runtime environment according to the values set in the spoke. \"\"\" # nothing to do here pass def input(self, args, key): \"\"\" The input method that is called by the main loop on user's input. :param args: optional argument that may be used when the screen is scheduled (passed to App.switch_screen* methods) :type args: anything :param key: user's input :type key: unicode :return: if the input should not be handled here, return it, otherwise return True or False if the input was processed succesfully or not respectively :rtype: bool|unicode \"\"\" if key: self._entered_text = key # no other actions scheduled, apply changes self.apply() # close the current screen (remove it from the stack) self.close() return True def prompt(self, args=None): \"\"\" The prompt method that is called by the main loop to get the prompt for this screen. :param args: optional argument that can be passed to App.switch_screen* methods :type args: anything :return: text that should be used in the prompt for the input :rtype: unicode|None \"\"\" return _(\"Enter a new text or leave empty to use the old one: \")", "class _EditData(object): \"\"\"Auxiliary class for storing data from the example EditSpoke\"\"\" def __init__(self): \"\"\"Trivial constructor just defining the fields that will store data\"\"\" self.checked = False self.shown_input = \"\" self.hidden_input = \"\" class HelloWorldEditSpoke(EditTUISpoke): \"\"\"Example class demonstrating usage of EditTUISpoke inheritance\"\"\" title = _(\"Hello World Edit\") category = \"localization\" # simple RE used to specify we only accept a single word as a valid input _valid_input = re.compile(r'\\w+') # special class attribute defining spoke's entries as: # Entry(TITLE, ATTRIBUTE, CHECKING_RE or TYPE, SHOW_FUNC or SHOW) # where: # TITLE specifies descriptive title of the entry # ATTRIBUTE specifies attribute of self.args that should be set to the # value entered by the user (may contain dots, i.e. may specify # a deep attribute) # CHECKING_RE specifies compiled RE used for deciding about # accepting/rejecting user's input # TYPE may be one of EditTUISpoke.CHECK or EditTUISpoke.PASSWORD used # instead of CHECKING_RE for simple checkboxes or password entries, # respectively # SHOW_FUNC is a function taking self and self.args and returning True or # False indicating whether the entry should be shown or not # SHOW is a boolean value that may be used instead of the SHOW_FUNC # # :see: pyanaconda.ui.tui.spokes.EditTUISpoke edit_fields = [ Entry(\"Simple checkbox\", \"checked\", EditTUISpoke.CHECK, True), Entry(\"Always shown input\", \"shown_input\", _valid_input, True), Entry(\"Conditioned input\", \"hidden_input\", _valid_input, lambda self, args: bool(args.shown_input)), ] def __init__(self, app, data, storage, payload, instclass): EditTUISpoke.__init__(self, app, data, storage, payload, instclass) # just populate the self.args attribute to have a store for data # typically self.data or a subtree of self.data is used as self.args self.args = _EditData() @property def completed(self): # completed if user entered something non-empty to the Conditioned input return bool(self.args.hidden_input) @property def status(self): return \"Hidden input %s\" % (\"entered\" if self.args.hidden_input else \"not entered\") def apply(self): # nothing needed here, values are set in the self.args tree pass" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/anaconda_customization_guide/sect-anaconda-addon-development
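A minimal standalone sketch (assuming only the Python standard library, with no pyanaconda or pykickstart imports) of the kickstart round-trip that the HelloWorldData class above implements: lines from the %addon section are accumulated by handle_line() and written back out by __str__(). The class name HelloWorldSketch and the sample lines are illustrative; only the addon name and --reverse option come from the example above.

#!/usr/bin/env python3
"""Illustrative round-trip for the hello-world %addon section (no pyanaconda needed)."""

class HelloWorldSketch:
    def __init__(self, name):
        self.name = name          # addon name, e.g. org_fedora_hello_world
        self.text = ""
        self.reverse = False

    def handle_line(self, line):
        # mirrors handle_line from the example: accumulate stripped lines
        self.text = line.strip() if self.text == "" else self.text + " " + line.strip()

    def __str__(self):
        # mirrors __str__ from the example: write the %addon section back out
        addon_str = "%%addon %s" % self.name
        if self.reverse:
            addon_str += " --reverse"
        addon_str += "\n%s\n%%end" % self.text
        return addon_str

data = HelloWorldSketch("org_fedora_hello_world")
data.reverse = True
for line in ["Hello", "installer!"]:
    data.handle_line(line)
print(data)
# %addon org_fedora_hello_world --reverse
# Hello installer!
# %end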
probe::vm.kmem_cache_free
probe::vm.kmem_cache_free Name probe::vm.kmem_cache_free - Fires when kmem_cache_free is requested Synopsis vm.kmem_cache_free Values caller_function Name of the caller function. call_site Address of the function calling this kmemory function. ptr Pointer to the kmemory allocated, which is returned by kmem_cache. name Name of the probe point.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-vm-kmem-cache-free
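A small, hedged example of how this probe point might be used: the sketch below asks SystemTap to print the documented variables (name, ptr, caller_function) for ten seconds. It assumes the systemtap package and matching kernel debuginfo are installed and that it runs as root; the Python wrapper and the timer interval are illustrative, not part of the tapset.

import subprocess

# Illustrative SystemTap one-liner built around probe::vm.kmem_cache_free.
# name, ptr, and caller_function are the variables documented above; the
# timer.s(10) probe simply stops the trace after ten seconds.
STP_SCRIPT = (
    'probe vm.kmem_cache_free { '
    'printf("%s: ptr=%p caller=%s\\n", name, ptr, caller_function) '
    '} '
    'probe timer.s(10) { exit() }'
)

subprocess.run(["stap", "-e", STP_SCRIPT], check=True)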
24.6. Using Certificate Profiles and ACLs to Issue User Certificates with the IdM CAs
24.6. Using Certificate Profiles and ACLs to Issue User Certificates with the IdM CAs Users can request certificates for themselves when permitted by the Certificate Authority access control lists (CA ACLs). The following procedures use certificate profiles and CA ACLs, which are described separately in Section 24.4, "Certificate Profiles" and Section 24.5, "Certificate Authority ACL Rules" . For more details about using certificate profiles and CA ACLs, see these sections. Issuing Certificates to Users from the Command Line Create or import a new custom certificate profile for handling requests for user certificates. For example: Add a new Certificate Authority (CA) ACL that will be used to permit requesting certificates for user entries. For example: Add the custom certificate profile to the CA ACL. Generate a certificate request for the user. For example, using OpenSSL: Run the ipa cert-request command to have the IdM CA issue a new certificate for the user. Optionally pass the --ca sub-CA_name option to the command to request the certificate from a sub-CA instead of the root CA ipa . To make sure the newly-issued certificate is assigned to the user, you can use the ipa user-show command: Issuing Certificates to Users in the Web UI Create or import a new custom certificate profile for handling requests for user certificates. Importing profiles is only possible from the command line, for example: For information about certificate profiles, see Section 24.4, "Certificate Profiles" . In the web UI, under the Authentication tab, open the CA ACLs section. Figure 24.11. CA ACL Rules Management in the Web UI Click Add at the top of the list of Certificate Authority (CA) ACLs to add a new CA ACL that permits requesting certificates for user entries. In the Add CA ACL window that opens, fill in the required information about the new CA ACL. Figure 24.12. Adding a New CA ACL Then, click Add and Edit to go directly to the CA ACL configuration page. In the CA ACL configuration page, scroll to the Profiles section and click Add at the top of the profiles list. Figure 24.13. Adding a Certificate Profile to the CA ACL Add the custom certificate profile to the CA ACL by selecting the profile and moving it to the Prospective column. Figure 24.14. Selecting a Certificate Profile Then, click Add . Scroll to the Permitted to have certificates issued section to associate the CA ACL with users or user groups. You can either add users or groups using the Add buttons, or select the Anyone option to associate the CA ACL with all users. Figure 24.15. Adding Users to the CA ACL In the Permitted to have certificates issued section, you can associate the CA ACL with one or more CAs. You can either add CAs using the Add button, or select the Any CA option to associate the CA ACL with all CAs. Figure 24.16. Adding CAs to the CA ACL At the top of the CA ACL configuration page, click Save to confirm the changes to the CA ACL. Request a new certificate for the user. Under the Identity tab and the Users subtab, choose the user for whom the certificate will be requested. Click on the user's user name to open the user entry configuration page. Click Actions at the top of the user configuration page, and then click New Certificate . Figure 24.17. Requesting a Certificate for a User Fill in the required information. Figure 24.18. Issuing a Certificate for a User Then, click Issue . After this, the newly issued certificate is visible in the user configuration page.
[ "ipa certprofile-import certificate_profile --file= certificate_profile.cfg --store=True", "ipa caacl-add users_certificate_profile --usercat=all", "ipa caacl-add-profile users_certificate_profile --certprofiles= certificate_profile", "openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout private.key -out cert.csr -subj '/CN= user '", "ipa cert-request cert.csr --principal= user --profile-id= certificate_profile", "ipa user-show user User login: user Certificate: MIICfzCCAWcCAQA", "ipa certprofile-import certificate_profile --file= certificate_profile.txt --store=True" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/issue-user-certificates
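The procedure above generates the certificate request with openssl; as a hedged alternative, the sketch below produces an equivalent 2048-bit RSA key and CSR with CN=user in Python. It assumes the third-party cryptography package is available; the file names private.key and cert.csr match the openssl example, and the resulting CSR is then submitted with ipa cert-request exactly as shown above.

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# 2048-bit RSA key, matching "openssl req -new -newkey rsa:2048 ... -subj '/CN=user'"
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "user")]))
    .sign(key, hashes.SHA256())
)

with open("private.key", "wb") as key_file:
    key_file.write(
        key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        )
    )
with open("cert.csr", "wb") as csr_file:
    csr_file.write(csr.public_bytes(serialization.Encoding.PEM))
# Submit with: ipa cert-request cert.csr --principal=user --profile-id=certificate_profile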
19.6. Replication
19.6. Replication If a system is configured for two-way, active-active replication, write throughput will generally be half of what it would be in a non-replicated configuration. However, read throughput is generally improved by replication, as reads can be delivered from either storage node.
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/replication1
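A back-of-the-envelope sketch of the throughput behaviour described above, purely illustrative: with two-way, active-active replication every write is sent to both nodes, so usable write bandwidth is roughly halved, while reads can be served by either node. The function name and the 200 MB/s baseline are assumptions for the example, not measured figures.

def expected_write_throughput(baseline_mb_s, replica_count):
    # each write is sent to every replica, so usable write bandwidth divides
    return baseline_mb_s / replica_count

BASELINE = 200  # MB/s in a non-replicated configuration (illustrative figure)
print(expected_write_throughput(BASELINE, 2))  # -> 100.0, i.e. roughly half
# Reads are not penalized the same way: either replica can serve them,
# so read throughput generally improves rather than degrades.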
Chapter 5. Composing a RHEL for Edge image using image builder command-line
Chapter 5. Composing a RHEL for Edge image using image builder command-line You can use image builder to create a customized RHEL for Edge image (OSTree commit). To access image builder and to create your custom RHEL for Edge image, you can either use the RHEL web console interface or the command line. For Network-based deployments, the workflow to compose RHEL for Edge images by using the CLI, involves the following high-level steps: Create a blueprint for RHEL for Edge image Create a RHEL for Edge Commit image Download the RHEL for Edge Commit image For Non-Network-based deployments, the workflow to compose RHEL for Edge images by using the CLI, involves the following high-level steps: Create a blueprint for RHEL for Edge image Create a RHEL for Edge image. You can create the following images: RHEL for Edge Commit image. RHEL for Edge Container image. RHEL for Edge Installer image. Download the RHEL for Edge image To perform the steps, use the composer-cli package. Note To run the composer-cli commands as non-root, you must be part of the weldr group or you must have administrator access to the system. 5.1. Creating images for network-based deployments This provides steps on how to build OSTree commits. These OSTree commits contain a full operating system, but are not directly bootable. To boot them, you need to deploy them using a Kickstart file. 5.1.1. Creating a blueprint for the commit image by using image builder CLI Create a blueprint for RHEL for Edge Commit image by using the CLI. Prerequisite You do not have an existing blueprint. To verify that, list the existing blueprints: Procedure Create a plain text file in the TOML format, with the following content: Where, blueprint-name is the name and blueprint-text-description is the description for your blueprint. 0.0.1 is the version number according to the Semantic Versioning scheme. Modules describe the package name and matching version glob to be installed into the image, for example, the package name = "tmux" and the matching version glob is version = "2.9a". Notice that currently there are no differences between packages and modules. Groups are packages groups to be installed into the image, for example the group package anaconda-tools. At this time, if you do not know the modules and groups, leave them empty. Include the required packages and customize the other details in the blueprint to suit your requirements. For every package that you want to include in the blueprint, add the following lines to the file: Where, package-name is the name of the package, such as httpd, gdb-doc, or coreutils. package-version is the version number of the package that you want to use. The package-version supports the following DNF version specifications: For a specific version, use the exact version number such as 9.0. For the latest available version, use the asterisk *. For the latest minor version, use formats such as 9.*. Push (import) the blueprint to the RHEL image builder server: List the existing blueprints to check whether the created blueprint is successfully pushed and exists. Check whether the components and versions listed in the blueprint and their dependencies are valid: Additional resources Supported Image Customizations 5.1.2. Creating a RHEL for Edge Commit image by using image builder CLI To create a RHEL for Edge Commit image by using RHEL image builder command-line interface, ensure that you have met the following prerequisites and follow the procedure. 
Prerequisites You have created a blueprint for RHEL for Edge Commit image. Procedure Create the RHEL for Edge Commit image. Where, blueprint-name is the RHEL for Edge blueprint name. image-type is edge-commit for network-based deployment . A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks. Check the image compose status. The output displays the status in the following format: Note The image creation process takes up to 20 minutes to complete. To interrupt the image creation process, run: To delete an existing image, run: After the image is ready, you can download it and use the image on your network deployments . Additional resources Composing a RHEL for Edge image using RHEL image builder command-line 5.1.3. Creating an image update with a ref commit by using RHEL image builder CLI If you performed a change in an existing blueprint, for example, you added a new package, and you want to update an existing RHEL for Edge image with this new package, you can use the --parent argument to generate an updated RHEL for Edge Commit (.tar) image. The --parent argument can be either a ref that exists in the repository specified by the URL argument, or you can use the Commit ID , which you can find in the extracted .tar image file. Both the ref and Commit ID arguments retrieve a parent for the new commit that you are building. RHEL image builder read information from the parent commit user database that affects parts of the new commit that you are building, but preserves UIDs and GIDs for the package-created system users and groups. Prerequisites You have updated an existing blueprint for RHEL for Edge image. You have an existing RHEL for Edge image (OSTree commit). See Extracting RHEL for Edge image commit . You made the ref that you are going to build available at the OSTree repository specified by the URL. Procedure Create the RHEL for Edge commit image: For example: To create a new RHEL for Edge commit based on a parent and with a new ref , run the following command: To create a new RHEL for Edge commit based on the same ref , run the following command: Where: The --ref argument specifies the same path value that you used to build an OSTree repository. The --parent argument specifies the parent commit. It can be ref to be resolved and pulled, for example rhel/9/x86_64/edge , or the Commit ID that you can find in the extracted .tar file. blueprint-name is the RHEL for Edge blueprint name. The --url argument specifies the URL to the OSTree repository of the commit to embed in the image, for example, http://10.0.2.2:8080/repo. image-type is edge-commit for network-based deployment . Note The --parent argument can only be used for the RHEL for Edge Commit (.tar) image type. Using the --url and --parent arguments together results in errors with the RHEL for Edge Container (.tar) image type. If you omit the parent ref argument, the system falls back to the ref specified by the --ref argument. A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks. Check the image compose status. The output displays the status in the following format: Note The image creation process takes a few minutes to complete. 
(Optional) To interrupt the image creation process, run: (Optional) To delete an existing image, run: steps After the image creation is complete, to upgrade an existing OSTree deployment, you need: Set up a repository. See Deploying a RHEL for Edge image . Add this repository as a remote, that is, the http or https endpoint that hosts the OSTree content. Pull the new OSTree commit onto their existing running instance. See Deploying RHEL for Edge image updates manually . Additional resources Creating a system image with RHEL image builder on the command line Downloading a RHEL for Edge image by using the RHEL image builder command-line interface 5.1.4. Downloading a RHEL for Edge image using the image builder command-line interface To download a RHEL for Edge image by using RHEL image builder command-line interface, ensure that you have met the following prerequisites and then follow the procedure. Prerequisites You have created a RHEL for Edge image. Procedure Review the RHEL for Edge image status. The output must display the following: Download the image. RHEL image builder downloads the image as a tar file to the current directory. The UUID number and the image size is displayed alongside. The image contains a commit and a json file with information metadata about the repository content. Additional resources Deploying a RHEL for Edge image in a network-based environment 5.2. Creating images for non-network-based deployments Build a boot ISO image that installs an OSTree-based system by using the "RHEL for Edge Container" and the "RHEL for Edge Installer" images, and that can be later deployed to a device in disconnected environments. 5.2.1. Creating a RHEL for Edge Container blueprint by using image builder CLI To create a blueprint for RHEL for Edge Container image, perform the following steps: Procedure Create a plain text file in the TOML format, with the following content: Where, blueprint-name is the name and blueprint-text-description is the description for your blueprint. 0.0.1 is the version number according to the Semantic Versioning scheme. Modules describe the package name and matching version glob to be installed into the image, for example, the package name = "tmux" and the matching version glob is version = "2.9a". Notice that currently there are no differences between packages and modules. Groups are packages groups to be installed into the image, for example the group package anaconda-tools. At this time, if you do not know the modules and groups, leave them empty. Include the required packages and customize the other details in the blueprint to suit your requirements. For every package that you want to include in the blueprint, add the following lines to the file: Where, package-name is the name of the package, such as httpd , gdb-doc , or coreutils . package-version is the version number of the package that you want to use. The package-version supports the following dnf version specifications: For a specific version, use the exact version number such as 9.0. For the latest available version, use the asterisk *. For the latest minor version, use formats such as 9.*. Push (import) the blueprint to the RHEL image builder server: List the existing blueprints to check whether the created blueprint is successfully pushed and exists. Check whether the components and versions listed in the blueprint and their dependencies are valid: Additional resources Supported Image Customizations 5.2.2. 
Creating a RHEL for Edge Installer blueprint using image builder CLI You can create a blueprint to build a RHEL for Edge Installer (.iso) image, and specify user accounts to automatically create one or more users on the system at installation time. Warning When you create a user in the blueprint with the customizations.user customization, the blueprint creates the user under the /usr/lib/passwd directory and the password, under the /usr/etc/shadow directory. Note that you cannot change the password in further versions of the image in a running system using OSTree updates. The users you create with blueprints must be used only to gain access to the created system. After you access the system, you need to create users, for example, using the useradd command. To create a blueprint for RHEL for Edge Installer image, perform the following steps: Procedure Create a plain text file in the TOML format, with the following content: Where, blueprint-name is the name and blueprint-text-description is the description for your blueprint. 0.0.1 is the version number according to the Semantic Versioning scheme. Push (import) the blueprint to the RHEL image builder server: List the existing blueprints to check whether the created blueprint is successfully pushed and exists. Check whether the components and versions listed in the blueprint and their dependencies are valid: Additional resources Supported Image Customizations 5.2.3. Creating a RHEL for Edge Container image by using image builder CLI To create a RHEL for Edge Container image by using RHEL image builder command-line interface, ensure that you have met the following prerequisites and follow the procedure. Prerequisites You have created a blueprint for RHEL for Edge Container image. Procedure Create the RHEL for Edge Container image. Where, --ref is the same value that customer used to build OSTree repository --url is the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. By default, the repository folder for a RHEL for Edge Container image is "/repo". See Setting up a web server to install RHEL for Edge image . To find the correct URL to use, access the running container and check the nginx.conf file. To find which URL to use, access the running container and check the nginx.conf file. Inside the nginx.conf file, find the root directory entry to search for the /repo/ folder information. Note that, if you do not specify a repository URL when creating a RHEL for Edge Container image (.tar) by using RHEL image builder, the default /repo/ entry is created in the nginx.conf file. blueprint-name is the RHEL for Edge blueprint name. image-type is edge-container for non-network-based deployment . A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks. Check the image compose status. The output displays the status in the following format: Note The image creation process takes up to 20 minutes to complete. To interrupt the image creation process, run: To delete an existing image, run: After the image is ready, it can be used for non-network deployments . See Creating a RHEL for Edge Container image for non-network-based deployments . Additional resources Composing a RHEL for Edge image using RHEL image builder command-line 5.2.4. 
Creating a RHEL for Edge Installer image by using the command-line interface for non-network-based deployments To create a RHEL for Edge Installer image that embeds the OSTree commit, use the RHEL image builder command-line interface, and ensure that you have met the following prerequisites and then follow the procedure. Prerequisites You have created a blueprint for RHEL for Edge Installer image. You have created a RHEL for Edge Container image and deployed it using a web server. Procedure Begin to create the RHEL for Edge Installer image. Where, ref is the same value that customer used to build the OSTree repository URL-OSTree-repository is the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo. See Creating a RHEL for Edge Container image for non-network-based deployments . blueprint-name is the RHEL for Edge Installer blueprint name. image-type is edge-installer . A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks. Check the image compose status. The command output displays the status in the following format: Note The image creation process takes a few minutes to complete. To interrupt the image creation process, run: To delete an existing image, run: After the image is ready, you can use it for non-network deployments . See Installing the RHEL for Edge image for non-network-based deployments . 5.2.5. Downloading a RHEL for Edge Installer image using the image builder CLI To download a RHEL for Edge Installer image by using RHEL image builder command-line interface, ensure that you have met the following prerequisites and then follow the procedure. Prerequisites You have created a RHEL for Edge Installer image. Procedure Review the RHEL for Edge image status. The output must display the following: Download the image. RHEL image builder downloads the image as an .iso file to the current directory. The UUID number and the image size is displayed alongside. The resulting image is a bootable ISO image. Additional resources Deploying a RHEL for Edge image in a non-network-based environment 5.3. Supported image customizations You can customize your image by adding customizations to your blueprint, such as: Adding an additional RPM package Enabling a service Customizing a kernel command line parameter. Between others. You can use several image customizations within blueprints. By using the customizations, you can add packages and groups to the image that are not available in the default packages. To use these options, configure the customizations in the blueprint and import (push) it to RHEL image builder. Additional resources Blueprint import fails after adding filesystem customization "size" (Red Hat Knowledgebase) 5.3.1. Selecting a distribution You can use the distro field to specify the distribution to use when composing your images or solving dependencies in the blueprint. If the distro field is left blank, the blueprint automatically uses the host's operating system distribution. If you do not specify a distribution, the blueprint uses the host distribution. When you upgrade the host operating system, blueprints without a specified distribution build images by using the upgraded operating system version. You can build images for older major versions on a newer system. 
For example, you can use a RHEL 10 host to create RHEL 9 and RHEL 8 images. However, you cannot build images for newer major versions on an older system. Important You cannot build an operating system image that differs from the RHEL image builder host. For example, you cannot use a RHEL system to build Fedora or CentOS images. Customize the blueprint with the RHEL distribution to always build the specified RHEL image: For example: Replace " different_minor_version " to build a different minor version, for example, if you want to build a RHEL 9.5 image, use distro = "rhel-95". On RHEL 9.3 image, you can build minor versions such as RHEL 9.3, RHEL 9.2, and earlier releases. 5.3.2. Selecting a package group Customize the blueprint with package groups. The groups list describes the groups of packages that you want to install into the image. The package groups are defined in the repository metadata. Each group has a descriptive name that is used primarily for display in user interfaces, and an ID that is commonly used in Kickstart files. In this case, you must use the ID to list a group. Groups have three different ways of categorizing their packages: mandatory, default, and optional. Only mandatory and default packages are installed in the blueprints. It is not possible to select optional packages. The name attribute is a required string and must match exactly the package group id in the repositories. Note Currently, there are no differences between packages and modules in osbuild-composer . Both are treated as an RPM package dependency. Customize your blueprint with a package: Replace group_name with the name of the group. For example, anaconda-tools : 5.3.3. Setting the image hostname The customizations.hostname is an optional string that you can use to configure the final image hostname. This customization is optional, and if you do not set it, the blueprint uses the default hostname. Customize the blueprint to configure the hostname: 5.3.4. Specifying additional users Add a user to the image, and optionally, set their SSH key. All fields for this section are optional except for the name . Procedure Customize the blueprint to add a user to the image: The GID is optional and must already exist in the image. Optionally, a package creates it, or the blueprint creates the GID by using the [[customizations.group]] entry. Replace PASSWORD-HASH with the actual password hash . To generate the password hash , use a command such as: Replace the other placeholders with suitable values. Enter the name value and omit any lines you do not need. Repeat this block for every user to include. 5.3.5. Specifying additional groups Specify a group for the resulting system image. Both the name and the gid attributes are mandatory. Customize the blueprint with a group: Repeat this block for every group to include. For example: 5.3.6. Setting SSH key for existing users You can use customizations.sshkey to set an SSH key for the existing users in the final image. Both user and key attributes are mandatory. Customize the blueprint by setting an SSH key for existing users: For example: Note You can only configure the customizations.sshkey customization for existing users. To create a user and set an SSH key, see the Specifying additional users customization. 5.3.7. Appending a kernel argument You can append arguments to the boot loader kernel command line. By default, RHEL image builder builds a default kernel into the image. However, you can customize the kernel by configuring it in the blueprint. 
Append a kernel boot parameter option to the defaults: For example: 5.3.8. Setting time zone and NTP You can customize your blueprint to configure the time zone and the Network Time Protocol (NTP). Both timezone and ntpservers attributes are optional strings. If you do not customize the time zone, the system uses Universal Time, Coordinated (UTC). If you do not set NTP servers, the system uses the default distribution. Customize the blueprint with the timezone and the ntpservers you want: For example: Note Some image types, such as Google Cloud, already have NTP servers set up. You cannot override it because the image requires the NTP servers to boot in the selected environment. However, you can customize the time zone in the blueprint. 5.3.9. Customizing the locale settings You can customize the locale settings for your resulting system image. Both language and the keyboard attributes are mandatory. You can add many other languages. The first language you add is the primary language and the other languages are secondary. Procedure Set the locale settings: For example: To list the values supported by the languages, run the following command: To list the values supported by the keyboard, run the following command: 5.3.10. Customizing firewall Set the firewall for the resulting system image. By default, the firewall blocks incoming connections, except for services that enable their ports explicitly, such as sshd . If you do not want to use the [customizations.firewall] or the [customizations.firewall.services] , either remove the attributes, or set them to an empty list []. If you only want to use the default firewall setup, you can omit the customization from the blueprint. Note The Google and OpenStack templates explicitly disable the firewall for their environment. You cannot override this behavior by setting the blueprint. Procedure Customize the blueprint with the following settings to open other ports and services: Where ports is an optional list of strings that contain ports or a range of ports and protocols to open. You can configure the ports by using the following format: port:protocol format. You can configure the port ranges by using the portA-portB:protocol format. For example: You can use numeric ports, or their names from the /etc/services to enable or disable port lists. Specify which firewall services to enable or disable in the customizations.firewall.service section: You can check the available firewall services: For example: Note The services listed in firewall.services are different from the service-names available in the /etc/services file. 5.3.11. Enabling or disabling services You can control which services to enable during the boot time. Some image types already have services enabled or disabled to ensure that the image works correctly and you cannot override this setup. The [customizations.services] settings in the blueprint do not replace these services, but add the services to the list of services already present in the image templates. Customize which services to enable during the boot time: For example: 5.3.12. Specifying a custom file system configuration You can specify a custom file system configuration in your blueprints and therefore create images with a specific disk layout, instead of the default layout configuration. 
By using the non-default layout configuration in your blueprints, you can benefit from: Security benchmark compliance Protection against out-of-disk errors Improved performance Consistency with existing setups Note The OSTree systems do not support the file system customizations, because OSTree images have their own mount rule, such as read-only. The following image types are not supported: image-installer edge-installer edge-simplified-installer Additionally, the following image types do not support file system customizations, because these image types do not create partitioned operating system images: edge-commit edge-container tar container However, the following image types have support for file system customization: simplified-installer edge-raw-image edge-ami edge-vsphere With some additional exceptions for OSTree systems, you can choose arbitrary directory names at the /root level of the file system, for example: ` /local`,` /mypartition`, /USDPARTITION . In logical volumes, these changes are made on top of the LVM partitioning system. The following directories are supported: /var ,` /var/log`, and /var/lib/containers on a separate logical volume. The following are exceptions at root level: "/home": {Deny: true}, "/mnt": {Deny: true}, "/opt": {Deny: true}, "/ostree": {Deny: true}, "/root": {Deny: true}, "/srv": {Deny: true}, "/var/home": {Deny: true}, "/var/mnt": {Deny: true}, "/var/opt": {Deny: true}, "/var/roothome": {Deny: true}, "/var/srv": {Deny: true}, "/var/usrlocal": {Deny: true}, For release distributions before RHEL 8.10 and 9.5, the blueprint supports the following mountpoints and their sub-directories: / - the root mount point /var /home /opt /srv /usr /app /data /tmp From the RHEL 9.5 and 8.10 release distributions onward, you can specify arbitrary custom mountpoints, except for specific paths that are reserved for the operating system. You cannot specify arbitrary custom mountpoints on the following mountpoints and their sub-directories: /bin /boot/efi /dev /etc /lib /lib64 /lost+found /proc /run /sbin /sys /sysroot /var/lock /var/run You can customize the file system in the blueprint for the /usr custom mountpoint, but its subdirectory is not allowed. Note Customizing mount points is only supported from RHEL 9.0 distributions onward, by using the CLI. In earlier distributions, you can only specify the root partition as a mount point and specify the size argument as an alias for the image size. If you have more than one partition in the customized image, you can create images with a customized file system partition on LVM and resize those partitions at runtime. To do this, you can specify a customized file system configuration in your blueprint and therefore create images with the required disk layout. The default file system layout remains unchanged - if you use plain images without file system customization, and cloud-init resizes the root partition. The blueprint automatically converts the file system customization to an LVM partition. You can use the custom file blueprint customization to create new files or to replace existing files. The parent directory of the file you specify must exist, otherwise, the image build fails. Ensure that the parent directory exists by specifying it in the [[customizations.directories]] customization. Warning If you combine the files customizations with other blueprint customizations, it might affect the functioning of the other customizations, or it might override the current files customizations. 5.3.12.1. 
Specifying customized files in the blueprint With the [[customizations.files]] blueprint customization you can: Create new text files. Modifying existing files. WARNING: this can override the existing content. Set user and group ownership for the file you are creating. Set the mode permission in the octal format. You cannot create or replace the following files: /etc/fstab /etc/shadow /etc/passwd /etc/group You can create customized files and directories in your image, by using the [[customizations.files]] and the [[customizations.directories]] blueprint customizations. You can use these customizations only in the /etc directory. Note These blueprint customizations are supported by all image types, except the image types that deploy OSTree commits, such as edge-raw-image , edge-installer , and edge-simplified-installer . Warning If you use the customizations.directories with a directory path which already exists in the image with mode , user or group already set, the image build fails to prevent changing the ownership or permissions of the existing directory. 5.3.12.2. Specifying customized directories in the blueprint With the [[customizations.directories]] blueprint customization you can: Create new directories. Set user and group ownership for the directory you are creating. Set the directory mode permission in the octal format. Ensure that parent directories are created as needed. With the [[customizations.files]] blueprint customization you can: Create new text files. Modifying existing files. WARNING: this can override the existing content. Set user and group ownership for the file you are creating. Set the mode permission in the octal format. Note You cannot create or replace the following files: /etc/fstab /etc/shadow /etc/passwd /etc/group The following customizations are available: Customize the file system configuration in your blueprint: The MINIMUM-PARTITION-SIZE value has no default size format. The blueprint customization supports the following values and units: kB to TB and KiB to TiB. For example, you can define the mount point size in bytes: Define the mount point size by using units. For example: Define the minimum partition by setting minsize . For example: Create customized directories under the /etc directory for your image by using [[customizations.directories]] : The blueprint entries are described as following: path - Mandatory - enter the path to the directory that you want to create. It must be an absolute path under the /etc directory. mode - Optional - set the access permission on the directory, in the octal format. If you do not specify a permission, it defaults to 0755. The leading zero is optional. user - Optional - set a user as the owner of the directory. If you do not specify a user, it defaults to root . You can specify the user as a string or as an integer. group - Optional - set a group as the owner of the directory. If you do not specify a group, it defaults to root . You can specify the group as a string or as an integer. ensure_parents - Optional - Specify whether you want to create parent directories as needed. If you do not specify a value, it defaults to false . Create customized file under the /etc directory for your image by using [[customizations.directories]] : The blueprint entries are described as following: path - Mandatory - enter the path to the file that you want to create. It must be an absolute path under the /etc directory. mode Optional - set the access permission on the file, in the octal format. 
If you do not specify a permission, it defaults to 0644. The leading zero is optional. user - Optional - set a user as the owner of the file. If you do not specify a user, it defaults to root . You can specify the user as a string or as an integer. group - Optional - set a group as the owner of the file. If you do not specify a group, it defaults to root . You can specify the group as a string or as an integer. data - Optional - Specify the content of a plain text file. If you do not specify a content, it creates an empty file. 5.4. Packages installed by RHEL image builder When you create a system image by using RHEL image builder, the system installs a set of base package groups. Note When you add additional components to your blueprint, ensure that the packages in the components you added do not conflict with any other package components. Otherwise, the system fails to solve dependencies and creating your customized image fails. You can check if there is no conflict between the packages by running the command: Table 5.1. Default packages to support image type creation Image type Default Packages ami checkpolicy, chrony, cloud-init, cloud-utils-growpart, @Core, dhcp-client, gdisk, insights-client, kernel, langpacks-en, net-tools, NetworkManager, redhat-release, redhat-release-eula, rng-tools, rsync, selinux-policy-targeted, tar, yum-utils openstack @core, langpacks-en qcow2 @core, chrony, dnf, kernel, dnf, nfs-utils, dnf-utils, cloud-init, python3-jsonschema, qemu-guest-agent, cloud-utils-growpart, dracut-norescue, tar, tcpdump, rsync, dnf-plugin-spacewalk, rhn-client-tools, rhnlib, rhnsd, rhn-setup, NetworkManager, dhcp-client, cockpit-ws, cockpit-system, subscription-manager-cockpit, redhat-release, redhat-release-eula, rng-tools, insights-client tar policycoreutils, selinux-policy-targeted vhd @core, langpacks-en vmdk @core, chrony, cloud-init, firewalld, langpacks-en, open-vm-tools, selinux-policy-targeted edge-commit redhat-release , glibc , glibc-minimal-langpack , nss-altfiles , dracut-config-generic , dracut-network , basesystem , bash , platform-python , shadow-utils , chrony , setup , shadow-utils , sudo , systemd , coreutils , util-linux , curl , vim-minimal , rpm , rpm-ostree , polkit , lvm2 , cryptsetup , pinentry , e2fsprogs , dosfstools , keyutils , gnupg2 , attr , xz , gzip , firewalld , iptables , NetworkManager , NetworkManager-wifi , NetworkManager-wwan , wpa_supplicant , traceroute , hostname , iproute , iputils , openssh-clients , procps-ng , rootfiles , openssh-server , passwd , policycoreutils , policycoreutils-python-utils , selinux-policy-targeted , setools-console , less , tar , rsync , usbguard , bash-completion , tmux , ima-evm-utils , audit , podman , containernetworking-plugins , container-selinux , skopeo , criu , slirp4netns , fuse-overlayfs , clevis , clevis-dracut , clevis-luks , greenboot , greenboot-default-health-checks , fdo-client , fdo-owner-cli , sos , edge-container dnf, dosfstools, e2fsprogs, glibc, lorax-templates-generic, lorax-templates-rhel, lvm2, policycoreutils, python36, python3-iniparse, qemu-img, selinux-policy-targeted, systemd, tar, xfsprogs, xz edge-installer aajohan-comfortaa-fonts, abattis-cantarell-fonts, alsa-firmware, alsa-tools-firmware, anaconda, anaconda-install-env-deps, anaconda-widgets, audit, bind-utils, bitmap-fangsongti-fonts, bzip2, cryptsetup, dbus-x11, dejavu-sans-fonts, dejavu-sans-mono-fonts, device-mapper-persistent-data, dnf, dump, ethtool, fcoe-utils, ftp, gdb-gdbserver, gdisk, gfs2-utils, glibc-all-langpacks, 
google-noto-sans-cjk-ttc-fonts, gsettings-desktop-schemas, hdparm, hexedit, initscripts, ipmitool, iwl3945-firmware, iwl4965-firmware, iwl6000g2a-firmware, iwl6000g2b-firmware, jomolhari-fonts, kacst-farsi-fonts, kacst-qurn-fonts, kbd, kbd-misc, kdump-anaconda-addon, khmeros-base-fonts, libblockdev-lvm-dbus, libertas-sd8686-firmware, libertas-sd8787-firmware, libertas-usb8388-firmware, libertas-usb8388-olpc-firmware, libibverbs, libreport-plugin-bugzilla, libreport-plugin-reportuploader, libreport-rhel-anaconda-bugzilla, librsvg2, linux-firmware, lklug-fonts, lldpad, lohit-assamese-fonts, lohit-bengali-fonts, lohit-devanagari-fonts, lohit-gujarati-fonts, lohit-gurmukhi-fonts, lohit-kannada-fonts, lohit-odia-fonts, lohit-tamil-fonts, lohit-telugu-fonts, lsof, madan-fonts, metacity, mtr, mt-st, net-tools, nmap-ncat, nm-connection-editor, nss-tools, openssh-server, oscap-anaconda-addon, pciutils, perl-interpreter, pigz, python3-pyatspi, rdma-core, redhat-release-eula, rpm-ostree, rsync, rsyslog, sg3_utils, sil-abyssinica-fonts, sil-padauk-fonts, sil-scheherazade-fonts, smartmontools, smc-meera-fonts, spice-vdagent, strace, system-storage-manager, thai-scalable-waree-fonts, tigervnc-server-minimal, tigervnc-server-module, udisks2, udisks2-iscsi, usbutils, vim-minimal, volume_key, wget, xfsdump, xorg-x11-drivers,xorg-x11-fonts-misc,xorg-x11-server-utils,xorg-x11-server-Xorg, xorg-x11-xauth edge-simplified-installer attr, basesystem, binutils, bsdtar, clevis-dracut, clevis-luks, cloud-utils-growpart, coreos-installer, coreos-installer-dracut, coreutils, device-mapper-multipath, dnsmasq, dosfstools, dracut-live, e2fsprogs, fcoe-utils, fdo-init, gzip, ima-evm-utils, iproute, iptables, iputils, iscsi-initiator-utils, keyutils, lldpad, lvm2, passwd, policycoreutils, policycoreutils-python-utils, procps-ng, rootfiles, setools-console, sudo, traceroute, util-linux image-installer aajohan-comfortaa-fonts , abattis-cantarell-fonts , alsa-firmware , alsa-tools-firmware , anaconda , anaconda-dracut , anaconda-install-env-deps , anaconda-widgets , audit , bind-utils , bitmap-fangsongti-fonts , bzip2 , cryptsetup , curl , dbus-x11 , dejavu-sans-fonts , dejavu-sans-mono-fonts , device-mapper-persistent-data , dmidecode , dnf , dracut-config-generic , dracut-network , efibootmgr , ethtool , fcoe-utils , ftp , gdb-gdbserver , gdisk , glibc-all-langpacks , gnome-kiosk , google-noto-sans-cjk-ttc-fonts , grub2-tools , grub2-tools-extra , grub2-tools-minimal , grubby , gsettings-desktop-schemas , hdparm , hexedit , hostname , initscripts , ipmitool , iwl1000-firmware , iwl100-firmware , iwl105-firmware , iwl135-firmware , iwl2000-firmware , iwl2030-firmware , iwl3160-firmware , iwl5000-firmware , iwl5150-firmware , iwl6000g2a-firmware , iwl6000g2b-firmware , iwl6050-firmware , iwl7260-firmware , jomolhari-fonts , kacst-farsi-fonts , kacst-qurn-fonts , kbd , kbd-misc , kdump-anaconda-addon , kernel , khmeros-base-fonts , less , libblockdev-lvm-dbus , libibverbs , libreport-plugin-bugzilla , libreport-plugin-reportuploader , librsvg2 , linux-firmware , lklug-fonts , lldpad , lohit-assamese-fonts , lohit-bengali-fonts , lohit-devanagari-fonts , lohit-gujarati-fonts , lohit-gurmukhi-fonts , lohit-kannada-fonts , lohit-odia-fonts , lohit-tamil-fonts , lohit-telugu-fonts , lsof , madan-fonts , mtr , mt-st , net-tools , nfs-utils , nmap-ncat , nm-connection-editor , nss-tools , openssh-clients , openssh-server , oscap-anaconda-addon , ostree , pciutils , perl-interpreter , pigz , plymouth , prefixdevname , 
python3-pyatspi , rdma-core , redhat-release-eula , rng-tools , rpcbind , rpm-ostree , rsync , rsyslog , selinux-policy-targeted , sg3_utils , sil-abyssinica-fonts , sil-padauk-fonts , sil-scheherazade-fonts , smartmontools , smc-meera-fonts , spice-vdagent , strace , systemd , tar , thai-scalable-waree-fonts , tigervnc-server-minimal , tigervnc-server-module , udisks2 , udisks2-iscsi , usbutils , vim-minimal , volume_key , wget , xfsdump , xfsprogs , xorg-x11-drivers , xorg-x11-fonts-misc , xorg-x11-server-utils , xorg-x11-server-Xorg , xorg-x11-xauth , xz , edge-raw-image dnf, dosfstools, e2fsprogs, glibc, lorax-templates-generic, lorax-templates-rhel, lvm2, policycoreutils, python36, python3-iniparse, qemu-img, selinux-policy-targeted, systemd, tar, xfsprogs, xz gce @core, langpacks-en, acpid, dhcp-client, dnf-automatic, net-tools, python3, rng-tools, tar, vim Additional resources RHEL image builder description
[ "sudo composer-cli blueprints list", "name = \"blueprint-name\" description = \"blueprint-text-description\" version = \"0.0.1\" modules = [ ] groups = [ ]", "[[packages]] name = \"package-name\" version = \"package-version\"", "composer-cli blueprints push blueprint-name.toml", "composer-cli blueprints show BLUEPRINT-NAME", "composer-cli blueprints depsolve blueprint-name", "composer-cli compose start blueprint-name image-type", "composer-cli compose status", "<UUID> RUNNING date blueprint-name blueprint-version image-type", "composer-cli compose cancel <UUID>", "composer-cli compose delete <UUID>", "composer-cli compose start-ostree --ref rhel/9/x86_64/edge --parent parent-OSTree-REF --url URL blueprint-name image-type", "composer-cli compose start-ostree --ref rhel/9/x86_64/edge --parent rhel/9/x86_64/edge --url http://10.0.2.2:8080/repo rhel_update edge-commit", "composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url http://10.0.2.2:8080/repo rhel_update edge-commit", "composer-cli compose status", "<UUID> RUNNING date blueprint-name blueprint-version image-type", "composer-cli compose cancel <UUID>", "composer-cli compose delete <UUID>", "composer-cli compose status", "<UUID> FINISHED date blueprint-name blueprint-version image-type", "composer-cli compose image <UUID>", "<UUID> -commit.tar: size MB", "name = \"blueprint-name\" description = \"blueprint-text-description\" version = \"0.0.1\" modules = [ ] groups = [ ]", "[[packages]] name = \"package-name\" version = \"package-version\"", "composer-cli blueprints push blueprint-name.toml", "composer-cli blueprints show BLUEPRINT-NAME", "composer-cli blueprints depsolve blueprint-name", "name = \"blueprint-installer\" description = \"blueprint-for-installer-image\" version = \"0.0.1\" [[customizations.user]] name = \" user \" description = \" account \" password = \" user-password \" key = \" user-ssh-key \" home = \" path \" groups = [\" user-groups \"]", "composer-cli blueprints push blueprint-name.toml", "composer-cli blueprints show blueprint-name", "composer-cli blueprints depsolve blueprint-name", "composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url URL-OSTree-repository blueprint-name image-type", "composer-cli compose status", "<UUID> RUNNING date blueprint-name blueprint-version image-type", "composer-cli compose cancel <UUID>", "composer-cli compose delete <UUID>", "composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url URL-OSTree-repository blueprint-name image-type", "composer-cli compose status", "<UUID> RUNNING date blueprint-name blueprint-version image-type", "composer-cli compose cancel <UUID>", "composer-cli compose delete <UUID>", "composer-cli compose status", "<UUID> FINISHED date blueprint-name blueprint-version image-type", "composer-cli compose image <UUID>", "<UUID> -boot.iso: size MB", "name = \" blueprint_name \" description = \" blueprint_version \" version = \"0.1\" distro = \" different_minor_version \"", "name = \"tmux\" description = \"tmux image with openssh\" version = \"1.2.16\" distro = \"rhel-9.5\"", "[[groups]] name = \" group_name \"", "[[groups]] name = \"anaconda-tools\"", "[customizations] hostname = \"baseimage\"", "[[customizations.user]] name = \" USER-NAME \" description = \" USER-DESCRIPTION \" password = \" PASSWORD-HASH \" key = \" PUBLIC-SSH-KEY \" home = \"/home/ USER-NAME /\" shell = \" /usr/bin/bash \" groups = [ \"users\", \"wheel\" ] uid = NUMBER gid = NUMBER", "[[customizations.user]] name = \"admin\" description = \"Administrator account\" 
password = \"USD6USDCHO2USD3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31L...\" key = \"PUBLIC SSH KEY\" home = \"/srv/widget/\" shell = \"/usr/bin/bash\" groups = [\"widget\", \"users\", \"wheel\"] uid = 1200 gid = 1200 expiredate = 12345", "python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'", "[[customizations.group]] name = \" GROUP-NAME \" gid = NUMBER", "[[customizations.group]] name = \"widget\" gid = 1130", "[[customizations.sshkey]] user = \" root \" key = \" PUBLIC-SSH-KEY \"", "[[customizations.sshkey]] user = \"root\" key = \"SSH key for root\"", "[customizations.kernel] append = \" KERNEL-OPTION \"", "[customizations.kernel] name = \"kernel-debug\" append = \"nosmt=force\"", "[customizations.timezone] timezone = \" TIMEZONE \" ntpservers = \" NTP_SERVER \"", "[customizations.timezone] timezone = \"US/Eastern\" ntpservers = [\"0.north-america.pool.ntp.org\", \"1.north-america.pool.ntp.org\"]", "[customizations.locale] languages = [\"LANGUAGE\"] keyboard = \" KEYBOARD \"", "[customizations.locale] languages = [\"en_US.UTF-8\"] keyboard = \"us\"", "localectl list-locales", "localectl list-keymaps", "[customizations.firewall] ports = [\"PORTS\"]", "[customizations.firewall] ports = [\"22:tcp\", \"80:tcp\", \"imap:tcp\", \"53:tcp\", \"53:udp\", \"30000-32767:tcp\", \"30000-32767:udp\"]", "[customizations.firewall.services] enabled = [\"SERVICES\"] disabled = [\"SERVICES\"]", "firewall-cmd --get-services", "[customizations.firewall.services] enabled = [\"ftp\", \"ntp\", \"dhcp\"] disabled = [\"telnet\"]", "[customizations.services] enabled = [\"SERVICES\"] disabled = [\"SERVICES\"]", "[customizations.services] enabled = [\"sshd\", \"cockpit.socket\", \"httpd\"] disabled = [\"postfix\", \"telnetd\"]", "[[customizations.filesystem]] mountpoint = \"MOUNTPOINT\" minsize = MINIMUM-PARTITION-SIZE", "[[customizations.filesystem]] mountpoint = \"/var\" minsize = 1073741824", "[[customizations.filesystem]] mountpoint = \"/opt\" minsize = \"20 GiB\"", "[[customizations.filesystem]] mountpoint = \"/boot\" minsize = \"1 GiB\"", "[[customizations.filesystem]] mountpoint = \"/var\" minsize = 2147483648", "[[customizations.directories]] path = \"/etc/ directory_name \" mode = \" octal_access_permission \" user = \" user_string_or_integer \" group = \" group_string_or_integer \" ensure_parents = boolean", "[[customizations.files]] path = \"/etc/ directory_name \" mode = \" octal_access_permission \" user = \" user_string_or_integer \" group = \" group_string_or_integer \" data = \"Hello world!\"", "composer-cli blueprints depsolve BLUEPRINT-NAME" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_installing_and_managing_rhel_for_edge_images/composing-a-rhel-for-edge-image-using-image-builder-command-line_composing-installing-managing-rhel-for-edge-images
Chapter 2. Configuring your firewall
Chapter 2. Configuring your firewall If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies. 2.1. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. When using a firewall, make additional configurations to the firewall so that OpenShift Container Platform can access the sites that it requires to function. There are no special configuration considerations for services running on only controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Set the following registry URLs for your firewall's allowlist: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com 443 Hosts a signature store that a container client requires for verifying images pulled from registry.access.redhat.com . In a firewall environment, ensure that this resource is on the allowlist. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images cdn04.quay.io 443 Provides core container images cdn05.quay.io 443 Provides core container images cdn06.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-6].quay.io in your allowlist. You can use the wildcard *.access.redhat.com to simplify the configuration and ensure that all subdomains, including registry.access.redhat.com , are allowed. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Set your firewall's allowlist to include any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that offer the cloud provider API and DNS for that cloud: Cloud URL Port Function Alibaba *.aliyuncs.com 443 Required to access Alibaba Cloud services and resources. 
Review the Alibaba endpoints_config.go file to find the exact endpoints to allow for the regions that you use. AWS aws.amazon.com 443 Used to install and manage clusters in an AWS environment. *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must include the following URLs in your allowlist: 443 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to find the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443 Allows the assignment of metadata about AWS resources in the form of tags. *.cloudfront.net 443 Used to provide access to CloudFront. If you use the AWS Security Token Service (STS) and the private S3 bucket, you must provide access to CloudFront. GCP *.googleapis.com 443 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to find the endpoints to allow for your APIs. accounts.google.com 443 Required to access your GCP account. Microsoft Azure management.azure.com 443 Required to access Microsoft Azure services and resources. Review the Microsoft Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. *.blob.core.windows.net 443 Required to download Ignition files. login.microsoftonline.com 443 Required to access Microsoft Azure services and resources. Review the Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function *.apps.<cluster_name>.<base_domain> 443 Required to access the default cluster routes unless you set an ingress wildcard during installation. api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. console.redhat.com 443 Required for your cluster token. mirror.openshift.com 443 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. quayio-production-s3.s3.amazonaws.com 443 Required to access Quay image content in AWS. 
rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com storage.googleapis.com/openshift-release 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> canary-openshift-ingress-canary.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443 Required for the Sonatype Nexus and F5 Big IP operators. If you use a default Red Hat Network Time Protocol (NTP) server, allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall. Additional resources OpenID Connect requirements for AWS STS
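The allowlist itself lives on your corporate firewall or proxy, so the exact configuration steps depend on that product. As an informal sanity check only, not part of the official procedure, you can run a small loop from a host inside the restricted network to confirm that outbound HTTPS connections to a handful of the required endpoints succeed; the hostnames below are an illustrative subset, not the full required list:
for host in registry.redhat.io registry.access.redhat.com quay.io api.openshift.com mirror.openshift.com sso.redhat.com; do
  # Succeeds if a TLS connection and an HTTP response are received, regardless of status code
  if curl -sS --connect-timeout 5 -o /dev/null "https://${host}"; then
    echo "${host}:443 reachable"
  else
    echo "${host}:443 blocked or unreachable"
  fi
done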
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installation_configuration/configuring-firewall
Deploying Distributed Compute Nodes with Separate Heat Stacks
Deploying Distributed Compute Nodes with Separate Heat Stacks Red Hat OpenStack Platform 16.0 Using separate heat stacks to manage your Red Hat OpenStack Platform OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_distributed_compute_nodes_with_separate_heat_stacks/index
3.9. Other XFS File System Utilities
3.9. Other XFS File System Utilities Red Hat Enterprise Linux 7 also features other utilities for managing XFS file systems: xfs_fsr Used to defragment mounted XFS file systems. When invoked with no arguments, xfs_fsr defragments all regular files in all mounted XFS file systems. This utility also allows users to suspend a defragmentation at a specified time and resume from where it left off later. In addition, xfs_fsr allows the defragmentation of only one file, as in xfs_fsr /path/to/file . Red Hat advises against periodically defragmenting an entire file system because XFS avoids fragmentation by default. System-wide defragmentation can cause fragmentation of free space as a side effect. xfs_bmap Prints the map of disk blocks used by files in an XFS file system. This map lists each extent used by a specified file, as well as regions in the file with no corresponding blocks (that is, holes). xfs_info Prints XFS file system information. xfs_admin Changes the parameters of an XFS file system. The xfs_admin utility can only modify parameters of unmounted devices or file systems. xfs_copy Copies the contents of an entire XFS file system to one or more targets in parallel. The following utilities are also useful in debugging and analyzing XFS file systems: xfs_metadump Copies XFS file system metadata to a file. Red Hat only supports using the xfs_metadump utility to copy unmounted file systems or read-only mounted file systems; otherwise, generated dumps could be corrupted or inconsistent. xfs_mdrestore Restores an XFS metadump image (generated using xfs_metadump ) to a file system image. xfs_db Debugs an XFS file system. For more information about these utilities, see their respective man pages.
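For illustration, the following commands show typical invocations of these utilities; the file path, mount point, device name, and label are placeholders, and xfs_admin must be run against an unmounted device:
xfs_fsr -v /srv/data/large-file.img        # defragment a single file, verbosely
xfs_bmap -v /srv/data/large-file.img       # print its extent map, including holes
xfs_info /srv/data                         # report geometry of the mounted file system
umount /srv/data
xfs_admin -L archive01 /dev/sdb1           # relabel the now-unmounted device
xfs_metadump /dev/sdb1 /tmp/sdb1.metadump  # capture metadata for offline analysis
xfs_mdrestore /tmp/sdb1.metadump /tmp/sdb1.img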
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/xfsothers
Chapter 248. OpenShift Component (deprecated)
Chapter 248. OpenShift Component (deprecated) Available as of Camel version 2.14 The openshift component is a component for managing your OpenShift applications. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-openshift</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 248.1. URI format openshift:clientId[?options] You can append query options to the URI in the following format, ?option=value&option=value&... 248.2. Options The OpenShift component supports 5 options, which are listed below. Name Description Default Type username (security) The username to login to openshift server. String password (security) The password for login to openshift server. String domain (common) Domain name. If not specified then the default domain is used. String server (common) Url to the openshift server. If not specified then the default value from the local openshift configuration file /.openshift/express.conf is used. And if that fails as well then openshift.redhat.com is used. String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The OpenShift endpoint is configured using URI syntax: with the following path and query parameters: 248.2.1. Path Parameters (1 parameters): Name Description Default Type clientId Required The client id String 248.2.2. Query Parameters (26 parameters): Name Description Default Type domain (common) Domain name. If not specified then the default domain is used. String password (common) Required The password for login to openshift server. String server (common) Url to the openshift server. If not specified then the default value from the local openshift configuration file /.openshift/express.conf is used. And if that fails as well then openshift.redhat.com is used. String username (common) Required The username to login to openshift server. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy application (producer) The application name to start, stop, restart, or get the state. 
String mode (producer) Whether to output the message body as a pojo or json. For pojo the message is a List type. String operation (producer) The operation to perform which can be: list, start, stop, restart, and state. The list operation returns information about all the applications in json format. The state operation returns the state such as: started, stopped etc. The other operations does not return any value. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 248.3. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.openshift.domain Domain name. If not specified then the default domain is used. String camel.component.openshift.enabled Enable openshift component true Boolean camel.component.openshift.password The password for login to openshift server. String camel.component.openshift.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.openshift.server Url to the openshift server. If not specified then the default value from the local openshift configuration file /.openshift/express.conf is used. 
And if that fails as well then openshift.redhat.com is used. String camel.component.openshift.username The username to login to openshift server. String 248.4. Examples 248.4.1. Listing all applications // sending route from("direct:apps") .to("openshift:myClient?username=foo&password=secret&operation=list") .to("log:apps"); In this case, the information about all the applications is returned as a pojo. If you want a json response, then set mode=json. 248.4.2. Stopping an application // stopping the foobar application from("direct:control") .to("openshift:myClient?username=foo&password=secret&operation=stop&application=foobar"); In the example above, we stop the application named foobar. Polling for gear state changes The consumer is used for polling state changes in gears, such as when a new gear is added or removed, or when its lifecycle changes, for example, when it is started or stopped. // trigger when state changes on our gears from("openshift:myClient?username=foo&password=secret&delay=30s") .log("Event ${header.CamelOpenShiftEventType} on application ${body.name} changed state to ${header.CamelOpenShiftEventNewState}"); When the consumer emits an Exchange, the message body contains the com.openshift.client.IApplication instance. The following headers are included. Header May be null Description CamelOpenShiftEventType No The type of the event, which can be one of: added, removed or changed. CamelOpenShiftEventOldState Yes The old state, when the event type is changed. CamelOpenShiftEventNewState No The new state, for any of the event types. 248.5. See Also Configuring Camel Component Endpoint Getting Started
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-openshift</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "openshift:clientId[?options]", "openshift:clientId", "// sending route from(\"direct:apps\") .to(\"openshift:myClient?username=foo&password=secret&operation=list\") .to(\"log:apps\");", "// stopping the foobar application from(\"direct:control\") .to(\"openshift:myClient?username=foo&password=secret&operation=stop&application=foobar\");", "// trigger when state changes on our gears from(\"openshift:myClient?username=foo&password=secret&delay=30s\") .log(\"Event ${header.CamelOpenShiftEventType} on application ${body.name} changed state to ${header.CamelOpenShiftEventNewState}\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/openshift-component
Preface
Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates the issue and routes it to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/release_notes_for_red_hat_build_of_apache_camel_for_quarkus/pr01
Appendix C. LVM Selection Criteria
Appendix C. LVM Selection Criteria As of Red Hat Enterprise Linux release 7.1, many LVM reporting commands accept the -S or --select option to define selection criteria for those commands. As of Red Hat Enterprise Linux release 7.2, many processing commands support selection criteria as well. These two categories of commands for which you can define selection criteria are defined as follows: Reporting commands - Display only the lines that satisfy the selection criteria. Examples of reporting commands for which you can define selection criteria include pvs , vgs , lvs , pvdisplay , vgdisplay , lvdisplay , lvm devtypes , and dmsetup info -c . Specifying the -o selected option in addition to the -S option displays all rows and adds a "selected" column that shows 1 if the row matches the selection criteria and 0 if it does not. Processing commands - Process only the items that satisfy the selection criteria. Examples of processing commands for which you can define selection criteria include pvchange , vgchange , lvchange , vgimport , vgexport , vgremove , and lvremove . Selection criteria are a set of statements that use comparison operators to define the valid values for particular fields to display or process. The selected fields are, in turn, combined by logical and grouping operators. When specifying which fields to display using selection criteria, a field used in the selection criteria does not also have to be displayed. The selection criteria can contain one set of fields while the output can contain a different set of fields. For a listing of available fields for the various LVM components, see Section C.3, "Selection Criteria Fields" . For a listing of allowed operators, see Section C.2, "Selection Criteria Operators" . The operators are also provided on the lvm(8) man page. You can also see the full sets of fields and possible operators by specifying the help (or ? ) keyword for the -S/--select option of a reporting command. For example, the following command displays the fields and possible operators for the lvs command. As of the Red Hat Enterprise Linux 7.2 release, you can specify time values as selection criteria for fields with a field type of time . For information on specifying time values, see Section C.4, "Specifying Time Values" . C.1. Selection Criteria Field Types The fields you specify for selection criteria are of a particular type. The help output for each field displays the field type enclosed in brackets. The following help output examples show the output indicating the field types string , string_list , number , percent , size , and time . Table C.1, "Selection Criteria Field Types" describes the selection criteria field types. Table C.1. Selection Criteria Field Types Field Type Description number Non-negative integer value. size Floating point value with units, 'm' unit used by default if not specified. percent Non-negative integer with or without % suffix. string Characters quoted by ' or " or unquoted. string list Strings enclosed by [ ] or { } and elements delimited by either "all items must match" or "at least one item must match" operator. The values you specify for a field can be the following: Concrete values of the field type Regular expressions for fields of the string field type, used with operators such as "=~". Reserved values; for example -1, unknown, undefined, undef are all keywords to denote an undefined numeric value.
Defined synonyms for the field values, which can be used in selection criteria for values just as for their original values. For a listing of defined synonyms for field values, see Table C.14, "Selection Criteria Synonyms" .
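As a sketch of how selection criteria combine with reporting and processing commands, consider the following; the field values, sizes, and tag name are hypothetical:
lvs -S 'lv_size > 500m && lv_name =~ "home"'        # report only the logical volumes that match
pvs -o +pv_free,selected -S 'pv_free >= 10g'        # show all rows and flag matches in the "selected" column
lvchange --addtag backup -S 'lv_name =~ "^data"'    # process (tag) only the matching logical volumes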
[ "lvs -S help", "lv_name - Name. LVs created for internal use are enclosed in brackets.[string] lv_role - LV role. [string list] raid_mismatch_count - For RAID, number of mismatches found or repaired. [number] copy_percent - For RAID, mirrors and pvmove, current percentage in-sync. [percent] lv_size - Size of LV in current units. [size] lv_time - Creation time of the LV, if known [time]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/selection_criteria
2.41. RHEA-2011:0585 - new package: spice-protocol
2.41. RHEA-2011:0585 - new package: spice-protocol A new spice-protocol package is now available for Red Hat Enterprise Linux 6. The spice-protocol package contains header files that describe the SPICE protocol and the QXL para-virtualized graphics card. Spice-protocol is needed to build newer versions of the spice-client and spice-server packages. This enhancement update adds the spice-protocol package to Red Hat Enterprise Linux 6. (BZ# 662992 ) Users who wish to build SPICE from source are advised to install this new package.
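For example, on a Red Hat Enterprise Linux 6 build host you might install the package and list the header files it provides before building SPICE from source; this is only a sketch, and package availability depends on your subscribed channels:
yum install spice-protocol
rpm -ql spice-protocol | grep '\.h$'   # list the installed protocol header files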
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/spice-protocol_new
function::cmdline_args
function::cmdline_args Name function::cmdline_args - Fetch command line arguments from current process Synopsis Arguments n First argument to get (zero is the command itself) m Last argument to get (or minus one for all arguments after n) delim String to use to delimit arguments when there is more than one. General Syntax cmdline_args:string(n:long, m:long, delim:string) Description Returns arguments from the current process starting with argument number n, up to argument m. If there are fewer than n arguments, or the arguments cannot be retrieved from the current process, the empty string is returned. If m is smaller than n, then all arguments starting from argument n are returned. Argument zero is traditionally the command itself.
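As a rough illustration that is not part of the tapset documentation, the following one-liner probes the start of a sleep process and prints its full command line by requesting all arguments from argument zero onward; the binary path and probe point are assumptions that may need adjusting for your system:
stap -e 'probe process("/bin/sleep").begin { printf("cmdline: %s\n", cmdline_args(0, -1, " ")); exit() }' -c "sleep 1"
The delimiter argument here is a single space, so the output is the command and its arguments joined into one line.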
[ "function cmdline_args:string(n:long,m:long,delim:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-cmdline-args
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/logging_configuration/making-open-source-more-inclusive
Chapter 3. Deployment of the Ceph File System
Chapter 3. Deployment of the Ceph File System As a storage administrator, you can deploy Ceph File Systems (CephFS) in a storage environment and have clients mount those Ceph File Systems to meet the storage needs. Basically, the deployment workflow is three steps: Create Ceph File Systems on a Ceph Monitor node. Create a Ceph client user with the appropriate capabilities, and make the client key available on the node where the Ceph File System will be mounted. Mount CephFS on a dedicated node, using either a kernel client or a File System in User Space (FUSE) client. 3.1. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon ( ceph-mds ). 3.2. Layout, quota, snapshot, and network restrictions These user capabilities can help you restrict access to a Ceph File System (CephFS) based on the needed requirements. Important All user capability flags, except rw , must be specified in alphabetical order. Layouts and Quotas When using layouts or quotas, clients require the p flag, in addition to rw capabilities. Setting the p flag restricts all the attributes being set by special extended attributes, those with a ceph. prefix. Also, this restricts other means of setting these fields, such as openc operations with layouts. Example In this example, client.0 can modify layouts and quotas on the file system cephfs_a , but client.1 cannot. Snapshots When creating or deleting snapshots, clients require the s flag, in addition to rw capabilities. When the capability string also contains the p flag, the s flag must appear after it. Example In this example, client.0 can create or delete snapshots in the temp directory of file system cephfs_a . Network Restricting clients connecting from a particular network. Example The optional network and prefix length is in CIDR notation, for example, 10.3.0.0/16 . Additional Resources See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on setting the Ceph user capabilities. 3.3. Creating Ceph File Systems You can create multiple Ceph File Systems (CephFS) on a Ceph Monitor node. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon ( ceph-mds ). Root-level access to a Ceph Monitor node. Root-level access to a Ceph client node. Procedure Configure the client node to use the Ceph storage cluster. Enable the Red Hat Ceph Storage Tools repository: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Install the ceph-fuse package: Copy the Ceph client keyring from the Ceph Monitor node to the client node: Syntax Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address. Example Copy the Ceph configuration file from a Ceph Monitor node to the client node: Syntax Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address. Example Set the appropriate permissions for the configuration file: Create a Ceph File System: Syntax Example Repeat this step to create additional file systems. Note By running this command, Ceph automatically creates the new pools, and deploys a new Ceph Metadata Server (MDS) daemon to support the new file system. This also configures the MDS affinity accordingly. Verify access to the new Ceph File System from a Ceph client. Authorize a Ceph client to access the new file system: Syntax Important The supported values for PERMISSIONS are r (read) and rw (read/write). 
Example Note Optionally, you can add a safety measure by specifying the root_squash option. This prevents accidental deletion scenarios by disallowing clients with a uid=0 or gid=0 to do write operations, but still allows read operations. Example In this example, root_squash is enabled for the file system cephfs01 , except within the /volumes directory tree. Important The Ceph client can only see the CephFS it is authorized for. Copy the Ceph user's keyring to the Ceph client node: Syntax Example On the Ceph client node, create a new directory: Syntax Example On the Ceph client node, mount the new Ceph File System: Syntax Example On the Ceph client node, list the directory contents of the new mount point, or create a file on the new mount point. Additional Resources See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details. See the Mounting the Ceph File System as a kernel client section in the Red Hat Ceph Storage File System Guide for more details. See the Mounting the Ceph File System as a FUSE client section in the Red Hat Ceph Storage File System Guide for more details. See Ceph File System limitations and the POSIX standards section in the Red Hat Ceph Storage File System Guide for more details. See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more details. 3.4. Adding an erasure-coded pool to a Ceph File System By default, Ceph uses replicated pools for data pools. You can also add an additional erasure-coded data pool to the Ceph File System, if needed. Ceph File Systems (CephFS) backed by erasure-coded pools use less overall storage compared to Ceph File Systems backed by replicated pools. While erasure-coded pools use less overall storage, they also use more memory and processor resources than replicated pools. Important CephFS EC pools are for archival purpose only. Important For production environments, Red Hat recommends using the default replicated data pool for CephFS. The creation of inodes in CephFS creates at least one object in the default data pool. It is better to use a replicated pool for the default data to improve small-object write performance, and to improve read performance for updating backtraces. Prerequisites A running Red Hat Ceph Storage cluster. An existing Ceph File System. Pools using BlueStore OSDs. Root-level access to a Ceph Monitor node. Installation of the attr package. Procedure Create an erasure-coded data pool for CephFS: Syntax Example Verify the pool was added: Example Enable overwrites on the erasure-coded pool: Syntax Example Verify the status of the Ceph File System: Syntax Example Add the erasure-coded data pool to the existing CephFS: Syntax Example This example adds the new data pool, cephfs-data-ec01 , to the existing erasure-coded file system, cephfs-ec . Verify that the erasure-coded pool was added to the Ceph File System: Syntax Example Set the file layout on a new directory: Syntax Example In this example, all new files created in the /mnt/cephfs/newdir directory inherit the directory layout and places the data in the newly added erasure-coded pool. Additional Resources See The Ceph File System Metadata Server chapter in the Red Hat Ceph Storage File System Guide for more information about CephFS MDS. See the Creating Ceph File Systems section in the Red Hat Ceph Storage File System Guide for more information. See the Erasure Code Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more information. 
See the Erasure Coding with Overwrites section in the Red Hat Ceph Storage Storage Strategies Guide for more information. 3.5. Creating client users for a Ceph File System Red Hat Ceph Storage uses cephx for authentication, which is enabled by default. To use cephx with the Ceph File System, create a user with the correct authorization capabilities on a Ceph Monitor node and make its key available on the node where the Ceph File System will be mounted. Prerequisites A running Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon (ceph-mds). Root-level access to a Ceph Monitor node. Root-level access to a Ceph client node. Procedure Log into the Cephadm shell on the monitor node: Example On a Ceph Monitor node, create a client user: Syntax To restrict the client to only writing in the temp directory of filesystem cephfs_a : Example To completely restrict the client to the temp directory, remove the root ( / ) directory: Example Note Supplying all or asterisk as the file system name grants access to every file system. Typically, it is necessary to quote the asterisk to protect it from the shell. Verify the created key: Syntax Example Copy the keyring to the client. On the Ceph Monitor node, export the keyring to a file: Syntax Example Copy the client keyring from the Ceph Monitor node to the /etc/ceph/ directory on the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client node name or IP. Example From the client node, set the appropriate permissions for the keyring file: Syntax Example Additional Resources See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details. 3.6. Mounting the Ceph File System as a kernel client You can mount the Ceph File System (CephFS) as a kernel client, either manually or automatically on system boot. Important Clients running on other Linux distributions, aside from Red Hat Enterprise Linux, are permitted but not supported. If issues are found in the CephFS Metadata Server or other parts of the storage cluster when using these clients, Red Hat will address them. If the cause is found to be on the client side, then the issue will have to be addressed by the kernel vendor of the Linux distribution. Prerequisites Root-level access to a Linux-based client node. Root-level access to a Ceph Monitor node. An existing Ceph File System. Procedure Configure the client node to use the Ceph storage cluster. Enable the Red Hat Ceph Storage Tools repository: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Install the ceph-common package: Log into the Cephadm shell on the monitor node: Example Copy the Ceph client keyring from the Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. Example Copy the Ceph configuration file from a Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. Example From the client node, set the appropriate permissions for the configuration file: Choose either automatically or manually mounting. Manually Mounting Create a mount directory on the client node: Syntax Example Mount the Ceph File System. To specify multiple Ceph Monitor addresses, separate them with commas in the mount command, specify the mount point, and set the client name: Note As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. 
Just specify the client ID with name= CLIENT_ID , and mount.ceph will find the right keyring file. Syntax Example Note You can configure a DNS server so that a single host name resolves to multiple IP addresses. Then you can use that single host name with the mount command, instead of supplying a comma-separated list. Note You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to. Note You can set the nowsync option to asynchronously execute file creation and removal on the Red Hat Ceph Storage clusters. This improves the performance of some workloads by avoiding round-trip latency for these system calls without impacting consistency. The nowsync option requires kernel clients with Red Hat Enterprise Linux 8.4 or later. Example Verify that the file system is successfully mounted: Syntax Example Automatically Mounting On the client host, create a new directory for mounting the Ceph File System. Syntax Example Edit the /etc/fstab file as follows: Syntax The first column sets the Ceph Monitor host names and the port number. The second column sets the mount point The third column sets the file system type, in this case, ceph , for CephFS. The fourth column sets the various options, such as, the user name and the secret file using the name and secretfile options. You can also set specific volumes, sub-volume groups, and sub-volumes using the ceph.client_mountpoint option. Set the _netdev option to ensure that the file system is mounted after the networking subsystem starts to prevent hanging and networking issues. If you do not need access time information, then setting the noatime option can increase performance. Set the fifth and sixth columns to zero. Example The Ceph File System will be mounted on the system boot. Note As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. Just specify the client ID with name= CLIENT_ID , and mount.ceph will find the right keyring file. Note You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to. Additional Resources See the mount(8) manual page. See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user. See the Creating Ceph File Systems section of the Red Hat Ceph Storage File System Guide for details. 3.7. Mounting the Ceph File System as a FUSE client You can mount the Ceph File System (CephFS) as a File System in User Space (FUSE) client, either manually or automatically on system boot. Prerequisites Root-level access to a Linux-based client node. Root-level access to a Ceph Monitor node. An existing Ceph File System. Procedure Configure the client node to use the Ceph storage cluster. Enable the Red Hat Ceph Storage Tools repository: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Install the ceph-fuse package: Log into the Cephadm shell on the monitor node: Example Copy the Ceph client keyring from the Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. Example Copy the Ceph configuration file from a Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. 
Example From the client node, set the appropriate permissions for the configuration file: Choose either automatically or manually mounting. Manually Mounting On the client node, create a directory for the mount point: Syntax Example Note If you used the path option with MDS capabilities, then the mount point must be within what is specified by the path . Use the ceph-fuse utility to mount the Ceph File System. Syntax Example Note If you do not use the default name and location of the user keyring, that is /etc/ceph/ceph.client. CLIENT_ID .keyring , then use the --keyring option to specify the path to the user keyring, for example: Example Note Use the -r option to instruct the client to treat that path as its root: Syntax Example Note If you want to automatically reconnect an evicted Ceph client, then add the --client_reconnect_stale=true option. Example Verify that the file system is successfully mounted: Syntax Example Automatically Mounting On the client node, create a directory for the mount point: Syntax Example Note If you used the path option with MDS capabilities, then the mount point must be within what is specified by the path . Edit the /etc/fstab file as follows: Syntax The first column sets the Ceph Monitor host names and the port number. The second column sets the mount point The third column sets the file system type, in this case, fuse.ceph , for CephFS. The fourth column sets the various options, such as the user name and the keyring using the ceph.name and ceph.keyring options. You can also set specific volumes, sub-volume groups, and sub-volumes using the ceph.client_mountpoint option. To specify which Ceph File System to access, use the ceph.client_fs option. Set the _netdev option to ensure that the file system is mounted after the networking subsystem starts to prevent hanging and networking issues. If you do not need access time information, then setting the noatime option can increase performance. If you want to automatically reconnect after an eviction, then set the client_reconnect_stale=true option. Set the fifth and sixth columns to zero. Example The Ceph File System will be mounted on the system boot. Additional Resources The ceph-fuse(8) manual page. See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user. See the Creating Ceph File Systems section of the Red Hat Ceph Storage File System Guide for details. 3.8. Additional Resources See Section 2.6, "Management of MDS service using the Ceph Orchestrator" to install Ceph Metadata servers. See Section 3.3, "Creating Ceph File Systems" for details. See Section 3.5, "Creating client users for a Ceph File System" for details. See Section 3.6, "Mounting the Ceph File System as a kernel client" for details. See Section 3.7, "Mounting the Ceph File System as a FUSE client" for details. See Chapter 2, The Ceph File System Metadata Server for details on configuring the CephFS Metadata Server daemon.
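Pulling the key commands from the examples above into a single sequence, a minimal create, authorize, and mount flow might look like the following; the file system name, client ID, Monitor host names, and mount point are placeholders taken from the examples rather than required values:
ceph fs volume create cephfs01
ceph fs authorize cephfs01 client.1 / rw
ceph auth get client.1 > /etc/ceph/ceph.client.1.keyring   # copy this keyring to /etc/ceph/ on the client node
mkdir -p /mnt/cephfs
mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01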
[ "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rwp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a client.1 key: AQAz7EVWygILFRAAdIcuJ11opU/JKyfFmxhuaw== caps: [mds] allow rw caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8 caps: [mon] allow r network 10.0.0.0/8 caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8", "subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ KEYRING_FILE /etc/ceph/", "scp [email protected]:/etc/ceph/ceph.client.1.keyring /etc/ceph/", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "scp [email protected]:/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "ceph fs volume create FILE_SYSTEM_NAME", "ceph fs volume create cephfs01", "ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME DIRECTORY PERMISSIONS", "ceph fs authorize cephfs01 client.1 / rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 exported keyring for client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph fs authorize cephfs01 client.1 / rw root_squash /volumes rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01 root_squash, allow rw fsname=cephfs01 path=/volumes\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph auth get CLIENT_NAME > OUTPUT_FILE_NAME scp OUTPUT_FILE_NAME TARGET_NODE_NAME :/etc/ceph", "ceph auth get client.1 > ceph.client.1.keyring exported keyring for client.1 scp ceph.client.1.keyring client:/etc/ceph root@client's password: ceph.client.1.keyring 100% 178 333.0KB/s 00:00", "mkdir PATH_TO_NEW_DIRECTORY_NAME", "mkdir /mnt/mycephfs", "ceph-fuse PATH_TO_NEW_DIRECTORY_NAME -n CEPH_USER_NAME --client-fs=_FILE_SYSTEM_NAME", "ceph-fuse /mnt/mycephfs/ -n client.1 --client-fs=cephfs01 ceph-fuse[555001]: starting ceph client 2022-05-09T07:33:27.158+0000 7f11feb81200 -1 init, newargv = 0x55fc4269d5d0 newargc=15 ceph-fuse[555001]: starting fuse", "ceph osd pool create DATA_POOL_NAME erasure", "ceph osd pool create cephfs-data-ec01 erasure pool 'cephfs-data-ec01' created", "ceph osd lspools", "ceph osd pool set DATA_POOL_NAME allow_ec_overwrites true", "ceph osd pool set cephfs-data-ec01 allow_ec_overwrites true set pool 15 allow_ec_overwrites to true", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "ceph fs add_data_pool FILE_SYSTEM_NAME DATA_POOL_NAME", "ceph fs add_data_pool cephfs-ec cephfs-data-ec01", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec 
cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T cephfs-data-ec01 data 0 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "mkdir PATH_TO_DIRECTORY setfattr -n ceph.dir.layout.pool -v DATA_POOL_NAME PATH_TO_DIRECTORY", "mkdir /mnt/cephfs/newdir setfattr -n ceph.dir.layout.pool -v cephfs-data-ec01 /mnt/cephfs/newdir", "cephadm shell", "ceph fs authorize FILE_SYSTEM_NAME client. CLIENT_NAME / DIRECTORY CAPABILITY [/ DIRECTORY CAPABILITY ]", "ceph fs authorize cephfs_a client.1 / r /temp rw client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==", "ceph fs authorize cephfs_a client.1 /temp rw", "ceph auth get client. ID", "ceph auth get client.1 client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps mds = \"allow r, allow rw path=/temp\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"", "ceph auth get client. ID -o ceph.client. ID .keyring", "ceph auth get client.1 -o ceph.client.1.keyring exported keyring for client.1", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "chmod 644 ceph.client. ID .keyring", "chmod 644 /etc/ceph/ceph.client.1.keyring", "subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-common", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "mount -t ceph MONITOR-1_NAME :6789, MONITOR-2_NAME :6789, MONITOR-3_NAME :6789:/ MOUNT_POINT -o name= CLIENT_ID ,fs= FILE_SYSTEM_NAME", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o nowsync,name=1,fs=cephfs01", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "#DEVICE PATH TYPE OPTIONS MON_0_HOST : PORT , MOUNT_POINT ceph name= CLIENT_ID , MON_1_HOST : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , fs= FILE_SYSTEM_NAME , MON_2_HOST : PORT :/q[_VOL_]/ SUB_VOL / UID_SUB_VOL , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/cephfs ceph name=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ fs=cephfs01, _netdev,noatime", "subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "ceph-fuse -n client. 
CLIENT_ID --client_fs FILE_SYSTEM_NAME MOUNT_POINT", "ceph-fuse -n client.1 --client_fs cephfs01 /mnt/mycephfs", "ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs", "ceph-fuse -n client. CLIENT_ID MOUNT_POINT -r PATH", "ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs", "ceph-fuse -n client.1 /mnt/cephfs --client_reconnect_stale=true", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "#DEVICE PATH TYPE OPTIONS DUMP FSCK HOST_NAME : PORT , MOUNT_POINT fuse.ceph ceph.id= CLIENT_ID , 0 0 HOST_NAME : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , HOST_NAME : PORT :/ ceph.client_fs= FILE_SYSTEM_NAME ,ceph.name= USERNAME ,ceph.keyring=/etc/ceph/ KEYRING_FILE , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/mycephfs fuse.ceph ceph.id=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ ceph.client_fs=cephfs01,ceph.name=client.1,ceph.keyring=/etc/ceph/client1.keyring, _netdev,defaults" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/file_system_guide/deployment-of-the-ceph-file-system
C.2.2. Non-typed Child Resource Start and Stop Ordering
C.2.2. Non-typed Child Resource Start and Stop Ordering Additional considerations are required for non-typed child resources. For a non-typed child resource, starting order and stopping order are not explicitly specified by the Service resource. Instead, starting order and stopping order are determined according to the order of the child resource in /etc/cluster/cluster.conf . Additionally, non-typed child resources are started after all typed child resources and stopped before any typed child resources. For example, consider the starting order and stopping order of the non-typed child resources in Example C.4, "Non-typed and Typed Child Resource in a Service" . Example C.4. Non-typed and Typed Child Resource in a Service Non-typed Child Resource Starting Order In Example C.4, "Non-typed and Typed Child Resource in a Service" , the child resources are started in the following order: lvm:1 - This is an LVM resource. All LVM resources are started first. lvm:1 ( <lvm name="1" .../> ) is the first LVM resource started among LVM resources because it is the first LVM resource listed in the Service foo portion of /etc/cluster/cluster.conf . lvm:2 - This is an LVM resource. All LVM resources are started first. lvm:2 ( <lvm name="2" .../> ) is started after lvm:1 because it is listed after lvm:1 in the Service foo portion of /etc/cluster/cluster.conf . fs:1 - This is a File System resource. If there were other File System resources in Service foo , they would start in the order listed in the Service foo portion of /etc/cluster/cluster.conf . ip:10.1.1.1 - This is an IP Address resource. If there were other IP Address resources in Service foo , they would start in the order listed in the Service foo portion of /etc/cluster/cluster.conf . script:1 - This is a Script resource. If there were other Script resources in Service foo , they would start in the order listed in the Service foo portion of /etc/cluster/cluster.conf . nontypedresource:foo - This is a non-typed resource. Because it is a non-typed resource, it is started after the typed resources start. In addition, its order in the Service resource is before the other non-typed resource, nontypedresourcetwo:bar ; therefore, it is started before nontypedresourcetwo:bar . (Non-typed resources are started in the order that they appear in the Service resource.) nontypedresourcetwo:bar - This is a non-typed resource. Because it is a non-typed resource, it is started after the typed resources start. In addition, its order in the Service resource is after the other non-typed resource, nontypedresource:foo ; therefore, it is started after nontypedresource:foo . (Non-typed resources are started in the order that they appear in the Service resource.) Non-typed Child Resource Stopping Order In Example C.4, "Non-typed and Typed Child Resource in a Service" , the child resources are stopped in the following order: nontypedresourcetwo:bar - This is a non-typed resource. Because it is a non-typed resource, it is stopped before the typed resources are stopped. In addition, its order in the Service resource is after the other non-typed resource, nontypedresource:foo ; therefore, it is stopped before nontypedresource:foo . (Non-typed resources are stopped in the reverse order that they appear in the Service resource.) nontypedresource:foo - This is a non-typed resource. Because it is a non-typed resource, it is stopped before the typed resources are stopped. 
In addition, its order in the Service resource is before the other non-typed resource, nontypedresourcetwo:bar ; therefore, it is stopped after nontypedresourcetwo:bar . (Non-typed resources are stopped in the reverse order that they appear in the Service resource.) script:1 - This is a Script resource. If there were other Script resources in Service foo , they would stop in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . ip:10.1.1.1 - This is an IP Address resource. If there were other IP Address resources in Service foo , they would stop in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . fs:1 - This is a File System resource. If there were other File System resources in Service foo , they would stop in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . lvm:2 - This is an LVM resource. All LVM resources are stopped last. lvm:2 ( <lvm name="2" .../> ) is stopped before lvm:1 ; resources within a group of a resource type are stopped in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . lvm:1 - This is an LVM resource. All LVM resources are stopped last. lvm:1 ( <lvm name="1" .../> ) is stopped after lvm:2 ; resources within a group of a resource type are stopped in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf .
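To review the order on a running cluster member, the service definition can be printed straight from the configuration file. The following is a minimal sketch that assumes the service is named foo as in Example C.4 and that the configuration lives at the default /etc/cluster/cluster.conf path:
# Print the <service name="foo"> block; typed child resources start
# according to their type order, and non-typed child resources start
# after them in the order they are listed in this block.
sed -n '/<service name="foo"/,/<\/service>/p' /etc/cluster/cluster.conf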
[ "<service name=\"foo\"> <script name=\"1\" .../> <nontypedresource name=\"foo\"/> <lvm name=\"1\" .../> <nontypedresourcetwo name=\"bar\"/> <ip address=\"10.1.1.1\" .../> <fs name=\"1\" .../> <lvm name=\"2\" .../> </service>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-clust-rsc-non-typed-resources-ca
Chapter 5. Using the client registration service
Chapter 5. Using the client registration service For an application or service to use Red Hat build of Keycloak, it must register a client in Red Hat build of Keycloak. An admin can do this through the admin console (or admin REST endpoints), but clients can also register themselves through the Red Hat build of Keycloak client registration service. The Client Registration Service provides built-in support for Red Hat build of Keycloak Client Representations, OpenID Connect Client Meta Data and SAML Entity Descriptors. The Client Registration Service endpoint is /realms/<realm>/clients-registrations/<provider> . The built-in supported providers are: default - Red Hat build of Keycloak Client Representation (JSON) install - Red Hat build of Keycloak Adapter Configuration (JSON) openid-connect - OpenID Connect Client Metadata Description (JSON) saml2-entity-descriptor - SAML Entity Descriptor (XML) The following sections describe how to use the different providers. 5.1. Authentication To invoke the Client Registration Services, you usually need a token. The token can be a bearer token, an initial access token or a registration access token. You can also register a new client without any token, but then you need to configure Client Registration Policies (see below). 5.1.1. Bearer token The bearer token can be issued on behalf of a user or a Service Account. The following permissions are required to invoke the endpoints (see Server Administration Guide for more details): create-client or manage-client - To create clients view-client or manage-client - To view clients manage-client - To update or delete clients If you are using a bearer token to create clients, it is recommended to use a token from a Service Account with only the create-client role (see Server Administration Guide for more details). 5.1.2. Initial Access Token The recommended approach to registering new clients is by using initial access tokens. An initial access token can only be used to create clients and has a configurable expiration as well as a configurable limit on how many clients can be created. An initial access token can be created through the admin console. To create a new initial access token, first select the realm in the admin console, then click on Client in the menu on the left, followed by Initial access token in the tabs displayed in the page. You will now be able to see any existing initial access tokens. If you have access, you can delete tokens that are no longer required. You can only retrieve the value of the token when you are creating it. To create a new token, click on Create . You can optionally set how long the token should be valid and how many clients can be created by using the token. After you click on Save , the token value is displayed. It is important that you copy/paste this token now, because you won't be able to retrieve it later. If you forget to copy/paste it, then delete the token and create another one. The token value is used as a standard bearer token when invoking the Client Registration Services, by adding it to the Authorization header in the request. For example: 5.1.3. Registration Access Token When you create a client through the Client Registration Service, the response will include a registration access token. The registration access token provides access to retrieve the client configuration later, but also to update or delete the client. The registration access token is included with the request in the same way as a bearer token or initial access token.
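For example, the registration access token returned from a create request could later be used to read the client configuration back. This is a minimal sketch; the host, realm, client id, and token value are placeholders for your environment:
# Retrieve the client configuration with the registration access token
curl -X GET \
  -H "Authorization: bearer <registration-access-token>" \
  http://localhost:8080/realms/master/clients-registrations/default/myclient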
By default, registration access token rotation is enabled. This means a registration access token is only valid once. When the token is used, the response will include a new token. Note that registration access token rotation can be disabled by using Client Policies . If a client was created outside of the Client Registration Service, it won't have a registration access token associated with it. You can create one through the admin console. This can also be useful if you lose the token for a particular client. To create a new token, find the client in the admin console and click on Credentials . Then click on Generate registration access token . 5.2. Red Hat build of Keycloak Representations The default client registration provider can be used to create, retrieve, update and delete a client. It uses the Red Hat build of Keycloak Client Representation format, which provides support for configuring clients exactly as they can be configured through the admin console, including for example configuring protocol mappers. To create a client, create a Client Representation (JSON), then perform an HTTP POST request to /realms/<realm>/clients-registrations/default . It will return a Client Representation that also includes the registration access token. You should save the registration access token somewhere if you want to retrieve the config, update or delete the client later. To retrieve the Client Representation, perform an HTTP GET request to /realms/<realm>/clients-registrations/default/<client id> . It will also return a new registration access token. To update the Client Representation, perform an HTTP PUT request with the updated Client Representation to: /realms/<realm>/clients-registrations/default/<client id> . It will also return a new registration access token. To delete the Client Representation, perform an HTTP DELETE request to: /realms/<realm>/clients-registrations/default/<client id> 5.3. Red Hat build of Keycloak adapter configuration The installation client registration provider can be used to retrieve the adapter configuration for a client. In addition to token authentication, you can also authenticate with client credentials using HTTP basic authentication. To do this, include the following header in the request: To retrieve the Adapter Configuration, perform an HTTP GET request to /realms/<realm>/clients-registrations/install/<client id> . No authentication is required for public clients. This means that for the JavaScript adapter you can load the client configuration directly from Red Hat build of Keycloak using the above URL. 5.4. OpenID Connect Dynamic Client Registration Red Hat build of Keycloak implements OpenID Connect Dynamic Client Registration , which extends OAuth 2.0 Dynamic Client Registration Protocol and OAuth 2.0 Dynamic Client Registration Management Protocol . The endpoint to use these specifications to register clients in Red Hat build of Keycloak is /realms/<realm>/clients-registrations/openid-connect[/<client id>] . This endpoint can also be found in the OpenID Connect Discovery endpoint for the realm, /realms/<realm>/.well-known/openid-configuration . 5.5. SAML Entity Descriptors The SAML Entity Descriptor endpoint only supports using SAML v2 Entity Descriptors to create clients. It doesn't support retrieving, updating or deleting clients. For those operations, the Red Hat build of Keycloak representation endpoints should be used.
When you create a client, a Red Hat build of Keycloak Client Representation is returned with details about the created client, including a registration access token. To create a client, perform an HTTP POST request with the SAML Entity Descriptor to /realms/<realm>/clients-registrations/saml2-entity-descriptor . 5.6. Example using CURL The following example creates a client with the clientId myclient using CURL. You need to replace eyJhbGciOiJSUz... with a proper initial access token or bearer token. curl -X POST \ -d '{ "clientId": "myclient" }' \ -H "Content-Type:application/json" \ -H "Authorization: bearer eyJhbGciOiJSUz..." \ http://localhost:8080/realms/master/clients-registrations/default 5.7. Example using Java Client Registration API The Client Registration Java API makes it easy to use the Client Registration Service using Java. To use it, include the dependency org.keycloak:keycloak-client-registration-api:>VERSION< from Maven. For full instructions on using the Client Registration API, refer to the JavaDocs. Below is an example of creating a client. You need to replace eyJhbGciOiJSUz... with a proper initial access token or bearer token. String token = "eyJhbGciOiJSUz..."; ClientRepresentation client = new ClientRepresentation(); client.setClientId(CLIENT_ID); ClientRegistration reg = ClientRegistration.create() .url("http://localhost:8080", "myrealm") .build(); reg.auth(Auth.token(token)); client = reg.create(client); String registrationAccessToken = client.getRegistrationAccessToken(); 5.8. Client Registration Policies Note The current plans are for the Client Registration Policies to be removed in favor of the Client Policies described in the Server Administration Guide . Client Policies are more flexible and support more use cases. Red Hat build of Keycloak currently supports two ways in which new clients can be registered through the Client Registration Service. Authenticated requests - A request to register a new client must contain either an Initial Access Token or a Bearer Token, as mentioned above. Anonymous requests - A request to register a new client does not need to contain any token at all. Anonymous client registration requests are a powerful feature, but you usually do not want anyone to be able to register new clients without any limitations. Therefore, the Client Registration Policy SPI provides a way to limit who can register new clients and under which conditions. In the Red Hat build of Keycloak admin console, you can click the Client Registration tab and then the Client Registration Policies sub-tab. Here you will see what policies are configured by default for anonymous requests and what policies are configured for authenticated requests. Note Anonymous requests (requests without any token) are allowed only for creating (registering) new clients. When you register a new client through an anonymous request, the response contains a Registration Access Token, which must be used for Read, Update or Delete requests for that particular client. However, using this Registration Access Token from an anonymous registration is then subject to the Anonymous Policy too. This means, for example, that a request to update a client also needs to come from a trusted host if you have the Trusted Hosts policy. Similarly, it is not allowed to disable Consent Required when updating a client if the Consent Required policy is present. The following policy implementations are currently available: Trusted Hosts Policy - You can configure a list of trusted hosts and trusted domains.
Requests to the Client Registration Service can be sent only from those hosts or domains, and requests sent from untrusted IP addresses are rejected. URLs of a newly registered client must also use only those trusted hosts or domains; for example, it is not allowed to set a client Redirect URI that points to an untrusted host. By default, no host is whitelisted, so anonymous client registration is effectively disabled. Consent Required Policy - Newly registered clients will have the Consent Allowed switch enabled. After successful authentication, the user will always see a consent screen where they need to approve permissions (client scopes). This means that the client won't have access to any personal information or permission of the user unless the user approves it. Protocol Mappers Policy - Allows you to configure a list of whitelisted protocol mapper implementations. A new client can't be registered or updated if it contains a non-whitelisted protocol mapper. Note that this policy is used for authenticated requests as well, so even for authenticated requests there are some limitations on which protocol mappers can be used. Client Scope Policy - Allows you to whitelist the Client Scopes that can be used with newly registered or updated clients. No client scopes are whitelisted by default, except the client scopes that are defined as Realm Default Client Scopes. Full Scope Policy - Newly registered clients will have the Full Scope Allowed switch disabled. This means they won't have any scoped realm roles or client roles of other clients. Max Clients Policy - Rejects registration if the current number of clients in the realm is the same as or greater than the specified limit. The default limit is 200 for anonymous registrations. Client Disabled Policy - Newly registered clients will be disabled. This means that an admin needs to manually approve and enable all newly registered clients. This policy is not used by default, even for anonymous registration.
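For example, a client that was registered through an anonymous request can later be updated only with the Registration Access Token from the registration response, and the update is still evaluated against the anonymous policies. This is a minimal sketch; the host, realm, client id, token, and redirect URI are placeholders:
# Update an anonymously registered client; the Trusted Hosts and
# Consent Required policies still apply to this request
curl -X PUT \
  -d '{ "clientId": "myclient", "redirectUris": ["https://trusted.example.com/*"] }' \
  -H "Content-Type:application/json" \
  -H "Authorization: bearer <registration-access-token>" \
  http://localhost:8080/realms/master/clients-registrations/default/myclient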
[ "Authorization: bearer eyJhbGciOiJSUz", "Authorization: basic BASE64(client-id + ':' + client-secret)", "curl -X POST -d '{ \"clientId\": \"myclient\" }' -H \"Content-Type:application/json\" -H \"Authorization: bearer eyJhbGciOiJSUz...\" http://localhost:8080/realms/master/clients-registrations/default", "String token = \"eyJhbGciOiJSUz...\"; ClientRepresentation client = new ClientRepresentation(); client.setClientId(CLIENT_ID); ClientRegistration reg = ClientRegistration.create() .url(\"http://localhost:8080\", \"myrealm\") .build(); reg.auth(Auth.token(token)); client = reg.create(client); String registrationAccessToken = client.getRegistrationAccessToken();" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/securing_applications_and_services_guide/client_registration
Chapter 10. Hardware Enablement
Chapter 10. Hardware Enablement Support added for the CAPI flash block adapter The Coherent Accelerator Processor Interface (CAPI) is a technology that enables I/O adapters to coherently access host memory, and thus improves performance. This update adds the cxlflash driver, which provides support for IBM's CAPI flash block adapter. (BZ#1182021) MMC kernel rebased to version 4.5 With this update, the Multimedia Card (MMC) kernel subsystem has been upgraded to upstream version 4.5, which fixes multiple bugs and also enables the Red Hat Enterprise Linux 7 kernel to use the embedded MMC (eMMC) interface version 5.0. In addition, the update improves the suspend and resume functionality of MMC devices, as well as their general stability. (BZ#1297039) iWARP mapper service added This update adds support for the internet Wide Area RDMA Protocol (iWARP) mapper to Red Hat Enterprise Linux 7. The iWARP mapper is a user-space service that enables the following iWARP drivers to claim TCP ports using the standard socket interface: Intel i40iw NES Chelsio cxgb4 Note that both the iw_cm and ib_core kernel modules need to be loaded for the iWARP mapper service (iwpmd) to start successfully. (BZ#1331651) New package: memkind This update adds the memkind package, which provides a user-extensible heap manager library, built as an extension of the jemalloc memory allocator. This library enables partitioning of the memory heap between memory types that are defined when the operating system policies are applied to virtual address ranges. In addition, memkind enables the user to control memory partition features and allocate memory with a specified set of memory features selected. (BZ#1210910) Per-port MSI-X support for the AHCI driver The driver for the Advanced Host Controller Interface (AHCI) has been updated for per-port message-signaled interrupt (MSI-X) vectors. Note that this applies only to controllers that support the feature. (BZ#1286946) Runtime Instrumentation for IBM z Systems is now fully supported The Runtime Instrumentation feature, previously available as a Technology Preview, is now fully supported in Red Hat Enterprise Linux 7 on IBM z Systems. Runtime Instrumentation enables advanced analysis and execution for a number of user-space applications available with the IBM zEnterprise EC12 system. (BZ#1115947)
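As a minimal sketch of that module requirement (assuming the service unit is named iwpmd, as in the note above), the modules can be loaded before the service is started:
# Load the kernel modules required by the iWARP port mapper
modprobe iw_cm
modprobe ib_core
# Start the iWARP port mapper service
systemctl start iwpmd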
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/new_features_hardware_enablement
1.4. Installing Supporting Components on Client Machines
1.4. Installing Supporting Components on Client Machines 1.4.1. Installing Console Components A console is a graphical window that allows you to view the start up screen, shut down screen, and desktop of a virtual machine, and to interact with that virtual machine in a similar way to a physical machine. In Red Hat Virtualization, the default application for opening a console to a virtual machine is Remote Viewer, which must be installed on the client machine prior to use. 1.4.1.1. Installing Remote Viewer on Red Hat Enterprise Linux The Remote Viewer application provides users with a graphical console for connecting to virtual machines. Once installed, it is called automatically when attempting to open a SPICE session with a virtual machine. Alternatively, it can also be used as a standalone application. Remote Viewer is included in the virt-viewer package provided by the base Red Hat Enterprise Linux Workstation and Red Hat Enterprise Linux Server repositories. Installing Remote Viewer on Linux Install the virt-viewer package: Restart your browser for the changes to take effect. You can now connect to your virtual machines using either the SPICE protocol or the VNC protocol. 1.4.1.2. Installing Remote Viewer on Windows The Remote Viewer application provides users with a graphical console for connecting to virtual machines. Once installed, it is called automatically when attempting to open a SPICE session with a virtual machine. Alternatively, it can also be used as a standalone application. Installing Remote Viewer on Windows Open a web browser and download one of the following installers according to the architecture of your system. Virt Viewer for 32-bit Windows: Virt Viewer for 64-bit Windows: Open the folder where the file was saved. Double-click the file. Click Run if prompted by a security warning. Click Yes if prompted by User Account Control. Remote Viewer is installed and can be accessed via Remote Viewer in the VirtViewer folder of All Programs in the start menu. 1.4.2. Installing usbdk on Windows usbdk is a driver that enables remote-viewer exclusive access to USB devices on Windows operating systems. Installing usbdk requires Administrator privileges. Note that the previously supported USB Clerk option has been deprecated and is no longer supported. Installing usbdk on Windows Open a web browser and download one of the following installers according to the architecture of your system. usbdk for 32-bit Windows: usbdk for 64-bit Windows: Open the folder where the file was saved. Double-click the file. Click Run if prompted by a security warning. Click Yes if prompted by User Account Control.
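When used as a standalone application, Remote Viewer can also be started from a terminal with either a saved console file or a connection URI. This is a minimal sketch; the file name, host name, and port are placeholders for your environment:
# Open a console from a saved console.vv file
remote-viewer console.vv
# Or connect directly with a SPICE or VNC URI
remote-viewer spice://vm.example.com:5900
remote-viewer vnc://vm.example.com:5900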
[ "yum install virt-viewer", "https:// your-manager-fqdn /ovirt-engine/services/files/spice/virt-viewer-x86.msi", "https:// your-manager-fqdn /ovirt-engine/services/files/spice/virt-viewer-x64.msi", "https:// [your manager's address] /ovirt-engine/services/files/spice/usbdk-x86.msi", "https:// [your manager's address] /ovirt-engine/services/files/spice/usbdk-x64.msi" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-Installing_Supporting_Components
Installing on any platform
Installing on any platform OpenShift Container Platform 4.14 Installing OpenShift Container Platform on any platform Red Hat OpenShift Documentation Team
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda 
coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", 
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: 
Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/installing_on_any_platform/index
Chapter 1. Red Hat OpenStack Services on OpenShift 18.0 adoption overview
Chapter 1. Red Hat OpenStack Services on OpenShift 18.0 adoption overview Adoption is the process of migrating a Red Hat OpenStack Platform (RHOSP) 17.1 overcloud to a Red Hat OpenStack Services on OpenShift 18.0 data plane. To ensure that you understand the entire adoption process and how to sufficiently prepare your RHOSP environment, review the prerequisites, adoption process, and post-adoption tasks. Important It is important to read the whole adoption guide before you start the adoption. You should form an understanding of the procedure, prepare the necessary configuration snippets for each service ahead of time, and test the procedure in a representative test environment before you adopt your main environment. 1.1. Adoption limitations Before you proceed with the adoption, check which features are considered a Technology Preview or are unsupported. Technology Preview The following features are considered a Technology Preview and have not been tested within the context of the Red Hat OpenStack Services on OpenShift adoption: Bare Metal Provisioning service (ironic) NFS Ganesha back end for Shared File Systems service (manila) iSCSI, NFS, and FC-based drivers for Block Storage service (cinder) The following Compute service (nova) features: Compute hosts with /var/lib/nova/instances on NFS NUMA aware vswitches PCI passthrough by flavor SR-IOV trusted virtual functions RX and TX queue sizes vGPU Virtio multiqueue Emulated virtual Trusted Platform Module (vTPM) UEFI AMD SEV Direct download from Rados Block Device (RBD) File-backed memory Provider.yaml Unsupported features The adoption process does not support the following features: Red Hat OpenStack Platform (RHOSP) 17.1 multi-cell deployments instanceHA DCN Designate Loadbalancer service (octavia) BGP IPv6 NFS back end for ephemeral Compute service virtual machine instances storage Adopting a FIPS environment The Key Manager service only supports the simple crypto plug-in The Block Storage service only supports RBD back-end adoption 1.2. Known issues Review the following known issues that might affect a successful adoption: Adoption of combined Controller/Networker nodes not verified Red Hat has not verified a process for adoption of a Red Hat OpenStack Platform (RHOSP) 17.1 environment where Controller and Networker roles are composed together on Controller nodes. If your RHOSP 17.1 environment does use combined Controller/Networker roles on the Controller nodes, the documented adoption process will not produce the expected results. Adoption of RHOSP 17.1 environments that use dedicated Networker nodes has been verified to work as documented. 1.3. Adoption prerequisites Before you begin the adoption procedure, complete the following prerequisites: Planning information Review the Adoption limitations . Review the Red Hat OpenShift Container Platform (RHOCP) requirements, data plane node requirements, Compute node requirements, and so on. For more information, see Planning your deployment . Review the adoption-specific networking requirements. For more information, see Configuring the network for the RHOSO deployment . Review the adoption-specific storage requirements. For more information, see Storage requirements . Review how to customize your deployed control plane with the services that are required for your environment. For more information, see Customizing the Red Hat OpenStack Services on OpenShift deployment . 
Familiarize yourself with the following RHOCP concepts that are used during adoption: Overview of nodes About node selectors Machine configuration overview Back-up information Back up your Red Hat OpenStack Platform (RHOSP) 17.1 environment by using one of the following options: The Relax-and-Recover tool. For more information, see Backing up the undercloud and the control plane nodes by using the Relax-and-Recover tool in Backing up and restoring the undercloud and control plane nodes . The Snapshot and Revert tool. For more information, see Backing up your Red Hat OpenStack Platform cluster by using the Snapshot and Revert tool in Backing up and restoring the undercloud and control plane nodes . A third-party backup and recovery tool. For more information about certified backup and recovery tools, see the Red Hat Ecosystem Catalog . Back up the configuration files from the RHOSP services and director on your file system. For more information, see Pulling the configuration from a director deployment . Compute Upgrade your Compute nodes to Red Hat Enterprise Linux 9.2. For more information, see Upgrading all Compute nodes to RHEL 9.2 in Framework for upgrades (16.2 to 17.1) . Perform a minor update to the latest RHOSP version. For more information, see Performing a minor update of Red Hat OpenStack Platform . If the systemd-container package is not installed on your Compute hosts, install it by using the following command: USD sudo dnf -y install systemd-container Reboot all hypervisors one by one to activate the systemd-container package. To avoid interrupting your workloads during the reboot, live migrate virtual machine instances before rebooting a node. For more information, see Rebooting Compute nodes in Performing a minor update of Red Hat OpenStack Platform . ML2/OVS If you use the Modular Layer 2 plug-in with Open vSwitch mechanism driver (ML2/OVS), migrate it to the Modular Layer 2 plug-in with Open Virtual Networking (ML2/OVN) mechanism driver. For more information, see Migrating to the OVN mechanism driver . Tools Install the oc command line tool on your workstation. Install the podman command line tool on your workstation. RHOSP 17.1 release The RHOSP 17.1 cloud is updated to the latest minor version of the 17.1 release. RHOSP 17.1 hosts All control plane and data plane hosts of the RHOSP 17.1 cloud are up and running, and continue to run throughout the adoption procedure. 1.4. Guidelines for planning the adoption When planning to adopt a Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 environment, consider the scope of the change. An adoption is similar in scope to a data center upgrade. Different firmware levels, hardware vendors, hardware profiles, networking interfaces, storage interfaces, and so on affect the adoption process and can cause changes in behavior during the adoption. Review the following guidelines to adequately plan for the adoption and increase the chance that you complete the adoption successfully: Important All commands in the adoption documentation are examples. Do not copy and paste the commands without understanding what the commands do. To minimize the risk of an adoption failure, reduce the number of environmental differences between the staging environment and the production sites. If the staging environment is not representative of the production sites, or a staging environment is not available, then you must plan to include contingency time in case the adoption fails. 
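For the Tools prerequisite listed above, a quick check that both command line tools are available on the workstation looks like the following. This is a minimal sketch; how you install the tools depends on your workstation platform:
# Verify that the oc and podman command line tools are installed
oc version --client
podman --version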
Review your custom Red Hat OpenStack Platform (RHOSP) service configuration at every major release. Every major release upgrades through multiple OpenStack releases. Each major release might deprecate configuration options or change the format of the configuration. Prepare a Method of Procedure (MOP) that is specific to your environment to reduce the risk of variance or omitted steps when running the adoption process. You can use representative hardware in a staging environment to prepare a MOP and validate any content changes. Include a cross-section of firmware versions, additional interface or device hardware, and any additional software in the representative staging environment to ensure that it is broadly representative of the variety that is present in the production environments. Ensure that you validate any Red Hat Enterprise Linux update or upgrade in the representative staging environment. Use Satellite for localized and version-pinned RPM content where your data plane nodes are located. In the production environment, use the content that you tested in the staging environment. 1.5. Adoption process overview Familiarize yourself with the steps of the adoption process and the optional post-adoption tasks. Main adoption process Migrate TLS everywhere (TLS-e) to the Red Hat OpenStack Services on OpenShift (RHOSO) deployment . Migrate your existing databases to the new control plane . Adopt your Red Hat OpenStack Platform 17.1 control plane services to the new RHOSO 18.0 deployment . Adopt the RHOSO 18.0 data plane . Migrate the Object Storage service (swift) to the RHOSO nodes . Migrate the Red Hat Ceph Storage cluster . Migrate the monitoring stack component to new nodes within an existing Red Hat Ceph Storage cluster . Migrate Red Hat Ceph Storage MDS to new nodes within the existing cluster . Migrate Red Hat Ceph Storage RGW to external RHEL nodes . Migrate Red Hat Ceph Storage RBD to external RHEL nodes . Post-adoption tasks Optional: Run tempest to verify that the entire adoption process is working properly. For more information, see Validating and troubleshooting the deployed cloud . Optional: Perform a minor update from RHEL 9.2 to 9.4. You can perform a minor update any time after you complete the adoption procedure. For more information, see Updating your environment to the latest maintenance release . Optional: Verify that you migrated all services from the Controller nodes, and then power off the nodes. If any services are still running in the Controller nodes, such as Open Virtual Networking (ML2/OVN), Object Storage service (swift), or Red Hat Ceph Storage, do not power off the nodes. 1.6. Identity service authentication If you have custom policies enabled, contact Red Hat Support before adopting a director OpenStack deployment. You must complete the following steps for adoption: Remove custom policies. Run the adoption. Re-add custom policies by using the new SRBAC syntax. After you adopt a director-based OpenStack deployment to a Red Hat OpenStack Services on OpenShift deployment, the Identity service performs user authentication and authorization by using Secure RBAC (SRBAC). If SRBAC is already enabled, then there is no change to how you perform operations. If SRBAC is disabled, then adopting a director-based OpenStack deployment might change how you perform operations due to changes in API access policies. For more information on SRBAC, see Secure role based access control in Red Hat OpenStack Services on OpenShift in Performing security operations . 1.7. 
Configuring the network for the Red Hat OpenStack Services on OpenShift deployment When you adopt a new Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must align the network configuration with the adopted cluster to maintain connectivity for existing workloads. Perform the following tasks to incorporate the existing network configuration: Configure Red Hat OpenShift Container Platform (RHOCP) worker nodes to align VLAN tags and IP Address Management (IPAM) configuration with the existing deployment. Configure control plane services to use compatible IP ranges for service and load-balancing IP addresses. Configure data plane nodes to use corresponding compatible configuration for VLAN tags and IPAM. When configuring nodes and services, the general approach is as follows: For IPAM, you can either reuse subnet ranges from the existing deployment or, if there is a shortage of free IP addresses in existing subnets, define new ranges for the new control plane services. If you define new ranges, you configure IP routing between the old and new ranges. For more information, see Planning your IPAM configuration . For VLAN tags, always reuse the configuration from the existing deployment. Note For more information about the network architecture and configuration, see Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift and About networking in Networking . 1.7.1. Retrieving the network configuration from your existing deployment You must determine which isolated networks are defined in your existing deployment. After you retrieve your network configuration, you have the following information: A list of isolated networks that are used in the existing deployment. For each of the isolated networks, the VLAN tag and IP ranges used for dynamic address allocation. A list of existing IP address allocations that are used in the environment. When reusing the existing subnet ranges to host the new control plane services, these addresses are excluded from the corresponding allocation pools. Procedure Find the network configuration in the network_data.yaml file. For example: Retrieve the VLAN tag that is used in the vlan key and the IP range in the ip_subnet key for each isolated network from the network_data.yaml file. When reusing subnet ranges from the existing deployment for the new control plane services, the ranges are split into separate pools for control plane services and load-balancer IP addresses. Use the tripleo-ansible-inventory.yaml file to determine the list of IP addresses that are already consumed in the adopted environment. For each listed host in the file, make a note of the IP and VIP addresses that are consumed by the node. For example: Note In this example, the 172.17.0.2 and 172.17.0.100 values are consumed and are not available for the new control plane services until the adoption is complete. Repeat this procedure for each isolated network and each host in the configuration. 1.7.2. Planning your IPAM configuration In a Red Hat OpenStack Services on OpenShift (RHOSO) deployment, each service that is deployed on the Red Hat OpenShift Container Platform (RHOCP) worker nodes requires an IP address from the IP Address Management (IPAM) pool. In a Red Hat OpenStack Platform (RHOSP) deployment, all services that are hosted on a Controller node share the same IP address. The RHOSO control plane has different requirements for the number of IP addresses that are made available for services. 
Depending on the size of the IP ranges that are used in the existing RHOSO deployment, you might reuse these ranges for the RHOSO control plane. The total number of IP addresses that are required for the new control plane services in each isolated network is calculated as the sum of the following: The number of RHOCP worker nodes. Each worker node requires 1 IP address in the NodeNetworkConfigurationPolicy custom resource (CR). The number of IP addresses required for the data plane nodes. Each node requires an IP address from the NetConfig CRs. The number of IP addresses required for control plane services. Each service requires an IP address from the NetworkAttachmentDefinition CRs. This number depends on the number of replicas for each service. The number of IP addresses required for load balancer IP addresses. Each service requires a Virtual IP address from the IPAddressPool CRs. For example, a simple single worker node RHOCP deployment with Red Hat OpenShift Local has the following IP ranges defined for the internalapi network: 1 IP address for the single worker node 1 IP address for the data plane node NetworkAttachmentDefinition CRs for control plane services: X.X.X.30-X.X.X.70 (41 addresses) IPAllocationPool CRs for load balancer IPs: X.X.X.80-X.X.X.90 (11 addresses) This example shows a total of 54 IP addresses allocated to the internalapi allocation pools. The requirements might differ depending on the list of RHOSP services to be deployed, their replica numbers, and the number of RHOCP worker nodes and data plane nodes. Additional IP addresses might be required in future RHOSP releases, so you must plan for some extra capacity for each of the allocation pools that are used in the new environment. After you determine the required IP pool size for the new deployment, you can choose to define new IP address ranges or reuse your existing IP address ranges. Regardless of the scenario, the VLAN tags in the existing deployment are reused in the new deployment. Ensure that the VLAN tags are properly retained in the new configuration. For more information, see Configuring isolated networks . 1.7.2.1. Configuring new subnet ranges You can define new IP ranges for control plane services that belong to a different subnet that is not used in the existing cluster. Then you configure link local IP routing between the existing and new subnets to enable existing and new service deployments to communicate. This involves using the director mechanism on a pre-adopted cluster to configure additional link local routes. This enables the data plane deployment to reach out to Red Hat OpenStack Platform (RHOSP) nodes by using the existing subnet addresses. You can use new subnet ranges with any existing subnet configuration, and when the existing cluster subnet ranges do not have enough free IP addresses for the new control plane services. You must size the new subnet appropriately to accommodate the new control plane services. There are no specific requirements for the existing deployment allocation pools that are already consumed by the RHOSP environment. Important Defining a new subnet for Storage and Storage management is not supported because Compute service (nova) and Red Hat Ceph Storage do not allow modifying those networks during adoption. In the following procedure, you configure NetworkAttachmentDefinition custom resources (CRs) to use a different subnet from what is configured in the network_config section of the OpenStackDataPlaneNodeSet CR for the same networks. 
The new range in the NetworkAttachmentDefinition CR is used for control plane services, while the existing range in the OpenStackDataPlaneNodeSet CR is used to manage IP Address Management (IPAM) for data plane nodes. The values that are used in the following procedure are examples. Use values that are specific to your configuration. Procedure Configure link local routes on the existing deployment nodes for the control plane subnets. This is done through director configuration: 1 The new control plane subnet. 2 The control plane IP address of the existing data plane node. Repeat this configuration for other networks that need to use different subnets for the new and existing parts of the deployment. Apply the new configuration to every RHOSP node: Optional: Include the --templates option to use your own templates instead of the default templates located in /usr/share/openstack-tripleo-heat-templates . Replace <templates_directory> with the path to the directory that contains your templates. Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud . Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. The cli-overcloud-node-network-config.yaml playbook uses the os-net-config tool to apply the network configuration on the deployed nodes. If you do not use --network-config to provide the network definitions, then you must configure the {{role.name}}NetworkConfigTemplate parameters in your network-environment.yaml file, otherwise the default network definitions are used. Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml . Replace <node_definition_file> with the name of your node definition file, for example, overcloud-baremetal-deploy.yaml . Ensure that the network_config_update variable is set to true in the node definition file. Note Network configuration changes are not applied by default to avoid the risk of network disruption. You must enforce the changes by setting the StandaloneNetworkConfigUpdate: true in the director configuration files. Confirm that there are new link local routes to the new subnet on each node. For example: You also must configure link local routes to existing deployment on Red Hat OpenStack Services on OpenShift (RHOSO) worker nodes. This is achieved by adding routes entries to the NodeNetworkConfigurationPolicy CRs for each network. For example: 1 The original subnet of the isolated network on the data plane. 2 The Red Hat OpenShift Container Platform (RHOCP) worker network interface that corresponds to the isolated network on the data plane. As a result, the following route is added to your RHOCP nodes: Later, during the data plane adoption, in the network_config section of the OpenStackDataPlaneNodeSet CR, add the same link local routes for the new control plane subnet ranges. For example: List the IP addresses that are used for the data plane nodes in the existing deployment as ansibleHost and fixedIP . For example: Important Do not change RHOSP node IP addresses during the adoption process. List previously used IP addresses in the fixedIP fields for each node entry in the nodes section of the OpenStackDataPlaneNodeSet CR. 
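The following is a minimal sketch of such a nodes entry, assuming a single data plane node named standalone that keeps its existing control plane address 192.168.122.100; the node name and addresses are placeholders for your own values.

nodes:
  standalone:
    ansible:
      ansibleHost: 192.168.122.100   # existing RHOSP node address, reused unchanged
    hostName: standalone
    networks:
      - defaultRoute: true
        fixedIP: 192.168.122.100     # previously used IP address listed in fixedIP
        name: ctlplane
        subnetName: subnet1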
Expand the SSH range for the firewall configuration to include both subnets to allow SSH access to data plane nodes from both subnets: This provides SSH access from the new subnet to the RHOSP nodes as well as the RHOSP subnets. Set edpm_network_config_update: true to enforce the changes that you are applying to the nodes. 1.7.2.2. Reusing existing subnet ranges You can reuse existing subnet ranges if they have enough IP addresses to allocate to the new control plane services. You configure the new control plane services to use the same subnet as you used in the Red Hat OpenStack Platform (RHOSP) environment, and configure the allocation pools that are used by the new services to exclude IP addresses that are already allocated to existing cluster nodes. By reusing existing subnets, you avoid additional link local route configuration between the existing and new subnets. If your existing subnets do not have enough IP addresses in the existing subnet ranges for the new control plane services, you must create new subnet ranges. For more information, see Configuring new subnet ranges . No special routing configuration is required to reuse subnet ranges. However, you must ensure that the IP addresses that are consumed by RHOSP services do not overlap with the new allocation pools configured for Red Hat OpenStack Services on OpenShift control plane services. If you are especially constrained by the size of the existing subnet, you might have to apply elaborate exclusion rules when defining allocation pools for the new control plane services. For more information, see Configuring isolated networks . 1.7.3. Configuring isolated networks Before you begin replicating your existing VLAN and IPAM configuration in the Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must have the following IP address allocations for the new control plane services: 1 IP address for each isolated network on each Red Hat OpenShift Container Platform (RHOCP) worker node. You configure these IP addresses in the NodeNetworkConfigurationPolicy custom resources (CRs) for the RHOCP worker nodes. For more information, see Configuring isolated networks on RHOCP worker nodes . 1 IP range for each isolated network for the data plane nodes. You configure these ranges in the NetConfig CRs for the data plane nodes. For more information, see Configuring isolated networks on data plane nodes . 1 IP range for each isolated network for control plane services. These ranges enable pod connectivity for isolated networks in the NetworkAttachmentDefinition CRs. For more information, see Configuring isolated networks on control plane services . 1 IP range for each isolated network for load balancer IP addresses. These IP ranges define load balancer IP addresses for MetalLB in the IPAddressPool CRs. For more information, see Configuring isolated networks on control plane services . Note The exact list and configuration of isolated networks in the following procedures should reflect the actual Red Hat OpenStack Platform environment. The number of isolated networks might differ from the examples used in the procedures. The IPAM scheme might also differ. Only the parts of the configuration that are relevant to configuring networks are shown. The values that are used in the following procedures are examples. Use values that are specific to your configuration. 1.7.3.1.
Configuring isolated networks on RHOCP worker nodes To connect service pods to isolated networks on Red Hat OpenShift Container Platform (RHOCP) worker nodes that run Red Hat OpenStack Platform services, physical network configuration on the hypervisor is required. This configuration is managed by the NMState operator, which uses NodeNetworkConfigurationPolicy custom resources (CRs) to define the desired network configuration for the nodes. Procedure For each RHOCP worker node, define a NodeNetworkConfigurationPolicy CR that describes the desired network configuration. For example: 1.7.3.2. Configuring isolated networks on control plane services After the NMState operator creates the desired hypervisor network configuration for isolated networks, you must configure the Red Hat OpenStack Platform (RHOSP) services to use the configured interfaces. You define a NetworkAttachmentDefinition custom resource (CR) for each isolated network. In some clusters, these CRs are managed by the Cluster Network Operator, in which case you use Network CRs instead. For more information, see Cluster Network Operator in Networking . Procedure Define a NetworkAttachmentDefinition CR for each isolated network. For example: Important Ensure that the interface name and IPAM range match the configuration that you used in the NodeNetworkConfigurationPolicy CRs. Optional: When reusing existing IP ranges, you can exclude part of the range that is used in the existing deployment by using the exclude parameter in the NetworkAttachmentDefinition pool. For example: 1 Defines the start of the IP range. 2 Defines the end of the IP range. 3 Excludes part of the IP range. This example excludes IP addresses 172.17.0.24/32 and 172.17.0.44/31 from the allocation pool. If your RHOSP services require load balancer IP addresses, define the pools for these services in an IPAddressPool CR. For example: Note The load balancer IP addresses belong to the same IP range as the control plane services, and are managed by MetalLB. This pool should also be aligned with the RHOSP configuration. Define IPAddressPool CRs for each isolated network that requires load balancer IP addresses. Optional: When reusing existing IP ranges, you can exclude part of the range by listing multiple entries in the addresses section of the IPAddressPool . For example: The example above would exclude the 172.17.0.65 address from the allocation pool. 1.7.3.3. Configuring isolated networks on data plane nodes Data plane nodes are configured by the OpenStack Operator and your OpenStackDataPlaneNodeSet custom resources (CRs). The OpenStackDataPlaneNodeSet CRs define your desired network configuration for the nodes. Your Red Hat OpenStack Services on OpenShift (RHOSO) network configuration should reflect the existing Red Hat OpenStack Platform (RHOSP) network setup. You must pull the network_data.yaml files from each RHOSP node and reuse them when you define the OpenStackDataPlaneNodeSet CRs. The format of the configuration does not change, so you can put network templates under edpm_network_config_template variables, either for all nodes or for each node. To ensure that the latest network configuration is used during the data plane adoption, you should also set edpm_network_config_update: true in the nodeTemplate field of the OpenStackDataPlaneNodeSet CR. Procedure Configure a NetConfig CR with your desired VLAN tags and IPAM configuration. 
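A minimal sketch of such a NetConfig CR, assuming a single internalapi network with VLAN 20 and the subnet used earlier in this section, looks like the following; adjust the network names, VLAN tags, and ranges to match your environment.

apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: netconfig
spec:
  networks:
    - name: internalapi
      dnsDomain: internalapi.example.com
      subnets:
        - name: subnet1
          vlan: 20                    # reuse the VLAN tag from the existing deployment
          cidr: 172.17.0.0/24
          allocationRanges:
            - start: 172.17.0.100     # range reserved for data plane nodes
              end: 172.17.0.250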
For example: Optional: In the NetConfig CR, list multiple ranges for the allocationRanges field to exclude some of the IP addresses, for example, to accommodate IP addresses that are already consumed by the adopted environment: This example excludes the 172.17.0.200 address from the pool. 1.8. Storage requirements Storage in a Red Hat OpenStack Platform (RHOSP) deployment refers to the following types: The storage that is needed for the service to run The storage that the service manages Before you can deploy the services in Red Hat OpenStack Services on OpenShift (RHOSO), you must review the storage requirements, plan your Red Hat OpenShift Container Platform (RHOCP) node selection, prepare your RHOCP nodes, and so on. 1.8.1. Storage driver certification Before you adopt your Red Hat OpenStack Platform 17.1 deployment to a Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 deployment, confirm that your deployed storage drivers are certified for use with RHOSO 18.0. For information on software certified for use with RHOSO 18.0, see the Red Hat Ecosystem Catalog . 1.8.2. Block Storage service guidelines Prepare to adopt your Block Storage service (cinder): Take note of the Block Storage service back ends that you use. Determine all the transport protocols that the Block Storage service back ends use, such as RBD, iSCSI, FC, NFS, NVMe-TCP, and so on. You must consider these protocols when you place the Block Storage services, and ensure that the right storage transport-related binaries are running on the Red Hat OpenShift Container Platform (RHOCP) nodes. For more information about each storage transport protocol, see RHOCP preparation for Block Storage service adoption . Use a separate Block Storage service volume service for each Block Storage service volume back end. For example, if you have an LVM back end and a Ceph back end, you need two entries in cinderVolumes , and because you cannot set global defaults for all volume services, you must define a service for each of them: apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: enabled: true template: cinderVolumes: lvm: customServiceConfig: | [DEFAULT] debug = True [lvm] < . . . > ceph: customServiceConfig: | [DEFAULT] debug = True [ceph] < . . . > Warning Check that all configuration options are still valid for the RHOSO 18.0 version. Configuration options might be deprecated, removed, or added. This applies to both back-end driver-specific configuration options and other generic options. 1.8.3. Limitations for adopting the Block Storage service Before you begin the Block Storage service (cinder) adoption, review the following limitations: There is no global nodeSelector option for all Block Storage service volumes. You must specify the nodeSelector for each back end. There are no global customServiceConfig or customServiceConfigSecrets options for all Block Storage service volumes. You must specify these options for each back end. Support for Block Storage service back ends that require kernel modules that are not included in Red Hat Enterprise Linux is not tested in Red Hat OpenStack Services on OpenShift (RHOSO). 1.8.4. RHOCP preparation for Block Storage service adoption Before you deploy Red Hat OpenStack Platform (RHOSP) in Red Hat OpenShift Container Platform (RHOCP) nodes, ensure that the networks are ready, that you decide which RHOCP nodes to restrict, and that you make any necessary changes to the RHOCP nodes.
Node selection You might need to restrict the RHOCP nodes where the Block Storage service volume and backup services run. An example of when you need to restrict nodes for a specific Block Storage service is when you deploy the Block Storage service with the LVM driver. In that scenario, the LVM data where the volumes are stored only exists in a specific host, so you need to pin the Block Storage volume service to that specific RHOCP node. Running the service on any other RHOCP node does not work. You cannot use the RHOCP host node name to restrict the LVM back end. You need to identify the LVM back end by using a unique label, an existing label, or a new label: apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: secret: osp-secret storageClass: local-storage cinder: enabled: true template: cinderVolumes: lvm-iscsi: nodeSelector: lvm: cinder-volumes < . . . > For more information about node selection, see About node selectors . Note If your nodes do not have enough local disk space for temporary images, you can use a remote NFS location by setting the extra volumes feature, extraMounts . Transport protocols Some changes to the storage transport protocols might be required for RHOCP: If you use a MachineConfig to make changes to RHOCP nodes, the nodes reboot. Check the back-end sections that are listed in the enabled_backends configuration option in your cinder.conf file to determine the enabled storage back-end sections. Depending on the back end, you can find the transport protocol by viewing the volume_driver or target_protocol configuration options. The iscsid service, multipathd service, and NVMe-TCP kernel modules start automatically on data plane nodes. NFS RHOCP connects to NFS back ends without additional changes. Rados Block Device and Red Hat Ceph Storage RHOCP connects to Red Hat Ceph Storage back ends without additional changes. You must provide credentials and configuration files to the services. iSCSI To connect to iSCSI volumes, the iSCSI initiator must run on the RHOCP hosts where the volume and backup services run. The Linux Open iSCSI initiator does not support network namespaces, so you must run only one instance of the service to serve normal RHOCP usage, the RHOCP CSI plugins, and the RHOSP services. If you are not already running iscsid on the RHOCP nodes, then you must apply a MachineConfig . For example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-master-cinder-enable-iscsid spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: iscsid.service If you use labels to restrict the nodes where the Block Storage services run, you must use a MachineConfigPool to limit the effects of the MachineConfig to the nodes where your services might run. For more information, see About node selectors . If you are using a single node deployment to test the process, replace worker with master in the MachineConfig . For production deployments that use iSCSI volumes, configure multipathing for better I/O. FC The Block Storage service volume and Block Storage service backup services must run in an RHOCP host that has host bus adapters (HBAs). If some nodes do not have HBAs, then use labels to restrict where these services run. For more information, see About node selectors . If you have virtualized RHOCP clusters that use FC, you need to expose the host HBAs inside the virtual machine.
For production deployments that use FC volumes, configure multipathing for better I/O. NVMe-TCP To connect to NVMe-TCP volumes, load NVMe-TCP kernel modules on the RHOCP hosts. If you do not already load the nvme-fabrics module on the RHOCP nodes where the volume and backup services are going to run, then you must apply a MachineConfig . For example: If you use labels to restrict the nodes where Block Storage services run, use a MachineConfigPool to limit the effects of the MachineConfig to the nodes where your services run. For more information, see About node selectors . If you use a single node deployment to test the process, replace worker with master in the MachineConfig . Only load the nvme-fabrics module because it loads the transport-specific modules, such as TCP, RDMA, or FC, as needed. For production deployments that use NVMe-TCP volumes, use multipathing for better I/O. For NVMe-TCP volumes, RHOCP uses native multipathing, called ANA. After the RHOCP nodes reboot and load the nvme-fabrics module, you can confirm that the operating system is configured and that it supports ANA by checking the host: Important ANA does not use the Linux Multipathing Device Mapper, but RHOCP requires multipathd to run on Compute nodes for the Compute service (nova) to be able to use multipathing. Multipathing is automatically configured on data plane nodes when they are provisioned. Multipathing Use multipathing for iSCSI and FC protocols. To configure multipathing on these protocols, you perform the following tasks: Prepare the RHOCP hosts Configure the Block Storage services Prepare the Compute service nodes Configure the Compute service To prepare the RHOCP hosts, ensure that the Linux Multipath Device Mapper is configured and running on the RHOCP hosts by using MachineConfig . For example: # Includes the /etc/multipathd.conf contents and the systemd unit changes apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-master-cinder-enable-multipathd spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/multipath.conf overwrite: false # Mode must be decimal, this is 0600 mode: 384 user: name: root group: name: root contents: # Source can be a http, https, tftp, s3, gs, or data as defined in rfc2397. # This is the rfc2397 text/plain string format source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D systemd: units: - enabled: true name: multipathd.service If you use labels to restrict the nodes where Block Storage services run, you need to use a MachineConfigPool to limit the effects of the MachineConfig to only the nodes where your services run. For more information, see About node selectors . If you are using a single node deployment to test the process, replace worker with master in the MachineConfig . Cinder volume and backup are configured by default to use multipathing. 1.8.5. Converting the Block Storage service configuration In your deployment, you use the same cinder.conf file for all the services. To prepare your Block Storage service (cinder) configuration for adoption, split this single-file configuration into individual configurations for each Block Storage service, as illustrated in the sketch that follows.
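The following sketch shows the general shape of that split, assuming a generic debug option shared by all services and a Ceph backup driver; the option values are illustrative only, and your own cinder.conf contents determine what goes where.

cinder:
  template:
    customServiceConfig: |          # generic options shared by every Block Storage service
      [DEFAULT]
      debug = True
    cinderAPI:
      customServiceConfig: |        # API-specific options
        [DEFAULT]
        osapi_volume_workers = 3
    cinderScheduler:
      customServiceConfig: |        # scheduler-specific options
        [DEFAULT]
        scheduler_max_attempts = 3
    cinderBackup:
      customServiceConfig: |        # backup service options
        [DEFAULT]
        backup_driver = cinder.backup.drivers.ceph.CephBackupDriver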
Review the following information to guide you in converting your configuration: Determine what part of the configuration is generic for all the Block Storage services and remove anything that would change when deployed in Red Hat OpenShift Container Platform (RHOCP), such as the connection in the [database] section, the transport_url and log_dir in the [DEFAULT] sections, the whole [coordination] and [barbican] sections. The remaining generic configuration goes into the customServiceConfig option, or a Secret custom resource (CR) and is then used in the customServiceConfigSecrets section, at the cinder: template: level. Determine if there is a scheduler-specific configuration and add it to the customServiceConfig option in cinder: template: cinderScheduler . Determine if there is an API-specific configuration and add it to the customServiceConfig option in cinder: template: cinderAPI . If the Block Storage service backup is deployed, add the Block Storage service backup configuration options to the customServiceConfig option, or to a Secret CR that you can add to the customServiceConfigSecrets section at the cinder: template: cinderBackup: level. Remove the host configuration in the [DEFAULT] section to support multiple replicas later. Determine the individual volume back-end configuration for each of the drivers. The configuration is in the specific driver section, and it includes the [backend_defaults] section and FC zoning sections if you use them. The Block Storage service operator does not support a global customServiceConfig option for all volume services. Each back end has its own section under cinder: template: cinderVolumes , and the configuration goes in the customServiceConfig option or in a Secret CR and is then used in the customServiceConfigSecrets section. If any of the Block Storage service volume drivers require a custom vendor image, find the location of the image in the Red Hat Ecosystem Catalog , and create or modify an OpenStackVersion CR to specify the custom image by using the key from the cinderVolumes section. For example, if you have the following configuration: spec: cinder: enabled: true template: cinderVolume: pure: customServiceConfigSecrets: - openstack-cinder-pure-cfg < . . . > Then the OpenStackVersion CR that describes the container image for that back end looks like the following example: apiVersion: core.openstack.org/v1beta1 kind: OpenStackVersion metadata: name: openstack spec: customContainerImages: cinderVolumeImages: pure: registry.connect.redhat.com/purestorage/openstack-cinder-volume-pure-rhosp-18-0' Note The name of the OpenStackVersion must match the name of your OpenStackControlPlane CR. If your Block Storage services use external files, for example, for a custom policy, or to store credentials or SSL certificate authority bundles to connect to a storage array, make those files available to the right containers. Use Secrets or ConfigMaps to store the information in RHOCP, and then reference them in the extraMounts key. For example, for Red Hat Ceph Storage credentials that are stored in a Secret called ceph-conf-files , you patch the top-level extraMounts key in the OpenStackControlPlane CR: spec: extraMounts: - extraVol: - extraVolType: Ceph mounts: - mountPath: /etc/ceph name: ceph readOnly: true propagation: - CinderVolume - CinderBackup - Glance volumes: - name: ceph projected: sources: - secret: name: ceph-conf-files For a service-specific file, such as the API policy, you add the configuration on the service itself.
In the following example, you include the CinderAPI configuration that references the policy you are adding from a ConfigMap called my-cinder-conf that has a policy key with the contents of the policy: spec: cinder: enabled: true template: cinderAPI: customServiceConfig: | [oslo_policy] policy_file=/etc/cinder/api/policy.yaml extraMounts: - extraVol: - extraVolType: Ceph mounts: - mountPath: /etc/cinder/api name: policy readOnly: true propagation: - CinderAPI volumes: - name: policy projected: sources: - configMap: name: my-cinder-conf items: - key: policy path: policy.yaml 1.8.6. Changes to CephFS through NFS Important This content in this section is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview . Before you begin the adoption, review the following information to understand the changes to CephFS through NFS between Red Hat OpenStack Platform (RHOSP) 17.1 and Red Hat OpenStack Services on OpenShift (RHOSO) 18.0: If the RHOSP 17.1 deployment uses CephFS through NFS as a back end for Shared File Systems service (manila), you cannot directly import the ceph-nfs service on the RHOSP Controller nodes into RHOSO 18.0. In RHOSO 18.0, the Shared File Systems service only supports using a clustered NFS service that is directly managed on the Red Hat Ceph Storage cluster. Adoption with the ceph-nfs service involves a data path disruption to existing NFS clients. On RHOSP 17.1, Pacemaker controls the high availability of the ceph-nfs service. This service is assigned a Virtual IP (VIP) address that is also managed by Pacemaker. The VIP is typically created on an isolated StorageNFS network. The Controller nodes have ordering and collocation constraints established between this VIP, ceph-nfs , and the Shared File Systems service (manila) share manager service. Prior to adopting Shared File Systems service, you must adjust the Pacemaker ordering and collocation constraints to separate the share manager service. This establishes ceph-nfs with its VIP as an isolated, standalone NFS service that you can decommission after completing the RHOSO adoption. In Red Hat Ceph Storage 7, a native clustered Ceph NFS service has to be deployed on the Red Hat Ceph Storage cluster by using the Ceph Orchestrator prior to adopting the Shared File Systems service. This NFS service eventually replaces the standalone NFS service from RHOSP 17.1 in your deployment. When the Shared File Systems service is adopted into the RHOSO 18.0 environment, it establishes all the existing exports and client restrictions on the new clustered Ceph NFS service. Clients can continue to read and write data on existing NFS shares, and are not affected until the old standalone NFS service is decommissioned. After the service is decommissioned, you can re-mount the same share from the new clustered Ceph NFS service during a scheduled downtime. To ensure that NFS users are not required to make any networking changes to their existing workloads, assign an IP address from the same isolated StorageNFS network to the clustered Ceph NFS service. NFS users only need to discover and re-mount their shares by using new export paths. When the adoption is complete, RHOSO users can query the Shared File Systems service API to list the export locations on existing shares to identify the preferred paths to mount these shares. 
These preferred paths correspond to the new clustered Ceph NFS service in contrast to other non-preferred export paths that continue to be displayed until the old isolated, standalone NFS service is decommissioned. For more information on setting up a clustered NFS service, see Creating an NFS Ganesha cluster . 1.9. Red Hat Ceph Storage prerequisites Before you migrate your Red Hat Ceph Storage cluster daemons from your Controller nodes, complete the following tasks in your Red Hat OpenStack Platform 17.1 environment: Upgrade your Red Hat Ceph Storage cluster to release 7. For more information, see Upgrading Red Hat Ceph Storage 6 to 7 in Framework for upgrades (16.2 to 17.1) . Your Red Hat Ceph Storage 7 deployment is managed by cephadm . The undercloud is still available, and the nodes and networks are managed by director. If you use an externally deployed Red Hat Ceph Storage cluster, you must recreate a ceph-nfs cluster on the target nodes as well as propagate the StorageNFS network. Complete the prerequisites for your specific Red Hat Ceph Storage environment: Red Hat Ceph Storage with monitoring stack components Red Hat Ceph Storage RGW Red Hat Ceph Storage RBD NFS Ganesha 1.9.1. Completing prerequisites for a Red Hat Ceph Storage cluster with monitoring stack components Complete the following prerequisites before you migrate a Red Hat Ceph Storage cluster with monitoring stack components. Note In addition to updating the container images related to the monitoring stack, you must update the configuration entry related to the container_image_base . This has an impact on all the Red Hat Ceph Storage daemons that rely on the undercloud images. New daemons are deployed by using the new image registry location that is configured in the Red Hat Ceph Storage cluster. Procedure Gather the current status of the monitoring stack. Verify that the hosts have no monitoring label, or no grafana , prometheus , or alertmanager labels in the case of a per-daemon placement evaluation: Note The entire relocation process is driven by cephadm and relies on labels to be assigned to the target nodes, where the daemons are scheduled. For more information about assigning labels to nodes, review the Red Hat Knowledgebase article Red Hat Ceph Storage: Supported configurations . Confirm that the cluster is healthy and that both ceph orch ls and ceph orch ps return the expected number of deployed daemons. Review and update the container image registry: Note If you run the Red Hat Ceph Storage externalization procedure after you migrate the Red Hat OpenStack Platform control plane, update the container images in the Red Hat Ceph Storage cluster configuration. The current container images point to the undercloud registry, which might not be available anymore. Because the undercloud is not available after adoption is complete, replace the undercloud-provided images with an alternative registry. Remove the undercloud container images: 1.9.2. Completing prerequisites for Red Hat Ceph Storage RGW migration Complete the following prerequisites before you begin the Ceph Object Gateway (RGW) migration. Procedure Check the current status of the Red Hat Ceph Storage nodes: Log in to controller-0 and check the Pacemaker status to identify important information for the RGW migration: Identify the ranges of the storage networks. The following is an example and the values might differ in your environment: 1 br-ex represents the External Network, where in the current environment, HAProxy has the front-end Virtual IP (VIP) assigned.
2 vlan30 represents the Storage Network, where the new RGW instances should be started on the Red Hat Ceph Storage nodes. Identify the network that you previously had in HAProxy and propagate it through director to the Red Hat Ceph Storage nodes. Use this network to reserve a new VIP that is owned by Red Hat Ceph Storage as the entry point for the RGW service. Log in to controller-0 and find the ceph_rgw section in the current HAProxy configuration: Confirm that the network is used as an HAProxy front end. The following example shows that controller-0 exposes the services by using the external network, which is absent from the Red Hat Ceph Storage nodes. You must propagate the external network through director: Note If the target nodes are not managed by director, you cannot use this procedure to configure the network. An administrator must manually configure all the required networks. Propagate the HAProxy front-end network to Red Hat Ceph Storage nodes. In the NIC template that you use to define the ceph-storage network interfaces, add the new config section in the Red Hat Ceph Storage network configuration template file, for example, /home/stack/composable_roles/network/nic-configs/ceph-storage.j2 : --- network_config: - type: interface name: nic1 use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} - type: vlan vlan_id: {{ storage_mgmt_vlan_id }} device: nic1 addresses: - ip_netmask: {{ storage_mgmt_ip }}/{{ storage_mgmt_cidr }} routes: {{ storage_mgmt_host_routes }} - type: interface name: nic2 use_dhcp: false defroute: false - type: vlan vlan_id: {{ storage_vlan_id }} device: nic2 addresses: - ip_netmask: {{ storage_ip }}/{{ storage_cidr }} routes: {{ storage_host_routes }} - type: ovs_bridge name: {{ neutron_physical_bridge_name }} dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} use_dhcp: false addresses: - ip_netmask: {{ external_ip }}/{{ external_cidr }} routes: {{ external_host_routes }} members: [] - type: interface name: nic3 primary: true Add the External Network to the bare metal file, for example, /home/stack/composable_roles/network/baremetal_deployment.yaml that is used by metalsmith : Note Ensure that network_config_update is enabled for network propagation to the target nodes when os-net-config is triggered. - name: CephStorage count: 3 hostname_format: cephstorage-%index% instances: - hostname: cephstorage-0 name: ceph-0 - hostname: cephstorage-1 name: ceph-1 - hostname: cephstorage-2 name: ceph-2 defaults: profile: ceph-storage network_config: template: /home/stack/composable_roles/network/nic-configs/ceph-storage.j2 network_config_update: true networks: - network: ctlplane vif: true - network: storage - network: storage_mgmt - network: external Configure the new network on the bare metal nodes: Verify that the new network is configured on the Red Hat Ceph Storage nodes: 1.9.3. Completing prerequisites for a Red Hat Ceph Storage RBD migration Complete the following prerequisites before you begin the Red Hat Ceph Storage Rados Block Device (RBD) migration. The target CephStorage or ComputeHCI nodes are configured to have both storage and storage_mgmt networks. This ensures that you can use both Red Hat Ceph Storage public and cluster networks from the same node. From Red Hat OpenStack Platform 17.1 and later you do not have to run a stack update. NFS Ganesha is migrated from a director deployment to cephadm . 
For more information, see Creating an NFS Ganesha cluster . The Ceph Metadata Server, monitoring stack, Ceph Object Gateway, and any other daemons that are deployed on Controller nodes are distributed according to the cardinality constraints that are described in Red Hat Ceph Storage: Supported configurations . The Red Hat Ceph Storage cluster is healthy, and the ceph -s command returns HEALTH_OK . Run os-net-config on the bare metal node and configure additional networks: If target nodes are CephStorage , ensure that the network is defined in the bare metal file for the CephStorage nodes, for example, /home/stack/composable_roles/network/baremetal_deployment.yaml : - name: CephStorage count: 2 instances: - hostname: oc0-ceph-0 name: oc0-ceph-0 - hostname: oc0-ceph-1 name: oc0-ceph-1 defaults: networks: - network: ctlplane vif: true - network: storage_cloud_0 subnet: storage_cloud_0_subnet - network: storage_mgmt_cloud_0 subnet: storage_mgmt_cloud_0_subnet network_config: template: templates/single_nic_vlans/single_nic_vlans_storage.j2 Add the missing network: Verify that the storage network is configured on the target nodes: 1.9.4. Creating an NFS Ganesha cluster Important This content in this section is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview . If you use CephFS through NFS with the Shared File Systems service (manila), you must create a new clustered NFS service on the Red Hat Ceph Storage cluster. This service replaces the standalone, Pacemaker-controlled ceph-nfs service that you use in Red Hat OpenStack Platform (RHOSP) 17.1. Procedure Identify the Red Hat Ceph Storage nodes to deploy the new clustered NFS service, for example, cephstorage-0 , cephstorage-1 , cephstorage-2 . Note You must deploy this service on the StorageNFS isolated network so that you can mount your existing shares through the new NFS export locations. You can deploy the new clustered NFS service on your existing CephStorage nodes or HCI nodes, or on new hardware that you enrolled in the Red Hat Ceph Storage cluster. If you deployed your Red Hat Ceph Storage nodes with director, propagate the StorageNFS network to the target nodes where the ceph-nfs service is deployed. Note If the target nodes are not managed by director, you cannot use this procedure to configure the network. An administrator must manually configure all the required networks. Identify the node definition file, overcloud-baremetal-deploy.yaml , that is used in the RHOSP environment. For more information about identifying the overcloud-baremetal-deploy.yaml file, see Customizing overcloud networks in Customizing the Red Hat OpenStack Services on OpenShift deployment .
Edit the networks that are associated with the Red Hat Ceph Storage nodes to include the StorageNFS network: - name: CephStorage count: 3 hostname_format: cephstorage-%index% instances: - hostname: cephstorage-0 name: ceph-0 - hostname: cephstorage-1 name: ceph-1 - hostname: cephstorage-2 name: ceph-2 defaults: profile: ceph-storage network_config: template: /home/stack/network/nic-configs/ceph-storage.j2 network_config_update: true networks: - network: ctlplane vif: true - network: storage - network: storage_mgmt - network: storage_nfs Edit the network configuration template file, for example, /home/stack/network/nic-configs/ceph-storage.j2 , for the Red Hat Ceph Storage nodes to include an interface that connects to the StorageNFS network: - type: vlan device: nic2 vlan_id: {{ storage_nfs_vlan_id }} addresses: - ip_netmask: {{ storage_nfs_ip }}/{{ storage_nfs_cidr }} routes: {{ storage_nfs_host_routes }} Update the Red Hat Ceph Storage nodes: When the update is complete, ensure that a new interface is created on the Red Hat Ceph Storage nodes and that it is tagged with the VLAN that is associated with StorageNFS . Identify the IP address from the StorageNFS network to use as the Virtual IP address (VIP) for the Ceph NFS service: In a running cephadm shell, identify the hosts for the NFS service: Label each host that you identified. Repeat this command for each host that you want to label: Replace <hostname> with the name of the host that you identified. Create the NFS cluster: Replace <VIP> with the VIP for the Ceph NFS service. Note You must set the ingress-mode argument to haproxy-protocol . No other ingress-mode is supported. This ingress mode allows you to enforce client restrictions through the Shared File Systems service. For more information on deploying the clustered Ceph NFS service, see the Management of NFS-Ganesha gateway using the Ceph Orchestrator (Limited Availability) in Red Hat Ceph Storage 7 Operations Guide . Check the status of the NFS cluster: 1.10. Comparing configuration files between deployments To help you manage the configuration for your director and Red Hat OpenStack Platform (RHOSP) services, you can compare the configuration files between your director deployment and the Red Hat OpenStack Services on OpenShift (RHOSO) cloud by using the os-diff tool. Prerequisites Golang is installed and configured on your environment: Procedure Configure the /etc/os-diff/os-diff.cfg file and the /etc/os-diff/ssh.config file according to your environment. To allow os-diff to connect to your clouds and pull files from the services that you describe in the config.yaml file, you must set the following options in the os-diff.cfg file: [Default] local_config_dir=/tmp/ service_config_file=config.yaml [Tripleo] ssh_cmd=ssh -F ssh.config 1 director_host=standalone 2 container_engine=podman connection=ssh remote_config_path=/tmp/tripleo local_config_path=/tmp/ [Openshift] ocp_local_config_path=/tmp/ocp connection=local ssh_cmd="" 1 Instructs os-diff to access your director host through SSH. The default value is ssh -F ssh.config . However, you can set the value without an ssh.config file, for example, ssh -i /home/user/.ssh/id_rsa [email protected] . 2 The host to use to access your cloud, where the podman or docker binary is installed and allowed to interact with the running containers. You can leave this key blank.
If you use a host file to connect to your cloud, configure the ssh.config file to allow os-diff to access your RHOSP environment, for example: Host * IdentitiesOnly yes Host virthost Hostname virthost IdentityFile ~/.ssh/id_rsa User root StrictHostKeyChecking no UserKnownHostsFile=/dev/null Host standalone Hostname standalone IdentityFile <path to SSH key> User root StrictHostKeyChecking no UserKnownHostsFile=/dev/null Host crc Hostname crc IdentityFile ~/.ssh/id_rsa User stack StrictHostKeyChecking no UserKnownHostsFile=/dev/null Replace <path to SSH key> with the path to your SSH key. You must provide a value for IdentityFile to get full working access to your RHOSP environment. If you use an inventory file to connect to your cloud, generate the ssh.config file from your Ansible inventory, for example, tripleo-ansible-inventory.yaml file: Verification Test your connection:
[ "sudo dnf -y install systemd-container", "- name: InternalApi mtu: 1500 vip: true vlan: 20 name_lower: internal_api dns_domain: internal.mydomain.tld. service_net_map_replace: internal subnets: internal_api_subnet: ip_subnet: '172.17.0.0/24' allocation_pools: [{'start': '172.17.0.4', 'end': '172.17.0.250'}]", "Standalone: hosts: standalone: internal_api_ip: 172.17.0.100 standalone: children: Standalone: {} vars: internal_api_vip: 172.17.0.2", "network_config: - type: ovs_bridge name: br-ctlplane routes: - ip_netmask: 0.0.0.0/0 next_hop: 192.168.1.1 - ip_netmask: 172.31.0.0/24 1 next_hop: 192.168.1.100 2", "(undercloud)USD openstack overcloud network provision [--templates <templates_directory> \\] --output <deployment_file> /home/stack/templates/<networks_definition_file>", "(undercloud)USD openstack overcloud node provision [--templates <templates_directory> \\] --stack <stack> --network-config --output <deployment_file> /home/stack/templates/<node_definition_file>", "ip route | grep 172 172.31.0.0/24 via 192.168.122.100 dev br-ctlplane", "- destination: 192.168.122.0/24 1 next-hop-interface: ospbr 2", "ip route | grep 192 192.168.122.0/24 dev ospbr proto static scope link", "nodeTemplate: ansible: ansibleUser: root ansibleVars: additional_ctlplane_host_routes: - ip_netmask: 172.31.0.0/24 next_hop: '{{ ctlplane_ip }}' edpm_network_config_template: | network_config: - type: ovs_bridge routes: {{ ctlplane_host_routes + additional_ctlplane_host_routes }}", "nodes: standalone: ansible: ansibleHost: 192.168.122.100 ansibleUser: \"\" hostName: standalone networks: - defaultRoute: true fixedIP: 192.168.122.100 name: ctlplane subnetName: subnet1", "edpm_sshd_allowed_ranges: - 192.168.122.0/24 - 172.31.0.0/24", "apiVersion: v1 items: - apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy spec: desiredState: interfaces: - description: internalapi vlan interface ipv4: address: - ip: 172.17.0.10 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: enp6s0.20 state: up type: vlan vlan: base-iface: enp6s0 id: 20 reorder-headers: true - description: storage vlan interface ipv4: address: - ip: 172.18.0.10 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: enp6s0.21 state: up type: vlan vlan: base-iface: enp6s0 id: 21 reorder-headers: true - description: tenant vlan interface ipv4: address: - ip: 172.19.0.10 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: enp6s0.22 state: up type: vlan vlan: base-iface: enp6s0 id: 22 reorder-headers: true nodeSelector: kubernetes.io/hostname: ocp-worker-0 node-role.kubernetes.io/worker: \"\"", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: internalapi namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"internalapi\", \"type\": \"macvlan\", \"master\": \"enp6s0.20\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"172.17.0.0/24\", \"range_start\": \"172.17.0.20\", \"range_end\": \"172.17.0.50\" } }", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: internalapi namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"internalapi\", \"type\": \"macvlan\", \"master\": \"enp6s0.20\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"172.17.0.0/24\", \"range_start\": \"172.17.0.20\", 1 \"range_end\": \"172.17.0.50\", 2 \"exclude\": [ 3 \"172.17.0.24/32\", \"172.17.0.44/31\" ] } }", "- apiVersion: metallb.io/v1beta1 kind: IPAddressPool spec: addresses: - 172.17.0.60-172.17.0.70", "- 
apiVersion: metallb.io/v1beta1 kind: IPAddressPool spec: addresses: - 172.17.0.60-172.17.0.64 - 172.17.0.66-172.17.0.70", "apiVersion: network.openstack.org/v1beta1 kind: NetConfig metadata: name: netconfig spec: networks: - name: internalapi dnsDomain: internalapi.example.com subnets: - name: subnet1 allocationRanges: - end: 172.17.0.250 start: 172.17.0.100 cidr: 172.17.0.0/24 vlan: 20 - name: storage dnsDomain: storage.example.com subnets: - name: subnet1 allocationRanges: - end: 172.18.0.250 start: 172.18.0.100 cidr: 172.18.0.0/24 vlan: 21 - name: tenant dnsDomain: tenant.example.com subnets: - name: subnet1 allocationRanges: - end: 172.19.0.250 start: 172.19.0.100 cidr: 172.19.0.0/24 vlan: 22", "apiVersion: network.openstack.org/v1beta1 kind: NetConfig metadata: name: netconfig spec: networks: - name: internalapi dnsDomain: internalapi.example.com subnets: - name: subnet1 allocationRanges: - end: 172.17.0.199 start: 172.17.0.100 - end: 172.17.0.250 start: 172.17.0.201 cidr: 172.17.0.0/24 vlan: 20", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: enabled: true template: cinderVolumes: lvm: customServiceConfig: | [DEFAULT] debug = True [lvm] < . . . > ceph: customServiceConfig: | [DEFAULT] debug = True [ceph] < . . . >", "oc label nodes worker0 lvm=cinder-volumes", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: secret: osp-secret storageClass: local-storage cinder: enabled: true template: cinderVolumes: lvm-iscsi: nodeSelector: lvm: cinder-volumes < . . . >", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-master-cinder-enable-iscsid spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: iscsid.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-master-cinder-load-nvme-fabrics spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modules-load.d/nvme_fabrics.conf overwrite: false # Mode must be decimal, this is 0644 mode: 420 user: name: root group: name: root contents: # Source can be a http, https, tftp, s3, gs, or data as defined in rfc2397. # This is the rfc2397 text/plain string format source: data:,nvme-fabrics", "cat /sys/module/nvme_core/parameters/multipath", "Includes the /etc/multipathd.conf contents and the systemd unit changes apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-master-cinder-enable-multipathd spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/multipath.conf overwrite: false # Mode must be decimal, this is 0600 mode: 384 user: name: root group: name: root contents: # Source can be a http, https, tftp, s3, gs, or data as defined in rfc2397. # This is the rfc2397 text/plain string format source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D systemd: units: - enabled: true name: multipathd.service", "spec: cinder: enabled: true template: cinderVolume: pure: customServiceConfigSecrets: - openstack-cinder-pure-cfg < . . . 
>", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackVersion metadata: name: openstack spec: customContainerImages: cinderVolumeImages: pure: registry.connect.redhat.com/purestorage/openstack-cinder-volume-pure-rhosp-18-0'", "spec: extraMounts: - extraVol: - extraVolType: Ceph mounts: - mountPath: /etc/ceph name: ceph readOnly: true propagation: - CinderVolume - CinderBackup - Glance volumes: - name: ceph projected: sources: - secret: name: ceph-conf-files", "spec: cinder: enabled: true template: cinderAPI: customServiceConfig: | [oslo_policy] policy_file=/etc/cinder/api/policy.yaml extraMounts: - extraVol: - extraVolType: Ceph mounts: - mountPath: /etc/cinder/api name: policy readOnly: true propagation: - CinderAPI volumes: - name: policy projected: sources: - configMap: name: my-cinder-conf items: - key: policy path: policy.yaml", "[tripleo-admin@controller-0 ~]USD sudo cephadm shell -- ceph orch host ls HOST ADDR LABELS STATUS cephstorage-0.redhat.local 192.168.24.11 osd mds cephstorage-1.redhat.local 192.168.24.12 osd mds cephstorage-2.redhat.local 192.168.24.47 osd mds controller-0.redhat.local 192.168.24.35 _admin mon mgr controller-1.redhat.local 192.168.24.53 mon _admin mgr controller-2.redhat.local 192.168.24.10 mon _admin mgr 6 hosts in cluster", "ceph config dump mgr advanced mgr/cephadm/container_image_alertmanager undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-alertmanager:v4.10 mgr advanced mgr/cephadm/container_image_base undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph mgr advanced mgr/cephadm/container_image_grafana undercloud-0.ctlplane.redhat.local:8787/rh-osbs/grafana:latest mgr advanced mgr/cephadm/container_image_node_exporter undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-node-exporter:v4.10 mgr advanced mgr/cephadm/container_image_prometheus undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus:v4.10", "cephadm shell -- ceph config rm mgr mgr/cephadm/container_image_base for i in prometheus grafana alertmanager node_exporter; do cephadm shell -- ceph config rm mgr mgr/cephadm/container_image_USDi done", "(undercloud) [stack@undercloud-0 ~]USD metalsmith list +------------------------+ +----------------+ | IP Addresses | | Hostname | +------------------------+ +----------------+ | ctlplane=192.168.24.25 | | cephstorage-0 | | ctlplane=192.168.24.10 | | cephstorage-1 | | ctlplane=192.168.24.32 | | cephstorage-2 | | ctlplane=192.168.24.28 | | compute-0 | | ctlplane=192.168.24.26 | | compute-1 | | ctlplane=192.168.24.43 | | controller-0 | | ctlplane=192.168.24.7 | | controller-1 | | ctlplane=192.168.24.41 | | controller-2 | +------------------------+ +----------------+", "Full List of Resources: * ip-192.168.24.46 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-10.0.0.103 (ocf:heartbeat:IPaddr2): Started controller-1 * ip-172.17.1.129 (ocf:heartbeat:IPaddr2): Started controller-2 * ip-172.17.3.68 (ocf:heartbeat:IPaddr2): Started controller-0 * ip-172.17.4.37 (ocf:heartbeat:IPaddr2): Started controller-1 * Container bundle set: haproxy-bundle [undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-haproxy:pcmklatest]: * haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started controller-2 * haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-0 * haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-1", "[heat-admin@controller-0 ~]USD ip -o -4 a 1: lo inet 127.0.0.1/8 scope host lo\\ valid_lft forever preferred_lft forever 2: enp1s0 inet 
192.168.24.45/24 brd 192.168.24.255 scope global enp1s0\\ valid_lft forever preferred_lft forever 2: enp1s0 inet 192.168.24.46/32 brd 192.168.24.255 scope global enp1s0\\ valid_lft forever preferred_lft forever 7: br-ex inet 10.0.0.122/24 brd 10.0.0.255 scope global br-ex\\ valid_lft forever preferred_lft forever 1 8: vlan70 inet 172.17.5.22/24 brd 172.17.5.255 scope global vlan70\\ valid_lft forever preferred_lft forever 8: vlan70 inet 172.17.5.94/32 brd 172.17.5.255 scope global vlan70\\ valid_lft forever preferred_lft forever 9: vlan50 inet 172.17.2.140/24 brd 172.17.2.255 scope global vlan50\\ valid_lft forever preferred_lft forever 10: vlan30 inet 172.17.3.73/24 brd 172.17.3.255 scope global vlan30\\ valid_lft forever preferred_lft forever 2 10: vlan30 inet 172.17.3.68/32 brd 172.17.3.255 scope global vlan30\\ valid_lft forever preferred_lft forever 11: vlan20 inet 172.17.1.88/24 brd 172.17.1.255 scope global vlan20\\ valid_lft forever preferred_lft forever 12: vlan40 inet 172.17.4.24/24 brd 172.17.4.255 scope global vlan40\\ valid_lft forever preferred_lft forever", "less /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg listen ceph_rgw bind 10.0.0.103:8080 transparent bind 172.17.3.68:8080 transparent mode http balance leastconn http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } http-request set-header X-Forwarded-Port %[dst_port] option httpchk GET /swift/healthcheck option httplog option forwardfor server controller-0.storage.redhat.local 172.17.3.73:8080 check fall 5 inter 2000 rise 2 server controller-1.storage.redhat.local 172.17.3.146:8080 check fall 5 inter 2000 rise 2 server controller-2.storage.redhat.local 172.17.3.156:8080 check fall 5 inter 2000 rise 2", "[controller-0]USD ip -o -4 a 7: br-ex inet 10.0.0.106/24 brd 10.0.0.255 scope global br-ex\\ valid_lft forever preferred_lft forever", "--- network_config: - type: interface name: nic1 use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} - type: vlan vlan_id: {{ storage_mgmt_vlan_id }} device: nic1 addresses: - ip_netmask: {{ storage_mgmt_ip }}/{{ storage_mgmt_cidr }} routes: {{ storage_mgmt_host_routes }} - type: interface name: nic2 use_dhcp: false defroute: false - type: vlan vlan_id: {{ storage_vlan_id }} device: nic2 addresses: - ip_netmask: {{ storage_ip }}/{{ storage_cidr }} routes: {{ storage_host_routes }} - type: ovs_bridge name: {{ neutron_physical_bridge_name }} dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} use_dhcp: false addresses: - ip_netmask: {{ external_ip }}/{{ external_cidr }} routes: {{ external_host_routes }} members: [] - type: interface name: nic3 primary: true", "- name: CephStorage count: 3 hostname_format: cephstorage-%index% instances: - hostname: cephstorage-0 name: ceph-0 - hostname: cephstorage-1 name: ceph-1 - hostname: cephstorage-2 name: ceph-2 defaults: profile: ceph-storage network_config: template: /home/stack/composable_roles/network/nic-configs/ceph-storage.j2 network_config_update: true networks: - network: ctlplane vif: true - network: storage - network: storage_mgmt - network: external", "(undercloud) [stack@undercloud-0]USD openstack overcloud node provision -o overcloud-baremetal-deployed-0.yaml --stack overcloud --network-config -y USDPWD/composable_roles/network/baremetal_deployment.yaml", "ip -o -4 a 1: lo inet 127.0.0.1/8 scope host lo\\ valid_lft 
forever preferred_lft forever 2: enp1s0 inet 192.168.24.54/24 brd 192.168.24.255 scope global enp1s0\\ valid_lft forever preferred_lft forever 11: vlan40 inet 172.17.4.43/24 brd 172.17.4.255 scope global vlan40\\ valid_lft forever preferred_lft forever 12: vlan30 inet 172.17.3.23/24 brd 172.17.3.255 scope global vlan30\\ valid_lft forever preferred_lft forever 14: br-ex inet 10.0.0.133/24 brd 10.0.0.255 scope global br-ex\\ valid_lft forever preferred_lft forever", "- name: CephStorage count: 2 instances: - hostname: oc0-ceph-0 name: oc0-ceph-0 - hostname: oc0-ceph-1 name: oc0-ceph-1 defaults: networks: - network: ctlplane vif: true - network: storage_cloud_0 subnet: storage_cloud_0_subnet - network: storage_mgmt_cloud_0 subnet: storage_mgmt_cloud_0_subnet network_config: template: templates/single_nic_vlans/single_nic_vlans_storage.j2", "openstack overcloud node provision -o overcloud-baremetal-deployed-0.yaml --stack overcloud-0 /--network-config -y --concurrency 2 /home/stack/metalsmith-0.yaml", "(undercloud) [stack@undercloud ~]USD ssh [email protected] ip -o -4 a 1: lo inet 127.0.0.1/8 scope host lo\\ valid_lft forever preferred_lft forever 5: br-storage inet 192.168.24.14/24 brd 192.168.24.255 scope global br-storage\\ valid_lft forever preferred_lft forever 6: vlan1 inet 192.168.24.14/24 brd 192.168.24.255 scope global vlan1\\ valid_lft forever preferred_lft forever 7: vlan11 inet 172.16.11.172/24 brd 172.16.11.255 scope global vlan11\\ valid_lft forever preferred_lft forever 8: vlan12 inet 172.16.12.46/24 brd 172.16.12.255 scope global vlan12\\ valid_lft forever preferred_lft forever", "- name: CephStorage count: 3 hostname_format: cephstorage-%index% instances: - hostname: cephstorage-0 name: ceph-0 - hostname: cephstorage-1 name: ceph-1 - hostname: cephstorage-2 name: ceph-2 defaults: profile: ceph-storage network_config: template: /home/stack/network/nic-configs/ceph-storage.j2 network_config_update: true networks: - network: ctlplane vif: true - network: storage - network: storage_mgmt - network: storage_nfs", "- type: vlan device: nic2 vlan_id: {{ storage_nfs_vlan_id }} addresses: - ip_netmask: {{ storage_nfs_ip }}/{{ storage_nfs_cidr }} routes: {{ storage_nfs_host_routes }}", "openstack overcloud node provision --stack overcloud --network-config -y -o overcloud-baremetal-deployed-storage_nfs.yaml --concurrency 2 /home/stack/network/baremetal_deployment.yaml", "openstack port list -c \"Fixed IP Addresses\" --network storage_nfs", "ceph orch host ls", "ceph orch host label add <hostname> nfs", "ceph nfs cluster create cephfs \"label:nfs\" --ingress --virtual-ip=<VIP> --ingress-mode=haproxy-protocol", "ceph nfs cluster ls ceph nfs cluster info cephfs", "dnf install -y golang-github-openstack-k8s-operators-os-diff", "[Default] local_config_dir=/tmp/ service_config_file=config.yaml [Tripleo] ssh_cmd=ssh -F ssh.config 1 director_host=standalone 2 container_engine=podman connection=ssh remote_config_path=/tmp/tripleo local_config_path=/tmp/ [Openshift] ocp_local_config_path=/tmp/ocp connection=local ssh_cmd=\"\"", "Host * IdentitiesOnly yes Host virthost Hostname virthost IdentityFile ~/.ssh/id_rsa User root StrictHostKeyChecking no UserKnownHostsFile=/dev/null Host standalone Hostname standalone IdentityFile <path to SSH key> User root StrictHostKeyChecking no UserKnownHostsFile=/dev/null Host crc Hostname crc IdentityFile ~/.ssh/id_rsa User stack StrictHostKeyChecking no UserKnownHostsFile=/dev/null", "os-diff configure -i tripleo-ansible-inventory.yaml -o ssh.config --yaml", "ssh 
-F ssh.config standalone" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/adopting_a_red_hat_openstack_platform_17.1_deployment/rhoso-180-adoption-overview_assembly
Chapter 5. Console [operator.openshift.io/v1]
Chapter 5. Console [operator.openshift.io/v1] Description Console provides a means to configure an operator to manage the console. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleSpec is the specification of the desired behavior of the Console. status object ConsoleStatus defines the observed status of the Console. 5.1.1. .spec Description ConsoleSpec is the specification of the desired behavior of the Console. Type object Property Type Description customization object customization is used to optionally provide a small set of customization options to the web console. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". plugins array (string) plugins defines a list of enabled console plugin names. providers object providers contains configuration for using specific service providers. route object route contains hostname and secret reference that contains the serving certificate. If a custom route is specified, a new route will be created with the provided hostname, under which console will be available. In case of custom hostname uses the default routing suffix of the cluster, the Secret specification for a serving certificate will not be needed. In case of custom hostname points to an arbitrary domain, manual DNS configurations steps are necessary. The default console route will be maintained to reserve the default hostname for console if the custom route is removed. If not specified, default route will be used. DEPRECATED unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. 
Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 5.1.2. .spec.customization Description customization is used to optionally provide a small set of customization options to the web console. Type object Property Type Description addPage object addPage allows customizing actions on the Add page in developer perspective. brand string brand is the default branding of the web console which can be overridden by providing the brand field. There is a limited set of specific brand options. This field controls elements of the console such as the logo. Invalid value will prevent a console rollout. customLogoFile object customLogoFile replaces the default OpenShift logo in the masthead and about dialog. It is a reference to a ConfigMap in the openshift-config namespace. This can be created with a command like 'oc create configmap custom-logo --from-file=/path/to/file -n openshift-config'. Image size must be less than 1 MB due to constraints on the ConfigMap size. The ConfigMap key should include a file extension so that the console serves the file with the correct MIME type. Recommended logo specifications: Dimensions: Max height of 68px and max width of 200px SVG format preferred customProductName string customProductName is the name that will be displayed in page titles, logo alt text, and the about dialog instead of the normal OpenShift product name. developerCatalog object developerCatalog allows to configure the shown developer catalog categories (filters) and types (sub-catalogs). documentationBaseURL string documentationBaseURL links to external documentation are shown in various sections of the web console. Providing documentationBaseURL will override the default documentation URL. Invalid value will prevent a console rollout. perspectives array perspectives allows enabling/disabling of perspective(s) that user can see in the Perspective switcher dropdown. perspectives[] object Perspective defines a perspective that cluster admins want to show/hide in the perspective switcher dropdown projectAccess object projectAccess allows customizing the available list of ClusterRoles in the Developer perspective Project access page which can be used by a project admin to specify roles to other users and restrict access within the project. If set, the list will replace the default ClusterRole options. quickStarts object quickStarts allows customization of available ConsoleQuickStart resources in console. 5.1.3. .spec.customization.addPage Description addPage allows customizing actions on the Add page in developer perspective. Type object Property Type Description disabledActions array (string) disabledActions is a list of actions that are not shown to users. Each action in the list is represented by its ID. 5.1.4. .spec.customization.customLogoFile Description customLogoFile replaces the default OpenShift logo in the masthead and about dialog. It is a reference to a ConfigMap in the openshift-config namespace. This can be created with a command like 'oc create configmap custom-logo --from-file=/path/to/file -n openshift-config'. Image size must be less than 1 MB due to constraints on the ConfigMap size. The ConfigMap key should include a file extension so that the console serves the file with the correct MIME type. 
Recommended logo specifications: Dimensions: Max height of 68px and max width of 200px SVG format preferred Type object Property Type Description key string Key allows pointing to a specific key/value inside of the configmap. This is useful for logical file references. name string 5.1.5. .spec.customization.developerCatalog Description developerCatalog allows to configure the shown developer catalog categories (filters) and types (sub-catalogs). Type object Property Type Description categories array categories which are shown in the developer catalog. categories[] object DeveloperConsoleCatalogCategory for the developer console catalog. types object types allows enabling or disabling of sub-catalog types that user can see in the Developer catalog. When omitted, all the sub-catalog types will be shown. 5.1.6. .spec.customization.developerCatalog.categories Description categories which are shown in the developer catalog. Type array 5.1.7. .spec.customization.developerCatalog.categories[] Description DeveloperConsoleCatalogCategory for the developer console catalog. Type object Required id label Property Type Description id string ID is an identifier used in the URL to enable deep linking in console. ID is required and must have 1-32 URL safe (A-Z, a-z, 0-9, - and _) characters. label string label defines a category display label. It is required and must have 1-64 characters. subcategories array subcategories defines a list of child categories. subcategories[] object DeveloperConsoleCatalogCategoryMeta are the key identifiers of a developer catalog category. tags array (string) tags is a list of strings that will match the category. A selected category show all items which has at least one overlapping tag between category and item. 5.1.8. .spec.customization.developerCatalog.categories[].subcategories Description subcategories defines a list of child categories. Type array 5.1.9. .spec.customization.developerCatalog.categories[].subcategories[] Description DeveloperConsoleCatalogCategoryMeta are the key identifiers of a developer catalog category. Type object Required id label Property Type Description id string ID is an identifier used in the URL to enable deep linking in console. ID is required and must have 1-32 URL safe (A-Z, a-z, 0-9, - and _) characters. label string label defines a category display label. It is required and must have 1-64 characters. tags array (string) tags is a list of strings that will match the category. A selected category show all items which has at least one overlapping tag between category and item. 5.1.10. .spec.customization.developerCatalog.types Description types allows enabling or disabling of sub-catalog types that user can see in the Developer catalog. When omitted, all the sub-catalog types will be shown. Type object Required state Property Type Description disabled array (string) disabled is a list of developer catalog types (sub-catalogs IDs) that are not shown to users. Types (sub-catalogs) are added via console plugins, the available types (sub-catalog IDs) are available in the console on the cluster configuration page, or when editing the YAML in the console. Example: "Devfile", "HelmChart", "BuilderImage" If the list is empty or all the available sub-catalog types are added, then the complete developer catalog should be hidden. enabled array (string) enabled is a list of developer catalog types (sub-catalogs IDs) that will be shown to users. 
Types (sub-catalogs) are added via console plugins, the available types (sub-catalog IDs) are available in the console on the cluster configuration page, or when editing the YAML in the console. Example: "Devfile", "HelmChart", "BuilderImage" If the list is non-empty, a new type will not be shown to the user until it is added to list. If the list is empty the complete developer catalog will be shown. state string state defines if a list of catalog types should be enabled or disabled. 5.1.11. .spec.customization.perspectives Description perspectives allows enabling/disabling of perspective(s) that user can see in the Perspective switcher dropdown. Type array 5.1.12. .spec.customization.perspectives[] Description Perspective defines a perspective that cluster admins want to show/hide in the perspective switcher dropdown Type object Required id visibility Property Type Description id string id defines the id of the perspective. Example: "dev", "admin". The available perspective ids can be found in the code snippet section to the yaml editor. Incorrect or unknown ids will be ignored. pinnedResources array pinnedResources defines the list of default pinned resources that users will see on the perspective navigation if they have not customized these pinned resources themselves. The list of available Kubernetes resources could be read via kubectl api-resources . The console will also provide a configuration UI and a YAML snippet that will list the available resources that can be pinned to the navigation. Incorrect or unknown resources will be ignored. pinnedResources[] object PinnedResourceReference includes the group, version and type of resource visibility object visibility defines the state of perspective along with access review checks if needed for that perspective. 5.1.13. .spec.customization.perspectives[].pinnedResources Description pinnedResources defines the list of default pinned resources that users will see on the perspective navigation if they have not customized these pinned resources themselves. The list of available Kubernetes resources could be read via kubectl api-resources . The console will also provide a configuration UI and a YAML snippet that will list the available resources that can be pinned to the navigation. Incorrect or unknown resources will be ignored. Type array 5.1.14. .spec.customization.perspectives[].pinnedResources[] Description PinnedResourceReference includes the group, version and type of resource Type object Required group resource version Property Type Description group string group is the API Group of the Resource. Enter empty string for the core group. This value should consist of only lowercase alphanumeric characters, hyphens and periods. Example: "", "apps", "build.openshift.io", etc. resource string resource is the type that is being referenced. It is normally the plural form of the resource kind in lowercase. This value should consist of only lowercase alphanumeric characters and hyphens. Example: "deployments", "deploymentconfigs", "pods", etc. version string version is the API Version of the Resource. This value should consist of only lowercase alphanumeric characters. Example: "v1", "v1beta1", etc. 5.1.15. .spec.customization.perspectives[].visibility Description visibility defines the state of perspective along with access review checks if needed for that perspective. Type object Required state Property Type Description accessReview object accessReview defines required and missing access review checks. 
state string state defines the perspective is enabled or disabled or access review check is required. 5.1.16. .spec.customization.perspectives[].visibility.accessReview Description accessReview defines required and missing access review checks. Type object Property Type Description missing array missing defines a list of permission checks. The perspective will only be shown when at least one check fails. When omitted, the access review is skipped and the perspective will not be shown unless it is required to do so based on the configuration of the required access review list. missing[] object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface required array required defines a list of permission checks. The perspective will only be shown when all checks are successful. When omitted, the access review is skipped and the perspective will not be shown unless it is required to do so based on the configuration of the missing access review list. required[] object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface 5.1.17. .spec.customization.perspectives[].visibility.accessReview.missing Description missing defines a list of permission checks. The perspective will only be shown when at least one check fails. When omitted, the access review is skipped and the perspective will not be shown unless it is required to do so based on the configuration of the required access review list. Type array 5.1.18. .spec.customization.perspectives[].visibility.accessReview.missing[] Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 5.1.19. .spec.customization.perspectives[].visibility.accessReview.required Description required defines a list of permission checks. The perspective will only be shown when all checks are successful. When omitted, the access review is skipped and the perspective will not be shown unless it is required to do so based on the configuration of the missing access review list. Type array 5.1.20. .spec.customization.perspectives[].visibility.accessReview.required[] Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. 
namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 5.1.21. .spec.customization.projectAccess Description projectAccess allows customizing the available list of ClusterRoles in the Developer perspective Project access page which can be used by a project admin to specify roles to other users and restrict access within the project. If set, the list will replace the default ClusterRole options. Type object Property Type Description availableClusterRoles array (string) availableClusterRoles is the list of ClusterRole names that are assignable to users through the project access tab. 5.1.22. .spec.customization.quickStarts Description quickStarts allows customization of available ConsoleQuickStart resources in console. Type object Property Type Description disabled array (string) disabled is a list of ConsoleQuickStart resource names that are not shown to users. 5.1.23. .spec.providers Description providers contains configuration for using specific service providers. Type object Property Type Description statuspage object statuspage contains ID for statuspage.io page that provides status info about. 5.1.24. .spec.providers.statuspage Description statuspage contains ID for statuspage.io page that provides status info about. Type object Property Type Description pageID string pageID is the unique ID assigned by Statuspage for your page. This must be a public page. 5.1.25. .spec.route Description route contains hostname and secret reference that contains the serving certificate. If a custom route is specified, a new route will be created with the provided hostname, under which console will be available. In case of custom hostname uses the default routing suffix of the cluster, the Secret specification for a serving certificate will not be needed. In case of custom hostname points to an arbitrary domain, manual DNS configurations steps are necessary. The default console route will be maintained to reserve the default hostname for console if the custom route is removed. If not specified, default route will be used. DEPRECATED Type object Property Type Description hostname string hostname is the desired custom domain under which console will be available. secret object secret points to secret in the openshift-config namespace that contains custom certificate and key and needs to be created manually by the cluster admin. Referenced Secret is required to contain following key value pairs: - "tls.crt" - to specifies custom certificate - "tls.key" - to specifies private key of the custom certificate If the custom hostname uses the default routing suffix of the cluster, the Secret specification for a serving certificate will not be needed. 5.1.26. .spec.route.secret Description secret points to secret in the openshift-config namespace that contains custom certificate and key and needs to be created manually by the cluster admin. 
Referenced Secret is required to contain following key value pairs: - "tls.crt" - to specifies custom certificate - "tls.key" - to specifies private key of the custom certificate If the custom hostname uses the default routing suffix of the cluster, the Secret specification for a serving certificate will not be needed. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 5.1.27. .status Description ConsoleStatus defines the observed status of the Console. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 5.1.28. .status.conditions Description conditions is a list of conditions and their status Type array 5.1.29. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 5.1.30. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 5.1.31. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 5.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/consoles DELETE : delete collection of Console GET : list objects of kind Console POST : create a Console /apis/operator.openshift.io/v1/consoles/{name} DELETE : delete a Console GET : read the specified Console PATCH : partially update the specified Console PUT : replace the specified Console /apis/operator.openshift.io/v1/consoles/{name}/status GET : read status of the specified Console PATCH : partially update status of the specified Console PUT : replace status of the specified Console 5.2.1. /apis/operator.openshift.io/v1/consoles Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Console Table 5.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Console Table 5.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. 
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleList schema 401 - Unauthorized Empty HTTP method POST Description create a Console Table 5.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.7. Body parameters Parameter Type Description body Console schema Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Console schema 201 - Created Console schema 202 - Accepted Console schema 401 - Unauthorized Empty 5.2.2. /apis/operator.openshift.io/v1/consoles/{name} Table 5.9. Global path parameters Parameter Type Description name string name of the Console Table 5.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Console Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. 
propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.12. Body parameters Parameter Type Description body DeleteOptions schema Table 5.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Console Table 5.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.15. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Console Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.17. Body parameters Parameter Type Description body Patch schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Console Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body Console schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK Console schema 201 - Created Console schema 401 - Unauthorized Empty 5.2.3. /apis/operator.openshift.io/v1/consoles/{name}/status Table 5.22. Global path parameters Parameter Type Description name string name of the Console Table 5.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Console Table 5.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.25. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Console Table 5.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.27. Body parameters Parameter Type Description body Patch schema Table 5.28. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Console Table 5.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.30. Body parameters Parameter Type Description body Console schema Table 5.31. HTTP responses HTTP code Reponse body 200 - OK Console schema 201 - Created Console schema 401 - Unauthorized Empty
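As a brief illustration of how the spec fields described in this reference fit together, the following is a minimal sketch of a Console resource; it is not taken from this reference, and the product name, logo ConfigMap name, key, and plugin name are assumptions chosen for demonstration only. The operator configuration object is conventionally named cluster.

apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  managementState: Managed
  logLevel: Normal
  customization:
    customProductName: Example Console   # assumption: any display name
    customLogoFile:
      name: custom-logo                  # assumption: ConfigMap created in openshift-config
      key: logo.svg                      # assumption: key with a file extension, per the MIME-type note above
  plugins:
    - example-plugin                     # assumption: name of an installed console plugin

Assuming standard oc behavior, this object could be applied with oc apply -f console.yaml, and the /apis/operator.openshift.io/v1/consoles/{name} endpoint described in section 5.2.2 could be exercised with, for example, oc patch console.operator.openshift.io cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'.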
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operator_apis/console-operator-openshift-io-v1
function::tcpmib_get_state
function::tcpmib_get_state Name function::tcpmib_get_state - Get a socket's state Synopsis Arguments sk pointer to a struct sock Description Returns the sk_state from a struct sock.
[ "tcpmib_get_state:long(sk:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tcpmib-get-state
Chapter 7. Monitoring disaster recovery health
Chapter 7. Monitoring disaster recovery health 7.1. Enable monitoring for disaster recovery Use this procedure to enable basic monitoring for your disaster recovery setup. Procedure On the Hub cluster, open a terminal window. Add the following label to the openshift-operators namespace. Note You must always add this label for the Regional-DR solution. 7.2. Enabling disaster recovery dashboard on Hub cluster This section guides you through enabling the disaster recovery dashboard for advanced monitoring on the Hub cluster. For Regional-DR, the dashboard shows monitoring status cards for operator health, cluster health, metrics, alerts, and application count. For Metro-DR, you can configure the dashboard to monitor only the ramen setup health and application count. Prerequisites Ensure that you have installed OpenShift Container Platform version 4.17 and have administrator privileges. ODF Multicluster Orchestrator with the console plugin enabled. Red Hat Advanced Cluster Management for Kubernetes 2.11 (RHACM) from Operator Hub. For instructions on how to install, see Installing RHACM . Ensure you have enabled observability on RHACM. See Enabling observability guidelines . Procedure On the Hub cluster, open a terminal window and perform the following steps. Create the ConfigMap file named observability-metrics-custom-allowlist.yaml . You can use the following YAML to list the disaster recovery metrics on the Hub cluster. For details, see Adding custom metrics . To learn more about the ramen metrics, see Disaster recovery metrics . In the open-cluster-management-observability namespace, run the following command: After the observability-metrics-custom-allowlist YAML is created, RHACM starts collecting the listed OpenShift Data Foundation metrics from all the managed clusters. To exclude a specific managed cluster from collecting the observability data, add the observability: disabled label to that cluster (see the example command at the end of this chapter). 7.3. Viewing health status of disaster recovery replication relationships Prerequisites Ensure that you have enabled the disaster recovery dashboard for monitoring. For instructions, see the chapter Enabling disaster recovery dashboard on Hub cluster . Procedure On the Hub cluster, ensure that the All Clusters option is selected. Refresh the console to make the DR monitoring dashboard tab accessible. Navigate to Data Services and click Data policies . On the Overview tab, you can view the health status of the operators, clusters, and applications. A green tick indicates that the operators are running and available. Click the Disaster recovery tab to view a list of DR policy details and connected applications. 7.4. Disaster recovery metrics These are the ramen metrics that are scraped by Prometheus. ramen_last_sync_timestamp_seconds ramen_policy_schedule_interval_seconds ramen_last_sync_duration_seconds ramen_last_sync_data_bytes ramen_workload_protection_status Query these metrics from the Hub cluster where the Red Hat Advanced Cluster Management for Kubernetes (RHACM) operator is installed. 7.4.1. Last synchronization timestamp in seconds This is the time, in seconds, of the most recent successful synchronization of all PVCs per application.
Metric name ramen_last_sync_timestamp_seconds Metrics type Gauge Labels ObjType : Type of the object, here it is DRPC ObjName : Name of the object, here it is DRPC-Name ObjNamespace : DRPC namespace Policyname : Name of the DRPolicy SchedulingInterval : Scheduling interval value from DRPolicy Metric value The value is set in Unix seconds, which is obtained from lastGroupSyncTime from the DRPC status. 7.4.2. Policy schedule interval in seconds This gives the scheduling interval in seconds from the DRPolicy. Metric name ramen_policy_schedule_interval_seconds Metrics type Gauge Labels Policyname : Name of the DRPolicy Metric value This is set to the scheduling interval in seconds, which is taken from the DRPolicy. 7.4.3. Last synchronization duration in seconds This represents the longest time taken to sync from the most recent successful synchronization of all PVCs per application. Metric name ramen_last_sync_duration_seconds Metrics type Gauge Labels obj_type : Type of the object, here it is DRPC obj_name : Name of the object, here it is DRPC-Name obj_namespace : DRPC namespace scheduling_interval : Scheduling interval value from DRPolicy Metric value The value is taken from lastGroupSyncDuration from the DRPC status. 7.4.4. Total bytes transferred from most recent synchronization This value represents the total bytes transferred from the most recent successful synchronization of all PVCs per application. Metric name ramen_last_sync_data_bytes Metrics type Gauge Labels obj_type : Type of the object, here it is DRPC obj_name : Name of the object, here it is DRPC-Name obj_namespace : DRPC namespace scheduling_interval : Scheduling interval value from DRPolicy Metric value The value is taken from lastGroupSyncBytes from the DRPC status. 7.4.5. Workload protection status This value provides the application protection status per application that is DR protected. Metric name ramen_workload_protection_status Metrics type Gauge Labels ObjType : Type of the object, here it is DRPC ObjName : Name of the object, here it is DRPC-Name ObjNamespace : DRPC namespace Metric value The value is either "1" or "0", where "1" indicates that application DR protection is healthy and "0" indicates that application protection is degraded and the application is potentially unprotected. 7.5. Disaster recovery alerts This section provides a list of all supported alerts associated with Red Hat OpenShift Data Foundation within a disaster recovery environment. Recording rules Record: ramen_sync_duration_seconds Expression Purpose The time interval, in seconds, between the volume replication group's last sync time and the current time. Record: ramen_rpo_difference Expression Purpose The difference between the expected sync delay and the actual sync delay taken by the volume replication group. Record: count_persistentvolumeclaim_total Expression Purpose Sum of all PVCs from the managed cluster. Alerts Alert: VolumeSynchronizationDelay Impact Critical Purpose The actual sync delay taken by the volume replication group is three times the expected sync delay. YAML Alert: VolumeSynchronizationDelay Impact Warning Purpose The actual sync delay taken by the volume replication group is twice the expected sync delay. YAML Alert: WorkloadUnprotected Impact Warning Purpose Application protection status is degraded for more than 10 minutes. YAML
[ "oc label namespace openshift-operators openshift.io/cluster-monitoring='true'", "kind: ConfigMap apiVersion: v1 metadata: name: observability-metrics-custom-allowlist namespace: open-cluster-management-observability data: metrics_list.yaml: | names: - ceph_rbd_mirror_snapshot_sync_bytes - ceph_rbd_mirror_snapshot_snapshots matches: - __name__=\"csv_succeeded\",exported_namespace=\"openshift-dr-system\",name=~\"odr-cluster-operator.*\" - __name__=\"csv_succeeded\",exported_namespace=\"openshift-operators\",name=~\"volsync.*\"", "oc apply -n open-cluster-management-observability -f observability-metrics-custom-allowlist.yaml", "sum by (obj_name, obj_namespace, obj_type, job, policyname)(time() - (ramen_last_sync_timestamp_seconds > 0))", "ramen_sync_duration_seconds{job=\"ramen-hub-operator-metrics-service\"} / on(policyname, job) group_left() (ramen_policy_schedule_interval_seconds{job=\"ramen-hub-operator-metrics-service\"})", "count(kube_persistentvolumeclaim_info)", "alert: VolumeSynchronizationDelay expr: ramen_rpo_difference >= 3 for: 5s labels: severity: critical annotations: description: \"The syncing of volumes is exceeding three times the scheduled snapshot interval, or the volumes have been recently protected. (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }})\" alert_type: \"DisasterRecovery\"", "alert: VolumeSynchronizationDelay expr: ramen_rpo_difference > 2 and ramen_rpo_difference < 3 for: 5s labels: severity: warning annotations: description: \"The syncing of volumes is exceeding two times the scheduled snapshot interval, or the volumes have been recently protected. (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }})\" alert_type: \"DisasterRecovery\"", "alert: WorkloadUnprotected expr: ramen_workload_protection_status == 0 for: 10m labels: severity: warning annotations: description: \"Workload is not protected for disaster recovery (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }}).\" alert_type: \"DisasterRecovery\"" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/monitoring_disaster_recovery_health
Chapter 2. Installing a cluster quickly on RHV
Chapter 2. Installing a cluster quickly on RHV You can quickly install a default, non-customized, OpenShift Container Platform cluster on a Red Hat Virtualization (RHV) cluster, similar to the one shown in the following diagram. The installation program uses installer-provisioned infrastructure to automate creating and deploying the cluster. To install a default cluster, you prepare the environment, run the installation program, and answer its prompts. Then, the installation program creates the OpenShift Container Platform cluster. For an alternative to installing a default cluster, see Installing a cluster with customizations . Note This installation program is available for Linux and macOS only. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have a supported combination of versions in the Support Matrix for OpenShift Container Platform on Red Hat Virtualization (RHV) . You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. Requirements for the RHV environment To install and run an OpenShift Container Platform version 4.12 cluster, the RHV environment must meet the following requirements. Not meeting these requirements can cause the installation process to fail. Additionally, not meeting these requirements can cause the OpenShift Container Platform cluster to fail days or weeks after installation. The following requirements for CPU, memory, and storage resources are based on default values multiplied by the default number of virtual machines the installation program creates. These resources must be available in addition to what the RHV environment uses for non-OpenShift Container Platform operations. By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the OpenShift Container Platform cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources. If you increase the number of virtual machines in the RHV environment, you must increase the resources accordingly. Requirements The RHV version is 4.4. The RHV environment has one data center whose state is Up . The RHV data center contains an RHV cluster. 
The RHV cluster has the following resources exclusively for the OpenShift Container Platform cluster: Minimum 28 vCPUs: four for each of the seven virtual machines created during installation. 112 GiB RAM or more, including: 16 GiB or more for the bootstrap machine, which provides the temporary control plane. 16 GiB or more for each of the three control plane machines, which provide the control plane. 16 GiB or more for each of the three compute machines, which run the application workloads. The RHV storage domain must meet these etcd backend performance requirements . For affinity group support: Three or more hosts in the RHV cluster. If necessary, you can disable affinity groups. For details, see Example: Removing all affinity groups for a non-production lab setup in Installing a cluster on RHV with customizations . In production environments, each virtual machine must have 120 GiB or more of storage. Therefore, the storage domain must provide 840 GiB or more for the default OpenShift Container Platform cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more of storage, so the storage domain must have 230 GiB or more for the default OpenShift Container Platform cluster. To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the RHV cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process. The RHV cluster must have a virtual network with access to the REST API on the RHV Manager. Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP. A user account and group with the following least privileges for installing and managing an OpenShift Container Platform cluster on the target RHV cluster: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the target cluster Warning Apply the principle of least privilege: Avoid using an administrator account with SuperUser privileges on RHV during the installation process. The installation program saves the credentials you provide to a temporary ovirt-config.yaml file that might be compromised. Additional resources Example: Removing all affinity groups for a non-production lab setup . 2.4. Verifying the requirements for the RHV environment Verify that the RHV environment meets the requirements to install and run an OpenShift Container Platform cluster. Not meeting these requirements can cause failures. Important These requirements are based on the default resources the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change these resources or increase the number of OpenShift Container Platform machines, adjust these requirements accordingly. Procedure Check that the RHV version supports installation of OpenShift Container Platform version 4.12. In the RHV Administration Portal, click the ? help icon in the upper-right corner and select About . In the window that opens, make a note of the RHV Software Version . Confirm that the RHV version is 4.4. For more information about supported version combinations, see Support Matrix for OpenShift Container Platform on RHV . Inspect the data center, cluster, and storage. In the RHV Administration Portal, click Compute Data Centers . Confirm that the data center where you plan to install OpenShift Container Platform is accessible. 
Click the name of that data center. In the data center details, on the Storage tab, confirm the storage domain where you plan to install OpenShift Container Platform is Active . Record the Domain Name for use later on. Confirm Free Space has at least 230 GiB. Confirm that the storage domain meets these etcd backend performance requirements , which you can measure by using the fio performance benchmarking tool . In the data center details, click the Clusters tab. Find the RHV cluster where you plan to install OpenShift Container Platform. Record the cluster name for use later on. Inspect the RHV host resources. In the RHV Administration Portal, click Compute > Clusters . Click the cluster where you plan to install OpenShift Container Platform. In the cluster details, click the Hosts tab. Inspect the hosts and confirm they have a combined total of at least 28 Logical CPU Cores available exclusively for the OpenShift Container Platform cluster. Record the number of available Logical CPU Cores for use later on. Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores. Confirm that, all together, the hosts have 112 GiB of Max free Memory for scheduling new virtual machines distributed to meet the requirements for each of the following OpenShift Container Platform machines: 16 GiB required for the bootstrap machine 16 GiB required for each of the three control plane machines 16 GiB for each of the three compute machines Record the amount of Max free Memory for scheduling new virtual machines for use later on. Verify that the virtual network for installing OpenShift Container Platform has access to the RHV Manager's REST API. From a virtual machine on this network, use curl to reach the RHV Manager's REST API: USD curl -k -u <username>@<profile>:<password> \ 1 https://<engine-fqdn>/ovirt-engine/api 2 1 For <username> , specify the user name of an RHV account with privileges to create and manage an OpenShift Container Platform cluster on RHV. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For <password> , specify the password for that user name. 2 For <engine-fqdn> , specify the fully qualified domain name of the RHV environment. For example: USD curl -k -u ocpadmin@internal:pw123 \ https://rhv-env.virtlab.example.com/ovirt-engine/api 2.5. Preparing the network environment on RHV Configure two static IP addresses for the OpenShift Container Platform cluster and create DNS entries using these addresses. Procedure Reserve two static IP addresses On the network where you plan to install OpenShift Container Platform, identify two static IP addresses that are outside the DHCP lease pool. Connect to a host on this network and verify that each of the IP addresses is not in use. For example, use Address Resolution Protocol (ARP) to check that none of the IP addresses have entries: USD arp 10.35.1.19 Example output 10.35.1.19 (10.35.1.19) -- no entry Reserve two static IP addresses following the standard practices for your network environment. Record these IP addresses for future reference. 
Create DNS entries for the OpenShift Container Platform REST API and apps domain names using this format: api.<cluster-name>.<base-domain> <ip-address> 1 *.apps.<cluster-name>.<base-domain> <ip-address> 2 1 For <cluster-name> , <base-domain> , and <ip-address> , specify the cluster name, base domain, and static IP address of your OpenShift Container Platform API. 2 Specify the cluster name, base domain, and static IP address of your OpenShift Container Platform apps for Ingress and the load balancer. For example: api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20 2.6. Installing OpenShift Container Platform on RHV in insecure mode By default, the installer creates a CA certificate, prompts you for confirmation, and stores the certificate to use during installation. You do not need to create or install one manually. Although it is not recommended, you can override this functionality and install OpenShift Container Platform without verifying a certificate by installing OpenShift Container Platform on RHV in insecure mode. Warning Installing in insecure mode is not recommended, because it enables a potential attacker to perform a Man-in-the-Middle attack and capture sensitive credentials on the network. Procedure Create a file named ~/.ovirt/ovirt-config.yaml . Add the following content to ovirt-config.yaml : ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: "" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true 1 Specify the hostname or address of your oVirt engine. 2 Specify the fully qualified domain name of your oVirt engine. 3 Specify the admin password for your oVirt engine. Run the installer. 2.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. 
Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 2.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Open the ovirt-imageio port to the Manager from the machine running the installer. By default, the port is 54322 . Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Respond to the installation program prompts. Optional: For SSH Public Key , select a password-less public key, such as ~/.ssh/id_rsa.pub . This key authenticates connections with the new OpenShift Container Platform cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, select an SSH key that your ssh-agent process uses. For Platform , select ovirt . For Engine FQDN[:PORT] , enter the fully qualified domain name (FQDN) of the RHV environment. For example: rhv-env.virtlab.example.com:443 The installation program automatically generates a CA certificate. For Would you like to use the above certificate to connect to the Manager? , answer y or N . If you answer N , you must install OpenShift Container Platform in insecure mode. For Engine username , enter the user name and profile of the RHV administrator using this format: <username>@<profile> 1 1 For <username> , specify the user name of an RHV administrator. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For example: admin@internal . For Engine password , enter the RHV admin password. For Cluster , select the RHV cluster for installing OpenShift Container Platform. For Storage domain , select the storage domain for installing OpenShift Container Platform. For Network , select a virtual network that has access to the RHV Manager REST API. For Internal API Virtual IP , enter the static IP address you set aside for the cluster's REST API. For Ingress virtual IP , enter the static IP address you reserved for the wildcard apps domain. For Base Domain , enter the base domain of the OpenShift Container Platform cluster. If this cluster is exposed to the outside world, this must be a valid domain recognized by DNS infrastructure. For example, enter: virtlab.example.com For Cluster Name , enter the name of the cluster. For example, my-cluster . Use cluster name from the externally registered/resolvable DNS entries you created for the OpenShift Container Platform REST API and apps domain names. The installation program also gives this name to the cluster in the RHV environment. 
For Pull Secret , copy the pull secret from the pull-secret.txt file you downloaded earlier and paste it here. You can also get a copy of the same pull secret from the Red Hat OpenShift Cluster Manager . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You have completed the steps required to install the cluster. The remaining steps show you how to verify the cluster and troubleshoot the installation. 2.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> To learn more, see Getting started with the OpenShift CLI . 2.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 2.12. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A Troubleshooting If the installation fails, the installation program times out and displays an error message. To learn more, see Troubleshooting installation issues . 2.13. 
Accessing the OpenShift Container Platform web console on RHV After the OpenShift Container Platform cluster initializes, you can log in to the OpenShift Container Platform web console. Procedure Optional: In the Red Hat Virtualization (RHV) Administration Portal, open Compute Cluster . Verify that the installation program creates the virtual machines. Return to the command line where the installation program is running. When the installation program finishes, it displays the user name and temporary password for logging into the OpenShift Container Platform web console. In a browser, open the URL of the OpenShift Container Platform web console. The URL uses this format: 1 For <clustername>.<basedomain> , specify the cluster name and base domain. For example: 2.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.15. Troubleshooting common issues with installing on Red Hat Virtualization (RHV) Here are some common issues you might encounter, along with proposed causes and solutions. 2.15.1. CPU load increases and nodes go into a Not Ready state Symptom : CPU load increases significantly and nodes start going into a Not Ready state. Cause : The storage domain latency might be too high, especially for control plane nodes. Solution : Make the nodes ready again by restarting the kubelet service: USD systemctl restart kubelet Inspect the OpenShift Container Platform metrics service, which automatically gathers and reports on some valuable data such as the etcd disk sync duration. If the cluster is operational, use this data to help determine whether storage latency or throughput is the root issue. If so, consider using a storage resource that has lower latency and higher throughput. To get raw metrics, enter the following command as kubeadmin or user with cluster-admin privileges: USD oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics To learn more, see Exploring Application Endpoints for the purposes of Debugging with OpenShift 4.x 2.15.2. Trouble connecting the OpenShift Container Platform cluster API Symptom : The installation program completes but the OpenShift Container Platform cluster API is not available. The bootstrap virtual machine remains up after the bootstrap process is complete. When you enter the following command, the response will time out. USD oc login -u kubeadmin -p *** <apiurl> Cause : The bootstrap VM was not deleted by the installation program and has not released the cluster's API IP address. Solution : Use the wait-for subcommand to be notified when the bootstrap process is complete: USD ./openshift-install wait-for bootstrap-complete When the bootstrap process is complete, delete the bootstrap virtual machine: USD ./openshift-install destroy bootstrap 2.16. 
Post-installation tasks After the OpenShift Container Platform cluster initializes, you can perform the following tasks. Optional: After deployment, add or replace SSH keys using the Machine Config Operator (MCO) in OpenShift Container Platform. Optional: Remove the kubeadmin user. Instead, use the authentication provider to create a user with cluster-admin privileges.
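For the last step, the kubeadmin credentials are stored as a secret in the kube-system namespace, so removing that secret disables the temporary kubeadmin login. A minimal sketch, to be run only after you have verified that a new cluster-admin user can log in (the user name and API URL are placeholders):
# Confirm the replacement cluster-admin account works before removing kubeadmin
oc login -u <admin_user> https://api.<cluster-name>.<base-domain>:6443
# Remove the temporary kubeadmin credentials
oc delete secrets kubeadmin -n kube-system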
[ "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "arp 10.35.1.19", "10.35.1.19 (10.35.1.19) -- no entry", "api.<cluster-name>.<base-domain> <ip-address> 1 *.apps.<cluster-name>.<base-domain> <ip-address> 2", "api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "rhv-env.virtlab.example.com:443", "<username>@<profile> 1", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "console-openshift-console.apps.<clustername>.<basedomain> 1", "console-openshift-console.apps.my-cluster.virtlab.example.com", "systemctl restart kubelet", "oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics", "oc login -u kubeadmin -p *** <apiurl>", "./openshift-install wait-for bootstrap-complete", "./openshift-install destroy bootstrap" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_rhv/installing-rhv-default
Chapter 2. Preparing overcloud nodes
Chapter 2. Preparing overcloud nodes The scenario described in this chapter consists of six nodes in the Overcloud: Three Controller nodes with high availability. Three Compute nodes. The director integrates a separate Ceph Storage cluster with its own nodes into the overcloud. You manage this cluster independently from the overcloud. For example, you scale the Ceph Storage cluster using the Ceph management tools, not through the OpenStack Platform director. For more information, see the Red Hat Ceph Storage documentation library. 2.1. Configuring the existing Ceph Storage cluster Create the following pools in your Ceph cluster relevant to your environment: volumes : Storage for OpenStack Block Storage (cinder) images : Storage for OpenStack Image Storage (glance) vms : Storage for instances backups : Storage for OpenStack Block Storage Backup (cinder-backup) metrics : Storage for OpenStack Telemetry Metrics (gnocchi) Use the following commands as a guide: If your overcloud deploys the Shared File System (manila) backed by CephFS, create CephFS data and metadata pools as well: Replace PGNUM with the number of placement groups. Red Hat recommends approximately 100 placement groups per OSD. For example, the total number of OSDs multiplied by 100, divided by the number of replicas ( osd pool default size ). You can also use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value. Create a client.openstack user in your Ceph cluster with the following capabilities: cap_mgr: "allow *" cap_mon: profile rbd cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics Use the following command as a guide: Note the Ceph client key created for the client.openstack user: The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw== , is your Ceph client key. If your overcloud deploys the Shared File System backed by CephFS, create the client.manila user in your Ceph cluster with the following capabilities: cap_mds: allow * cap_mgr: allow * cap_mon: allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create" cap_osd: allow rw Use the following command as a guide: Note the manila client name and the key value to use in overcloud deployment templates: Note the file system ID of your Ceph Storage cluster. This value is specified with the fsid setting in the configuration file of your cluster (in the [global] section): Note For more information about the Ceph Storage cluster configuration file, see Ceph configuration in the Red Hat Ceph Storage Configuration Guide . The Ceph client key and file system ID, as well as the manila client IDs and key, will all be used later in Chapter 3, Integrating with the existing Ceph Storage cluster . 2.2. Initializing the stack user Log in to the director host as the stack user and run the following command to initialize your director configuration: This sets up environment variables containing authentication details to access the director's CLI tools. 2.3. Registering nodes A node definition template ( instackenv.json ) is a JSON format file and contains the hardware and power management details for registering nodes. For example: Procedure After you create the inventory file, save the file to the home directory of the stack user ( /home/stack/instackenv.json ). 
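Because a malformed inventory file causes the import in the next step to fail, it can be worth checking the JSON syntax before importing. A quick, optional check, assuming python3 is available on the undercloud host (the path matches the example above):
# Fail fast on JSON syntax errors before running the import
python3 -m json.tool ~/instackenv.json > /dev/null && echo "instackenv.json is valid JSON"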
Initialize the stack user, then import the instackenv.json inventory file into the director: The openstack overcloud node import command imports the inventory file and registers each node with the director. Assign the kernel and ramdisk images to each node: The nodes are now registered and configured in the director. 2.4. Manually tagging the nodes After you register each node, you must inspect the hardware and tag the node into a specific profile. Use profile tags to match your nodes to flavors, and then assign flavors to deployment roles. To inspect and tag new nodes, complete the following steps: Trigger hardware introspection to retrieve the hardware attributes of each node: The --all-manageable option introspects only the nodes that are in a managed state. In this example, all nodes are in a managed state. The --provide option resets all nodes to an active state after introspection. Important Ensure that this process completes successfully. This process usually takes 15 minutes for bare metal nodes. Retrieve a list of your nodes to identify their UUIDs: Add a profile option to the properties/capabilities parameter for each node to manually tag a node to a specific profile. Note As an alternative to manual tagging, use the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data. For example, to tag three nodes to use the control profile and another three nodes to use the compute profile, run: The addition of the profile option tags the nodes into each respective profile.
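To confirm that the tags were applied before you assign flavors to deployment roles, you can list the detected profiles. This is an optional verification sketch; the node UUID in the second command is one of the placeholder UUIDs from the example:
# Show which profile each registered node has been tagged with
openstack overcloud profiles list
# Or inspect a single node's capabilities directly
openstack baremetal node show 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 -f value -c properties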
[ "ceph osd pool create volumes PGNUM ceph osd pool create images PGNUM ceph osd pool create vms PGNUM ceph osd pool create backups PGNUM ceph osd pool create metrics PGNUM", "ceph osd pool create manila_data PGNUM ceph osd pool create manila_metadata PGNUM", "ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics'", "ceph auth list [client.openstack] key = AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw== caps mgr = \"allow *\" caps mon = \"profile rbd\" caps osd = \"profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics\"", "ceph auth add client.manila mon 'allow r, allow command \"auth del\", allow command \"auth caps\", allow command \"auth get\", allow command \"auth get-or-create\"' osd 'allow rw' mds 'allow *' mgr 'allow *'", "ceph auth get-key client.manila AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==", "[global] fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19", "source ~/stackrc", "{ \"nodes\":[ { \"mac\":[ \"bb:bb:bb:bb:bb:bb\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.205\" }, { \"mac\":[ \"cc:cc:cc:cc:cc:cc\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.206\" }, { \"mac\":[ \"dd:dd:dd:dd:dd:dd\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.207\" }, { \"mac\":[ \"ee:ee:ee:ee:ee:ee\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.208\" } { \"mac\":[ \"ff:ff:ff:ff:ff:ff\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.209\" } { \"mac\":[ \"gg:gg:gg:gg:gg:gg\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.210\" } ] }", "source ~/stackrc openstack overcloud node import ~/instackenv.json", "openstack overcloud node configure <node>", "openstack overcloud node introspect --all-manageable --provide", "openstack baremetal node list", "openstack baremetal node set 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 --property capabilities=\" profile:control ,boot_option:local\" openstack baremetal node set 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a --property capabilities=\" profile:control ,boot_option:local\" openstack baremetal node set 5e3b2f50-fcd9-4404-b0a2-59d79924b38e --property capabilities=\" profile:control ,boot_option:local\" openstack baremetal node set 484587b2-b3b3-40d5-925b-a26a2fa3036f --property capabilities=\" profile:compute ,boot_option:local\" openstack baremetal node set d010460b-38f2-4800-9cc4-d69f0d067efe --property capabilities=\" profile:compute ,boot_option:local\" openstack baremetal node set d930e613-3e14-44b9-8240-4f3559801ea6 --property capabilities=\" profile:compute ,boot_option:local\"" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/integrating_an_overcloud_with_an_existing_red_hat_ceph_cluster/integration
Configuring Capsules with a Load Balancer
Configuring Capsules with a Load Balancer Red Hat Satellite 6.11 Distributing load between Capsule Servers Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_capsules_with_a_load_balancer/index
Chapter 3. Migrating Operator deployments on OpenShift
Chapter 3. Migrating Operator deployments on OpenShift To adapt to the revamped server configuration, the Red Hat build of Keycloak Operator was completely recreated. The Operator provides full integration with Red Hat build of Keycloak, but it is not backward compatible with the Red Hat Single Sign-On 7.6 Operator. Using the new Operator requires creating a new Red Hat build of Keycloak deployment. For full details, see the Operator Guide . 3.1. Prerequisites The instance of Red Hat Single Sign-On 7.6 was shut down so that it does not use the same database instance that will be used by Red Hat build of Keycloak . In case the unsupported embedded database (that is managed by the Red Hat Single Sign-On 7.6 Operator) was used, it has been converted to an external database that is provisioned by the user. Database backup was created. You reviewed the Release Notes . 3.2. Migration process Install Red Hat build of Keycloak Operator to the namespace. Create new CRs and related Secrets. Manually migrate your Red Hat Single Sign-On 7.6 configuration to your new Keycloak CR. If custom providers were used, migrate them and create a custom Red Hat build of Keycloak container image to include them. If custom themes were used, migrate them and create a custom Red Hat build of Keycloak container image to include them. 3.3. Migrating Keycloak CR Keycloak CR now supports all server configuration options. All relevant options are available as first class citizen fields directly under the spec of the CR. All options in the CR follow the same naming conventions as the server options, making the experience between bare metal and Operator deployments seamless. Additionally, you can define any options that are missing from the CR in the additionalOptions field, such as SPI providers configuration. Another option is to use podTemplate , a Technology Preview field, to modify the raw Kubernetes deployment pod template in case a supported alternative does not exist as a first class citizen field in the CR. The following shows an example Keycloak CR to deploy Red Hat build of Keycloak through the Operator: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 db: vendor: postgres host: postgres-db usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org additionalOptions: - name: spi-connections-http-client-default-connection-pool-size value: 20 Notice the resemblance to the CLI configuration: ./kc.sh start --db=postgres --db-url-host=postgres-db --db-username=user --db-password=pass --https-certificate-file=mycertfile --https-certificate-key-file=myprivatekey --hostname=test.keycloak.org --spi-connections-http-client-default-connection-pool-size=20 Additional resources Basic Keycloak deployment Advanced configuration 3.3.1. Migrating database configuration Red Hat build of Keycloak can use the same database instance as was previously used by Red Hat Single Sign-On 7.6. The database schema will be migrated automatically the first time Red Hat build of Keycloak connects to it. Warning Migrating the embedded database managed by Red Hat Single Sign-On 7.6 Operator is not supported. 
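Because the schema is migrated automatically the first time Red Hat build of Keycloak connects, the database backup listed in the prerequisites is best taken immediately before that first start. A minimal sketch for the PostgreSQL values used in the examples in this section; the host, database, and user names are the placeholder values from the Secret shown below, and the output file name is an assumption:
# Dump the existing Red Hat Single Sign-On 7.6 database before the first Red Hat build of Keycloak start
pg_dump -h my-postgres-hostname -p 5432 -U user -d kc-db-name -F c -f rhsso76-backup.dump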
In the Red Hat Single Sign-On 7.6 Operator, the external database connection was configured using a Secret, for example: apiVersion: v1 kind: Secret metadata: name: keycloak-db-secret namespace: keycloak labels: app: sso stringData: POSTGRES_DATABASE: kc-db-name POSTGRES_EXTERNAL_ADDRESS: my-postgres-hostname POSTGRES_EXTERNAL_PORT: 5432 POSTGRES_USERNAME: user POSTGRES_PASSWORD: pass type: Opaque In Red Hat build of Keycloak , the database is configured directly in the Keycloak CR with credentials referenced as Secrets, for example: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: db: vendor: postgres host: my-postgres-hostname port: 5432 usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password ... apiVersion: v1 kind: Secret metadata: name: keycloak-db-secret stringData: username: "user" password: "pass" type: Opaque 3.3.1.1. Supported database vendors Red Hat Single Sign-On 7.6 Operator supported only PostgreSQL databases, but the Red Hat build of Keycloak Operator supports all database vendors that are supported by the server. 3.3.2. Migrating TLS configuration Red Hat Single Sign-On 7.6 Operator by default configured the server to use the TLS Secret generated by OpenShift CA. Red Hat build of Keycloak Operator does not make any assumptions around TLS to meet production best practices and requires users to provide their own TLS certificate and key pair, for example: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: http: tlsSecret: example-tls-secret ... The expected format of the secret referred to in tlsSecret should use the standard Kubernetes TLS Secret ( kubernetes.io/tls ) type. The Red Hat Single Sign-On 7.6 Operator used the reencrypt TLS termination strategy by default on Route. Red Hat build of Keycloak Operator uses the passthrough strategy by default. Additionally, the Red Hat Single Sign-On 7.6 Operator supported configuring TLS termination. Red Hat build of Keycloak Operator does not support TLS termination in the current release. If the default Operator-managed Route does not meet desired TLS configuration, a custom Route needs to be created by the user and the default one disabled as: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ingress: enabled: false ... 3.3.3. Using a custom image for extensions To reflect best practices and support immutable containers, the Red Hat build of Keycloak Operator no longer supports specifying extensions in the Keycloak CR. In order to deploy an extension, an optimized custom image must be built. Keycloak CR now includes a dedicated field for specifying Red Hat build of Keycloak images, for example: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: image: quay.io/my-company/my-keycloak:latest ... Note When specifying a custom image, the Operator assumes it is already optimized and does not perform the costly optimization at each server start. Additional resources Using custom Keycloak images in the Operator Creating a customized and optimized container image 3.3.4. Upgrade strategy option removed The Red Hat Single Sign-On 7.6 Operator supported recreate and rolling strategies when performing a server upgrade. This approach was not practical. It was up to the user to choose if the Red Hat Single Sign-On 7.6 Operator should scale down the deployment before performing an upgrade and database migration. 
It was not clear to the users when the rolling strategy could be safely used. Therefore, this option was removed in the Red Hat build of Keycloak Operator and it always implicitly performs the recreate strategy, which scales down the whole deployment before creating Pods with the new server container image to ensure only a single server version accesses the database. 3.3.5. Health endpoint exposed by default The Red Hat build of Keycloak configures the server to expose a simple health endpoint by default that is used by OpenShift probes. The endpoint does not expose any security sensitive data about deployment but it is accessible without any authentication. As an alternative, <your-server-context-root> / health endpoint can be blocked on a custom Route. For example, Create Keycloak configured for TLS edge termination. Make sure to omit the tlsSecret field: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: proxy: Headers:xforwarded hostname: hostname: example.com ... Create a blocking Route to prohibit access to the health endpoint: kind: Route apiVersion: route.openshift.io/v1 metadata: name: example-kc-block-health annotations: haproxy.router.openshift.io/rewrite-target: /404 spec: host: example.com path: /health to: kind: Service name: example-kc-service port: targetPort: http tls: termination: edge Note Path-based Routes require TLS termination to be configured for either edge or reencrypt. By default, the Operator uses passthrough. 3.3.6. Migrating advanced deployment options using Pod templates The Red Hat Single Sign-On 7.6 Operator exposed multiple low-level fields for deployment configuration, such as volumes. Red Hat build of Keycloak Operator is more opinionated and does not expose most of these fields. However, it is still possible to configure any desired deployment fields specified as the podTemplate , for example: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: unsupported: podTemplate: metadata: labels: foo: "bar" spec: containers: - volumeMounts: - name: test-volume mountPath: /mnt/test volumes: - name: test-volume secret: secretName: test-secret ... Note The spec.unsupported.podTemplate field offers only limited support as it exposes low-level configuration where full functionality has not been tested under all conditions. Whenever possible, use the fully supported first class citizen fields in the top level of the CR spec. For example, instead of spec.unsupported.podTemplate.spec.imagePullSecrets , use directly spec.imagePullSecrets . 3.3.7. Connecting to an external instance is no longer supported The Red Hat Single Sign-On 7.6 Operator supported connecting to an external instance of Red Hat Single Sign-On 7.6. For example, creating clients within an existing realm through Client CRs is no longer supported in the Red Hat build of Keycloak Operator. 3.3.8. Migrating Horizontal Pod Autoscaler enabled deployments To use a Horizontal Pod Autoscaler (HPA) with Red Hat Single Sign-On 7.6, it was necessary to set the disableReplicasSyncing: true field in the Keycloak CR and scale the server StatefulSet. This is no longer necessary as the Keycloak CR in Red Hat build of Keycloak Operator can be scaled directly by an HPA. 3.4. Migrating the Keycloak realm CR The Realm CR was replaced by the Realm Import CR, which offers similar functionality and has a similar schema. The Realm Import CR offers only Realm bootstrapping and as such no longer supports Realm deletion. 
It also does not support updates, similarly to the Realm CR. Full Realm representation is now included in the Realm Import CR, in comparison to the Realm CR that offered only a few selected fields. Example of Red Hat Single Sign-On 7.6 Realm CR: apiVersion: keycloak.org/v1alpha1 kind: KeycloakRealm metadata: name: example-keycloakrealm spec: instanceSelector: matchLabels: app: sso realm: id: "basic" realm: "basic" enabled: True displayName: "Basic Realm" Example of corresponding Red Hat build of Keycloak Realm Import CR: apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: example-keycloakrealm spec: keycloakCRName: example-kc realm: id: "basic" realm: "basic" enabled: True displayName: "Basic Realm" Additional resources Realm Import 3.5. Removed CRs The Client and User CRs were removed from Red Hat build of Keycloak Operator. The lack of these CRs can be partially mitigated by the new Realm Import CR. Adding support for Client CRs is on the road-map for a future Red Hat build of Keycloak release, while User CRs are not currently a planned feature.
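Pulling the earlier examples together, the Secrets referenced by the Keycloak CR can be created with standard oc commands before the CRs are applied. This is only an illustrative sequence; the Secret names match the example-kc manifests above, and the certificate and manifest file names are assumptions:
# TLS certificate and key referenced by spec.http.tlsSecret
oc create secret tls example-tls-secret --cert=tls.crt --key=tls.key
# Database credentials referenced by usernameSecret and passwordSecret
oc create secret generic keycloak-db-secret --from-literal=username=user --from-literal=password=pass
# Apply the Keycloak CR and the Realm Import CR
oc apply -f example-kc.yaml
oc apply -f example-keycloakrealm.yaml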
[ "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 db: vendor: postgres host: postgres-db usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org additionalOptions: - name: spi-connections-http-client-default-connection-pool-size value: 20", "./kc.sh start --db=postgres --db-url-host=postgres-db --db-username=user --db-password=pass --https-certificate-file=mycertfile --https-certificate-key-file=myprivatekey --hostname=test.keycloak.org --spi-connections-http-client-default-connection-pool-size=20", "apiVersion: v1 kind: Secret metadata: name: keycloak-db-secret namespace: keycloak labels: app: sso stringData: POSTGRES_DATABASE: kc-db-name POSTGRES_EXTERNAL_ADDRESS: my-postgres-hostname POSTGRES_EXTERNAL_PORT: 5432 POSTGRES_USERNAME: user POSTGRES_PASSWORD: pass type: Opaque", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: db: vendor: postgres host: my-postgres-hostname port: 5432 usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password apiVersion: v1 kind: Secret metadata: name: keycloak-db-secret stringData: username: \"user\" password: \"pass\" type: Opaque", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: http: tlsSecret: example-tls-secret", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ingress: enabled: false", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: image: quay.io/my-company/my-keycloak:latest", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: proxy: Headers:xforwarded hostname: hostname: example.com", "kind: Route apiVersion: route.openshift.io/v1 metadata: name: example-kc-block-health annotations: haproxy.router.openshift.io/rewrite-target: /404 spec: host: example.com path: /health to: kind: Service name: example-kc-service port: targetPort: http tls: termination: edge", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: unsupported: podTemplate: metadata: labels: foo: \"bar\" spec: containers: - volumeMounts: - name: test-volume mountPath: /mnt/test volumes: - name: test-volume secret: secretName: test-secret", "apiVersion: keycloak.org/v1alpha1 kind: KeycloakRealm metadata: name: example-keycloakrealm spec: instanceSelector: matchLabels: app: sso realm: id: \"basic\" realm: \"basic\" enabled: True displayName: \"Basic Realm\"", "apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: example-keycloakrealm spec: keycloakCRName: example-kc realm: id: \"basic\" realm: \"basic\" enabled: True displayName: \"Basic Realm\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/migration_guide/migrating-operator
Chapter 7. Hardware Enablement
Chapter 7. Hardware Enablement cpuid is now available With this update, the cpuid utility is available in Red Hat Enterprise Linux. This utility dumps detailed information about the CPU(s) gathered from the CPUID instruction, and also determines the exact model of CPU(s). It supports Intel, AMD, and VIA CPUs. (BZ#1316998) Support for RealTek RTS5250S SD4.0 Controllers The Realtek RTS5205 card reader controllers have been added to the kernel. (BZ#1167938)
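A quick sketch of using the new utility; the package name matches the update above, while the -1 flag (report on the first CPU only) is an upstream cpuid option and is an assumption if your build differs:
# Install the utility and dump CPUID data; -1 keeps the output short on multi-core systems.
yum install -y cpuid
cpuid -1 | head -n 20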
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/new_features_hardware_enablement
Chapter 5. Creating a business process in Business Central
Chapter 5. Creating a business process in Business Central The process designer is the Red Hat Process Automation Manager process modeler. The output of the modeler is a BPMN 2.0 process definition file. The definition is used as input for the Red Hat Process Automation Manager process engine, which creates a process instance based on the definition. The procedures in this section provide a general overview of how to create a simple business process. For a more detailed business process example, see Getting started with process services . Prerequisites You have created or imported a Red Hat Process Automation Manager project. For more information about creating projects, see Managing projects in Business Central . You have created the required users. User privileges and settings are controlled by the roles assigned to a user and the groups that a user belongs to. For more information about creating users, see Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . Procedure In Business Central, go to Menu Design Projects . Click the project name to open the project's asset list. Click Add Asset Business Process . In the Create new Business Process wizard, enter the following values: Business Process : New business process name Package : Package location for your new business process, for example com.myspace.myProject Click Ok to open the process designer. In the upper-right corner, click the Properties icon and add your business process property information, such as process data and variables: Scroll down and expand Process Data . Click to Process Variables and define the process variables that you want to use in your business process. Table 5.1. General process properties Label Description Name Enter the name of the process. Documentation Describes the process. The text in this field is included in the process documentation, if applicable. ID Enter an identifier for this process, such as orderItems . Package Enter the package location for this process in your Red Hat Process Automation Manager project, such as org.acme . ProcessType Specify whether the process is public or private (or null, if not applicable). Version Enter the artifact version for the process. Ad hoc Select this option if this process is an ad hoc sub-process. Process Instance Description Enter a description of the purpose of the process. Imports Click to open the Imports window and add any data object classes required for your process. Executable Select this option to make the process executable part of your Red Hat Process Automation Manager project. SLA Due Date Enter the service level agreement (SLA) expiration date. Process Variables Add any process variables for the process. Process variables are visible within the specific process instance. Process variables are initialized at process creation and destroyed on process completion. Variable Tags provide greater control over variable behavior, for example whether the variable is required or readonly . For more information about variable tags, see Chapter 6, Variables . Metadata Attributes Add any custom metadata attribute name and value that you want to use for custom event listeners, such as a listener to implement some action when a metadata attribute is present. Global Variables Add any global variables for the process. Global variables are visible to all process instances and assets in a project. Global variables are typically used by business rules and constraints, and are created dynamically by the rules or constraints. 
The Metadata Attributes entries are similar to Process Variables tags in that they enable new metaData extensions to BPMN diagrams. However, process variable tags modify the behavior of specific process variables, such as whether a certain variable is required or readonly , whereas metadata attributes are key-value definitions that modify the behavior of the overall process. For example, the following custom metadata attribute riskLevel and value low in a BPMN process correspond to a custom event listener for starting the process: Figure 5.1. Example metadata attribute and value in the BPMN modeler Example metadata attribute and value in the BPMN file <bpmn2:process id="approvals" name="approvals" isExecutable="true" processType="Public"> <bpmn2:extensionElements> <tns:metaData name="riskLevel"> <tns:metaValue><![CDATA[low]]></tns:metaValue> </tns:metaData> </bpmn2:extensionElements> Example event listener with metadata value public class MyListener implements ProcessEventListener { ... @Override public void beforeProcessStarted(ProcessStartedEvent event) { Map < String, Object > metadata = event.getProcessInstance().getProcess().getMetaData(); if (metadata.containsKey("low")) { // Implement some action for that metadata attribute } } } In the process designer canvas, use the left toolbar to drag and drop BPMN components to define your business process logic, connections, events, tasks, or other elements. Note A task and event in Red Hat Process Automation Manager expect one incoming and one outgoing flow. If you want to design a business process with multiple incoming and multiple outgoing flows, consider redesigning the business process using gateways. Using gateways makes it clear which sequence flow is being executed, so gateways are considered a best practice for multiple connections. However, if you must use multiple connections for a task or an event, set the JVM (Java Virtual Machine) system property jbpm.enable.multi.con to true (a command-line sketch of setting this property appears at the end of this chapter). When Business Central and KIE Server run on different servers, ensure that the jbpm.enable.multi.con system property is enabled on both of them; otherwise, the process engine throws an exception. After you add and define all components of the business process, click Save to save the completed business process. 5.1. Creating business rules tasks Business rules tasks are used to make decisions through a Decision Model and Notation (DMN) model or rule flow group. Procedure Create a business process. In the process designer, select the Activities tool from the tool palette. Select Business Rule . Click a blank area of the process designer canvas. If necessary, in the upper-right corner of the screen, click the Properties icon. Add or define the task information listed in the following table as required. Table 5.2. Business rule task parameters Label Description Name The name of the business rule task. You can also double-click the business rule task shape to edit the name. Rule Language The output language for the task. Select Decision Model and Notation (DMN) or Drools (DRL). Rule Flow Group The rule flow group associated with this business task. Select a rule flow group from the list or specify a new rule flow group. On Entry Action A Java, JavaScript, or MVEL script that specifies an action at the start of the task. On Exit Action A Java, JavaScript, or MVEL script that specifies an action at the end of the task. Is Async Select if this task should be invoked asynchronously. 
Make tasks asynchronous if they cannot be executed instantaneously, for example, a task performed by an outside service. AdHoc Autostart Select if this is an ad hoc task that should be started automatically. AdHoc Autostart enables the task to automatically start when the process or case instance is created instead of being started by a start task. It is often used in case management. SLA Due Date The date that the service level agreement (SLA) expires. Assignments Click to add local variables. Metadata Attributes Add any custom metadata attribute name and value that you want to use for custom event listeners, such as a listener to implement some action when a metadata attribute is present. The Metadata Attributes enable the new metaData extensions to BPMN diagrams and modify the behavior of the overall task. Click Save . 5.2. Creating script tasks Script tasks are used to execute a piece of code written in Java, JavaScript, or MVEL. They contain code snippets that specify the action of the script task. You can include global and process variables in your scripts. Note that MVEL accepts any valid Java code and additionally provides support for nested access of parameters. For example, the MVEL equivalent of the Java call person.getName() is person.name . MVEL also provides other improvements over Java, and MVEL expressions are generally more convenient for business users. Procedure Create a business process. In the process designer, select the Activities tool from the tool palette. Select Script . Click a blank area of the process designer canvas. If necessary, in the upper-right corner of the screen, click the Properties icon. Add or define the task information listed in the following table as required. Table 5.3. Script task parameters Label Description Name The name of the script task. You can also double-click the script task shape to edit the name. Documentation Enter a description of the task. The text in this field is included in the process documentation. Click the Documentation tab in the upper-left side of the process designer canvas to view the process documentation. Script Enter a script in Java, JavaScript, or MVEL to be executed by the task, and select the script type. Is Async Select if this task should be invoked asynchronously. Make tasks asynchronous if they cannot be executed instantaneously, for example, a task performed by an outside service. AdHoc Autostart Select if this is an ad hoc task that should be started automatically. AdHoc Autostart enables the task to automatically start when the process or case instance is created instead of being started by a start task. It is often used in case management. Metadata Attributes Add any custom metadata attribute name and value that you want to use for custom event listeners, such as a listener to implement some action when a metadata attribute is present. The Metadata Attributes enable the new metaData extensions to BPMN diagrams and modify the behavior of the overall task. Click Save . 5.3. Creating service tasks A service task is a task that executes an action based on a web service call or a Java class method. Examples of service tasks include sending emails and logging messages when performing these tasks. You can define the parameters (input) and results (output) associated with a service task. You can also define wrapped parameters that contain all inputs into a single object. To define wrapped parameters, create a new work item handler with Wrapped set to True in the data assignment. 
A Service Task should have one incoming connection and one outgoing connection. Procedure In Business Central, select the Admin icon in the top-right corner of the screen and select Artifacts . Click Upload to open the Artifact upload window. Choose the .jar file and click . Important The .jar file contains data types (data objects) and Java classes for web service and Java service tasks respectively. Create a project that you want to use. Go to your project Settings Dependencies . Click Add from repository , locate the uploaded .jar file, and click Select . Open your project Settings Work Item Handler . Enter the following values in the given fields: Name - Service Task Value - new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession, classLoader) Save the project. Example of creating web service task The default implementation of a service task in the BPMN2 specification is a web service. The web service support is based on the Apache CXF dynamic client, which provides a dedicated service task handler that implements the WorkItemHandler interface: org.jbpm.process.workitem.bpmn2.ServiceTaskHandler To create a service task using a web service, you must configure the web service: Create a business process. If necessary, in the upper-right corner of the screen, click the Properties icon. Click in the Imports property to open the Imports window. Click +Add under WSDL Imports to import the required WSDL (Web Services Description Language) values. For example: Location : http://localhost:8080/sample-ws-1/SimpleService?wsdl The location points to the WSDL file of your service. Namespace : http://bpmn2.workitem.process.jbpm.org/ The namespace must match targetNamespace from your WSDL file. In the process designer, select the Activities tool from the tool palette. Select Service Task . Click a blank area of the process designer canvas. Add or define the task information listed in the following table as required. Table 5.4. Web service task parameters Label Description Name The name of the service task. You can also double-click the service task shape to edit the name. Documentation Enter a description of the task. The text in this field is included in the process documentation. Click the Documentation tab in the upper-left side of the process designer canvas to view the process documentation. Implementation Specify a web service. Interface The service used to implement the script, such as CountriesPortService . Operation The operation that is called by the interface, such as getCountry . Assignments Click to add local variables. AdHoc Autostart Select if this is an ad hoc task that should be started automatically. AdHoc Autostart enables the task to automatically start when the process or case instance is created instead of being started by a start task. It is often used in case management. Is Async Select if this task should be invoked asynchronously. Make tasks asynchronous if they cannot be executed instantaneously, for example, a task performed by an outside service. Is Multiple Instance Select if this task has multiple instances. MI Execution mode Select if the multiple instances execute in parallel or sequentially. MI Collection input Specify a variable that represents a collection of elements for which new instances are created, such as inputCountryNames . MI Data Input Specify the input data assignment that is transferred to a web service, such as Parameter . MI Collection output The array list in which values returned from the web service task are stored, such as outputCountries . 
MI Data Output Specify the output data assignment for the web service task, which stores the result of class execution on the server, such as Result . MI Completion Condition (mvel) Specify the MVEL expression that is evaluated on each completed instance to check if the specified multiple instance node can complete. On Entry Action A Java, JavaScript, or MVEL script that specifies an action at the start of the task. On Exit Action A Java, JavaScript, or MVEL script that specifies an action at the end of the task. SLA Due Date The date that the service level agreement (SLA) expires. Metadata Attributes Add any custom metadata attribute name and value that you want to use for custom event listeners, such as a listener to implement some action when a metadata attribute is present. The Metadata Attributes enable the new metaData extensions to BPMN diagrams and modify the behavior of the overall task. Example of creating Java service task When you create a service task using a Java method, the method can contain only one parameter and returns a single value. To create a service task using a Java method, you must add the Java class to the dependencies of the project: Create a business process. In the process designer, select the Activities tool from the tool palette. Select Service Task . Click a blank area of the process designer canvas. If necessary, in the upper-right corner of the screen, click the Properties icon. Add or define the task information listed in the following table as required. Table 5.5. Java service task parameters Label Description Name The name of the service task. You can also double-click the service task shape to edit the name. Documentation Enter a description of the task. The text in this field is included in the process documentation. Click the Documentation tab in the upper-left side of the process designer canvas to view the process documentation. Implementation Specify that the task is implemented in Java. Interface The class used to implement the script, such as org.xyz.HelloWorld . Operation The method that is called by the interface, such as sayHello . Assignments Click to add local variables. AdHoc Autostart Select if this is an ad hoc task that should be started automatically. AdHoc Autostart enables the task to automatically start when the process or case instance is created instead of being started by a start task. It is often used in case management. Is Async Select if this task should be invoked asynchronously. Make tasks asynchronous if they cannot be executed instantaneously, for example, a task performed by an outside service. Is Multiple Instance Select if this task has multiple instances. MI Execution mode Select if the multiple instances execute in parallel or sequentially. MI Collection input Specify a variable that represents a collection of elements for which new instances are created, such as InputCollection . MI Data Input Specify the input data assignment that is transferred to a Java class. For example, you can set the input data assignments as Parameter and ParameterType . ParameterType represents the type of Parameter and sends arguments to the execution of the Java method. MI Collection output The array list in which values returned from the Java class are stored, such as OutputCollection . MI Data Output Specify the output data assignment for the Java service task, which stores the result of class execution on the server, such as Result . 
MI Completion Condition (mvel) Specify the MVEL expression that is evaluated on each completed instance to check if the specified multiple instance node can complete. For example, OutputCollection.size() <= 3 indicates more than three people are not addressed. On Entry Action A Java, JavaScript, or MVEL script that specifies an action at the start of the task. On Exit Action A Java, JavaScript, or MVEL script that specifies an action at the end of the task. SLA Due Date The date that the service level agreement (SLA) expires. Metadata Attributes Add any custom metadata attribute name and value that you want to use for custom event listeners, such as a listener to implement some action when a metadata attribute is present. The Metadata Attributes enable the new metaData extensions to BPMN diagrams and modify the behavior of the overall task. Click Save . 5.4. Creating user tasks User tasks are used to include human actions as input to the business process. Procedure Create a business process. In the process designer, select the Activities tool from the tool palette. Select User . Drag and drop a user task onto the process designer canvas. If necessary, in the upper-right corner of the screen, click the Properties icon. Add or define the task information listed in the following table as required. Table 5.6. User task parameters Label Description Name The name of the user task. You can also double-click the user task shape to edit the name. Documentation Enter a description of the task. The text in this field is included in the process documentation. Click the Documentation tab in the upper-left side of the process designer canvas to view the process documentation. Task Name The name of the human task. Subject Enter a subject for the task. Actors The actors responsible for executing the human task. Click Add to add a row, then select an actor from the list or click New to add a new actor. Groups The groups responsible for executing the human task. Click Add to add a row, then select a group from the list or click New to add a new group. Assignments Local variables for this task. Click to open the Task Data I/O window, then add data inputs and outputs as required. You can also add MVEL expressions as data input and output assignments. For more information about the MVEL language, see Language Guide for 2.0 . Reassignments Specify a different actor to complete this task. Notifications Click to specify notifications associated with the task. Is Async Select if this task should be invoked asynchronously. Make tasks asynchronous if they cannot be executed instantaneously, for example, a task performed by an outside service. Skippable Select if this task is not mandatory. Priority Specify a priority for the task. Description Enter a description for the human task. Created By The user that created this task. AdHoc Autostart Select if this is an ad hoc task that should be started automatically. AdHoc Autostart enables the task to automatically start when the process or case instance is created instead of being started by a start task. It is often used in case management. Multiple Instance Select if this task has multiple instances. On Entry Action A Java, JavaScript, or MVEL script that specifies an action at the start of the task. On Exit Action A Java, JavaScript, or MVEL script that specifies an action at the end of the task. Content The content of the script. SLA Due Date The date that the service level agreement (SLA) expires. 
Metadata Attributes Add any custom metadata attribute name and value that you want to use for custom event listeners, such as a listener to implement some action when a metadata attribute is present. The Metadata Attributes enable the new metaData extensions to BPMN diagrams and modify the behavior of the overall task. Click Save . 5.4.1. Setting the user task assignment strategy The user task assignment strategy is used to automatically assign the tasks to a suitable user. The assignment strategy allows more efficient task allocation based on the associated properties, such as potential owners, task priority, and task data. org.jbpm.task.assignment.strategy is the system property for the user task assignment strategy in Red Hat Process Automation Manager. You can also explicitly define an assignment strategy for a user task in Business Central. Prerequisites You have created a project in Business Central. You must set the org.jbpm.task.assignment.enabled system property to true . Procedure Create a business process. For more information about creating a business process in Business Central, see Chapter 5, Creating a business process in Business Central . Create a user task. For more information about creating a user task in Business Central, see Section 5.4, "Creating user tasks" . In the upper-right corner of the screen, click the Properties icon. Expand Implementation/Execution and click below to Assignments , to open the Data I/O window. Add a data input with the name AssignmentStrategy , with the type String , and with the constant source, such as the strategy name. Note If AssignmentStrategy is set as null, then no assignment strategy is used for the task. Click Ok . The AssignmentStrategy variable is added as a data input to the user task. 5.5. BPMN2 user task life cycle in process designer You can trigger a user task element during the process instance execution to create a user task. The user task service of the task execution engine executes the user task instance. The process instance continues the execution only when the associated user task is completed or aborted. A user task life cycle is as follows: When a process instance enters a user task element, the user task is in the Created stage. Created stage is a transient stage and the user task enters the Ready stage immediately. The task appears in the task list of all the actors who are allowed to execute the task. When an actor claims the user task, the task becomes Reserved . Note If a user task has a single potential actor, the task is assigned to that actor upon creation. When an actor who claimed the user task starts the execution, the status of the user task changes to InProgress . Once an actor completes the user task, the status changes to Completed or Failed depending on the execution outcome. There are also several other life cycle methods, including: Delegating or forwarding a user task so the user task is assigned to another actor. Revoking a user task, then the user task is no longer claimed by a single actor but is available to all actors who are allowed to take it. Suspending and resuming a user task. Stopping a user task that is in progress. Skipping a user task, in which the execution of the task is suspended. For more information about the user task life cycle, see the Web Services Human Task specification . 5.6. BPMN2 task permission matrix in process designer The user task permission matrix summarizes the actions that are allowed for specific user roles. 
The user roles are as follows: Potential owner: User who can claim the task, which was claimed earlier and is released and forwarded. The tasks with Ready status can be claimed, and the potential owner becomes the actual owner of the task. Actual owner: User who claims the task and progresses the task to completion or failure. Business administrator: Super user who can modify the status or progress with the task at any point of the task life cycle. The following permission matrix represents the authorizations for all operations that modify a task. + indicates that the user role is allowed to do the specified operation. - indicates that the user role is not allowed to do the specified operation, or the operation does not match with the user's role. Table 5.7. Main operations permissions matrix Operation Potential owner Actual owner Business administrator activate - - + claim + - + complete - + + delegate + + + fail - + + forward + + + nominate - - + release - + + remove - - + resume + + + skip + + + start + + + stop - + + suspend + + + 5.7. Making a copy of a business process You can make a copy of a business process in Business Central and modify the copied process as needed. Procedure In the business process designer, click Copy in the upper-right toolbar. In the Make a Copy window, enter a new name for the copied business process, select the target package, and optionally add a comment. Click Make a Copy . Modify the copied business process as needed and click Save to save the updated business process. 5.8. Resizing elements and using the zoom function to view business processes You can resize individual elements in a business process and zoom in or out to modify the view of your business process. Procedure In the business process designer, select the element and click the red dot in the lower-right corner of the element. Drag the red dot to resize the element. Figure 5.2. Resize an element To zoom in or out to view the entire diagram, click the plus or minus sign on the lower-right side of the canvas. Figure 5.3. Enlarge or shrink a business process 5.9. Generating process documentation in Business Central In the process designer in Business Central, you can view and print a report of the process definition. The process documentation summarizes the components, data, and visual flow of the process in a format (PDF) that you can print and share more easily. Procedure In Business Central, navigate to a project that contains a business process and select the process. In the process designer, click the Documentation tab to view the summary of the process file, and click Print in the top-right corner of the window to print the PDF report. Figure 5.4. Generate process documentation
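This chapter references several JVM system properties: jbpm.enable.multi.con in the note on multiple connections, and org.jbpm.task.assignment.enabled and org.jbpm.task.assignment.strategy in Section 5.4.1. A hedged sketch of passing them to a Red Hat JBoss EAP-based KIE Server or Business Central instance follows; the standalone.conf path and the PotentialOwnerBusyness strategy name are assumptions, not values taken from this document, so substitute your own start script and strategy.
# Append the properties to the JVM options used by the KIE Server / Business Central JVM.
# Adjust EAP_HOME for your installation; apply the same change on every server involved.
cat >> "$EAP_HOME/bin/standalone.conf" <<'EOF'
JAVA_OPTS="$JAVA_OPTS -Djbpm.enable.multi.con=true"
JAVA_OPTS="$JAVA_OPTS -Dorg.jbpm.task.assignment.enabled=true"
JAVA_OPTS="$JAVA_OPTS -Dorg.jbpm.task.assignment.strategy=PotentialOwnerBusyness"
EOF
Restart the server after editing the file so the properties take effect.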
[ "<bpmn2:process id=\"approvals\" name=\"approvals\" isExecutable=\"true\" processType=\"Public\"> <bpmn2:extensionElements> <tns:metaData name=\"riskLevel\"> <tns:metaValue><![CDATA[low]]></tns:metaValue> </tns:metaData> </bpmn2:extensionElements>", "public class MyListener implements ProcessEventListener { @Override public void beforeProcessStarted(ProcessStartedEvent event) { Map < String, Object > metadata = event.getProcessInstance().getProcess().getMetaData(); if (metadata.containsKey(\"low\")) { // Implement some action for that metadata attribute } } }" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/design-bus-proc
Chapter 1. Overview of AMQ Interconnect
Chapter 1. Overview of AMQ Interconnect AMQ Interconnect is a lightweight AMQP message router for building scalable, available, and performant messaging networks. 1.1. Key features You can use AMQ Interconnect to flexibly route messages between any AMQP-enabled endpoints, including clients, servers, and message brokers. AMQ Interconnect provides the following benefits: Connects clients and message brokers into an internet-scale messaging network with uniform addressing Supports high-performance direct messaging Uses redundant network paths to route around failures Streamlines the management of large deployments 1.2. Supported standards and protocols AMQ Interconnect supports the following industry-recognized standards and network protocols: Version 1.0 of the Advanced Message Queueing Protocol (AMQP) Modern TCP with IPv6 Note The details of distributed transactions (XA) within AMQP are not provided in the 1.0 version of the specification. AMQ Interconnect does not support XA transactions. Additional resources OASIS AMQP 1.0 Specification . 1.3. Supported configurations AMQ Interconnect is supported on Red Hat Enterprise Linux 6, 7, and 8. See Red Hat AMQ 7 Supported Configurations for more information. 1.4. Document conventions In this document, sudo is used for any command that requires root privileges. You should always exercise caution when using sudo , as any changes can affect the entire system. For more information about using sudo , see The sudo Command .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_interconnect/overview-router-rhel
4.4. GLOBAL SETTINGS
4.4. GLOBAL SETTINGS The GLOBAL SETTINGS panel is where you define the networking details for the primary LVS router's public and private network interfaces. Figure 4.3. The GLOBAL SETTINGS Panel The top half of this panel sets up the primary LVS router's public and private network interfaces. These are the interfaces already configured in Section 3.1.1, "Configuring Network Interfaces for Load Balancer Add-On with NAT" . Primary server public IP In this field, enter the publicly routable real IP address for the primary LVS node. Primary server private IP Enter the real IP address for an alternative network interface on the primary LVS node. This address is used solely as an alternative heartbeat channel for the backup router and does not have to correlate to the real private IP address assigned in Section 3.1.1, "Configuring Network Interfaces for Load Balancer Add-On with NAT" . You may leave this field blank, but doing so will mean there is no alternate heartbeat channel for the backup LVS router to use and therefore will create a single point of failure. Note The private IP address is not needed for Direct Routing configurations, as all real servers as well as the LVS directors share the same virtual IP addresses and should have the same IP route configuration. Note The primary LVS router's private IP can be configured on any interface that accepts TCP/IP, whether it be an Ethernet adapter or a serial port. TCP Timeout Enter the time (in seconds) before a TCP session times out. The default timeout value is 0. TCP Fin Timeout Enter the time (in seconds) before a TCP session times out after receiving a FIN packet. The default timeout value is 0. UDP Timeout Enter the time (in seconds) before a UDP session times out. The default timeout value is 0. Use network type Click the NAT button to select NAT routing. Click the Direct Routing button to select direct routing. The next three fields deal specifically with the NAT router's virtual network interface connecting the private network with the real servers. These fields do not apply to the direct routing network type. NAT Router IP Enter the private floating IP in this text field. This floating IP should be used as the gateway for the real servers. NAT Router netmask If the NAT router's floating IP needs a particular netmask, select it from the drop-down list. NAT Router device Use this text field to define the device name of the network interface for the floating IP address, such as eth1:1 . Note You should alias the NAT floating IP address to the Ethernet interface connected to the private network. In this example, the private network is on the eth1 interface, so eth1:1 is the floating IP address. Warning After completing this page, click the ACCEPT button to make sure you do not lose any changes when selecting a new panel.
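For orientation, the NAT Router device entry above corresponds to an IP alias on the private-network interface of the primary LVS router. A minimal sketch of what that alias looks like is shown below; the addresses are placeholders, and in practice the Load Balancer Add-On software manages the floating IP for you rather than you creating it by hand:
# Hypothetical private floating IP aliased to eth1 as eth1:1 (RHEL 6 style).
ifconfig eth1:1 192.168.1.254 netmask 255.255.255.0 up
# Verify the alias that the real servers will use as their gateway.
ifconfig eth1:1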
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-piranha-globalset-vsa
Chapter 6. Mirroring data for hybrid and Multicloud buckets
Chapter 6. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing storage that can be used by the MCG. For information, see Adding storage resources for hybrid or Multicloud . You can set up data mirroring by using the OpenShift UI, YAML, or the MCG command-line interface. See the following sections: Section 6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 6.2, "Creating bucket classes to mirror data using a YAML" 6.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure that you have downloaded the Multicloud Object Gateway (MCG) command-line interface. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Chapter 9, Object Bucket Claim .
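After creating the bucket class and OBC with the commands shown in the listing that follows, you can sanity-check the resulting resources. The resource names here follow the CLI example (mirror-to-aws, mirrored-bucket); the namespace and the fully qualified CRD names are assumptions, so adjust them if your OBC was created elsewhere:
# Confirm the mirrored bucket class and its claim exist (names follow the example above).
oc get bucketclasses.noobaa.io mirror-to-aws -n openshift-storage
oc get objectbucketclaims.objectbucket.io mirrored-bucket -n openshift-storage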
[ "noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror", "noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror", "additionalConfig: bucketclass: mirror-to-aws" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_hybrid_and_multicloud_resources/mirroring-data-for-hybrid-and-multicloud-buckets
probe::socket.readv
probe::socket.readv Name probe::socket.readv - Receiving a message via sock_readv Synopsis socket.readv Values state Socket state value family Protocol family value protocol Protocol value name Name of this probe type Socket type value size Message size in bytes flags Socket flags value Context The message receiver Description Fires at the beginning of receiving a message on a socket via the sock_readv function
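A short sketch of attaching to this probe with the stap command and printing the values listed above; running SystemTap requires root (or stapdev/stapusr membership), and the 10-second timer is just a convenient way to end the trace:
# Trace sock_readv receives for 10 seconds, printing size, protocol family, protocol and flags.
stap -v -e '
probe socket.readv {
  printf("%s: size=%d bytes family=%d protocol=%d flags=%d\n",
         name, size, family, protocol, flags)
}
probe timer.s(10) { exit() }
'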
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-socket-readv
Chapter 9. Using Bring-Your-Own-Host (BYOH) Windows instances as nodes
Chapter 9. Using Bring-Your-Own-Host (BYOH) Windows instances as nodes Bring-Your-Own-Host (BYOH) allows for users to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users looking to mitigate major disruptions in the event that a Windows server goes offline. 9.1. Configuring a BYOH Windows instance Creating a BYOH Windows instance requires creating a config map in the Windows Machine Config Operator (WMCO) namespace. Prerequisites Any Windows instances that are to be attached to the cluster as a node must fulfill the following requirements: The instance must be on the same network as the Linux worker nodes in the cluster. Port 22 must be open and running an SSH server. The default shell for the SSH server must be the Windows Command shell , or cmd.exe . Port 10250 must be open for log collection. An administrator user is present with the private key used in the secret set as an authorized SSH key. If you are creating a BYOH Windows instance for an installer-provisioned infrastructure (IPI) AWS cluster, you must add a tag to the AWS instance that matches the spec.template.spec.value.tag value in the compute machine set for your worker nodes. For example, kubernetes.io/cluster/<cluster_id>: owned or kubernetes.io/cluster/<cluster_id>: shared . If you are creating a BYOH Windows instance on vSphere, communication with the internal API server must be enabled. The hostname of the instance must follow the RFC 1123 DNS label requirements, which include the following standards: Contains only lowercase alphanumeric characters or '-'. Starts with an alphanumeric character. Ends with an alphanumeric character. Note Windows instances deployed by the WMCO are configured with the containerd container runtime. Because the WMCO installs and manages the runtime, it is recommended that you not manually install containerd on nodes. Procedure Create a ConfigMap named windows-instances in the WMCO namespace that describes the Windows instances to be added. Note Format each entry in the config map's data section by using the address as the key while formatting the value as username=<username> . Example config map kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: 10.1.42.1: |- 1 username=Administrator 2 instance.example.com: |- username=core 1 The address that the WMCO uses to reach the instance over SSH, either a DNS name or an IPv4 address. A DNS PTR record must exist for this address. It is recommended that you use a DNS name with your BYOH instance if your organization uses DHCP to assign IP addresses. If not, you need to update the windows-instances ConfigMap whenever the instance is assigned a new IP address. 2 The name of the administrator user created in the prerequisites. 9.2. Removing BYOH Windows instances You can remove BYOH instances attached to the cluster by deleting the instance's entry in the config map. Deleting an instance reverts that instance back to its state prior to adding to the cluster. Any logs and container runtime artifacts are not added to these instances. For an instance to be cleanly removed, it must be accessible with the current private key provided to WMCO. 
For example, to remove the 10.1.42.1 instance from the example, the config map would be changed to the following: kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: instance.example.com: |- username=core Deleting windows-instances is viewed as a request to deconstruct all Windows instances added as nodes.
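A hedged sketch of creating the windows-instances config map from the command line and checking that the instance joined as a node; the address and username come from the examples above, while the kubernetes.io/os=windows label selector is a common default rather than something stated in this document:
# Create (or update) the config map that lists the BYOH instances.
oc create configmap windows-instances \
  -n openshift-windows-machine-config-operator \
  --from-literal=10.1.42.1='username=Administrator' \
  --dry-run=client -o yaml | oc apply -f -
# After the WMCO configures the instance, it should appear as a Windows node.
oc get nodes -l kubernetes.io/os=windows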
[ "kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: 10.1.42.1: |- 1 username=Administrator 2 instance.example.com: |- username=core", "kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: instance.example.com: |- username=core" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/windows_container_support_for_openshift/byoh-windows-instance
8.32. coreutils
8.32. coreutils 8.32.1. RHBA-2014:1457 - coreutils bug fix and enhancement update Updated coreutils packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The coreutils packages contain the core GNU utilities. It is a combination of the old GNU fileutils, sh-utils, and textutils packages. Bug Fixes BZ# 812449 Previously, the "df" command did not display target device information when a symbolic link was specified as a parameter. Consequently, the information about the file was shown instead of the information about the device. This update applies a patch to fix this bug and the "df" command works as expected in the described scenario. BZ# 1016163 When no user was specified, the "id -G" and "groups" commands printed the default group ID listed in the password database. Consequently, the IDs were in certain cases ineffective or incorrect. With this update, the commands have been enhanced to print only proper IDs, thus showing correct information about the groups. BZ# 1046818 An update of the coreutils packages fixed the tail utility to handle symbolic links correctly. However, due to this update, tail returned unnecessary warnings about reverting to polling. This update provides a patch to fix this bug and the warning is only shown when necessary. BZ# 1057026 A recent update of the coreutils packages changed the format of the output from the "df" and "df -k" commands to one line per entry, which is required for POSIX mode. As a consequence, scripts relying on the two lines per entry format started to fail. To fix this bug, two-line entries have been reintroduced to the output for modes other than POSIX. As a result, scripts relying on the two-line format no longer fail. BZ# 1063887 A recent update of the coreutils packages caused a regression in the signal handling in the su utility. As a consequence, when the SIGTERM signal was received, a parent process was killed instead of the su process. With this update, handling of the SIGTERM signal has been fixed and su no longer kills the parent process upon receiving the termination signal. BZ# 1064621 The chcon(1) manual page did not describe the default behavior when dereferencing symbolic links; the "--dereference" option was not documented. This update adds the appropriate information to the manual page. BZ# 1075679 Certain file systems, for example XFS, have special features such as speculative preallocation of memory holes. These features could cause a failure of the "dd" command test in the upstream test suite. As a consequence, the coreutils package source rpm could not be rebuilt on XFS file systems. To address this bug, the test has been improved to prevent the failures in the described scenario. BZ# 1104244 The "tail --follow" command uses the inotify API to follow the changes in a file. However, inotify does not work on remote file systems and the tail utility is supposed to fall back to polling for files on such file systems. Previously, the Veritas file system was missing from the remote file system list and therefore, "tail --follow" did not display the updates to the file on this file system. The Veritas file system has been added to the remote file system list and the problem no longer occurs. In addition, this update adds the following Enhancement BZ# 1098078 This update enhances the "dd" command to support the count_bytes input flag. When the flag is specified, the count is treated as a number of bytes rather than blocks. 
This feature is useful for example when copying virtual disk images. Users of coreutils are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
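A brief sketch of the new flag in use; the file names are placeholders and the byte count is arbitrary:
# Copy exactly 1234567 bytes from a virtual disk image, regardless of the block size used for I/O.
dd if=disk.img of=slice.img bs=1M count=1234567 iflag=count_bytes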
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/coreutils
Introduction to OpenShift Dedicated
Introduction to OpenShift Dedicated OpenShift Dedicated 4 An overview of OpenShift Dedicated architecture Red Hat OpenShift Documentation Team
[ "oc run ip-lookup --image=busybox -i -t --restart=Never --rm -- /bin/sh -c \"/bin/nslookup -type=a myip.opendns.com resolver1.opendns.com | grep -E 'Address: [0-9.]+'\"", "spec: nodeSelector: role: worker", "oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth", "oc adm policy add-cluster-role-to-group self-provisioner system:authenticated:oauth" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/introduction_to_openshift_dedicated/index
Installation overview
Installation overview OpenShift Container Platform 4.17 Overview content for installing OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc get nodes", "NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a", "oc get machines -A", "NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m", "capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage", "oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml", "oc get deployment -n openshift-ingress", "oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'", "map[cidr:10.128.0.0/14 hostPrefix:23]", "oc get clusterversion version -o jsonpath='{.spec.capabilities}{\"\\n\"}{.status.capabilities}{\"\\n\"}'", "{\"additionalEnabledCapabilities\":[\"openshift-samples\"],\"baselineCapabilitySet\":\"None\"} {\"enabledCapabilities\":[\"openshift-samples\"],\"knownCapabilities\":[\"CSISnapshot\",\"Console\",\"Insights\",\"Storage\",\"baremetal\",\"marketplace\",\"openshift-samples\"]}", "oc patch clusterversion version --type merge -p '{\"spec\":{\"capabilities\":{\"baselineCapabilitySet\":\"vCurrent\"}}}' 1", "oc get clusterversion version -o jsonpath='{.spec.capabilities.additionalEnabledCapabilities}{\"\\n\"}'", "[\"openshift-samples\"]", "oc patch clusterversion/version --type merge -p '{\"spec\":{\"capabilities\":{\"additionalEnabledCapabilities\":[\"openshift-samples\", \"marketplace\"]}}}'", "oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type==\"ImplicitlyEnabledCapabilities\")]}{\"\\n\"}'", "{\"lastTransitionTime\":\"2022-07-22T03:14:35Z\",\"message\":\"The following capabilities could not be disabled: openshift-samples\",\"reason\":\"CapabilitiesImplicitlyEnabled\",\"status\":\"True\",\"type\":\"ImplicitlyEnabledCapabilities\"}", "oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=openshift-install-fips --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}", "tar -xvf openshift-install-rhel9-amd64.tar.gz" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/installation_overview/index
9.4. Hosts and Networking
9.4. Hosts and Networking 9.4.1. Refreshing Host Capabilities When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in the Manager. Refreshing Host Capabilities Click Compute Hosts and select a host. Click Management Refresh Capabilities . The list of network interface cards in the Network Interfaces tab for the selected host is updated. Any new network interface cards can now be used in the Manager. 9.4.2. Editing Host Network Interfaces and Assigning Logical Networks to Hosts You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces. Bridge and ethtool custom properties are also supported. Warning The only way to change the IP address of a host in Red Hat Virtualization is to remove the host and then to add it again. To change the VLAN settings of a host, see Section 9.4.4, "Editing a Host's VLAN Settings" . Important You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines. Note If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, you can hover your cursor over a physical network interface to view the switch port's current configuration. This can help to prevent incorrect configuration. Red Hat recommends checking the following information prior to assigning logical networks: Port Description (TLV type 4) and System Name (TLV type 5) help to detect to which ports and on which switch the host's interfaces are patched. Port VLAN ID shows the native VLAN ID configured on the switch port for untagged ethernet frames. All VLANs configured on the switch port are shown as VLAN Name and VLAN ID combinations. Editing Host Network Interfaces and Assigning Logical Networks to Hosts Click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab. Click Setup Host Networks . Optionally, hover your cursor over host network interface to view configuration information provided by the switch. Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area to the physical host network interface. Note If a NIC is connected to more than one logical network, only one of the networks can be non-VLAN. All the other logical networks must be unique VLANs. Configure the logical network: Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window. From the IPv4 tab, select a Boot Protocol from None , DHCP , or Static . If you selected Static , enter the IP , Netmask / Routing Prefix , and the Gateway . Note For IPv6, only static IPv6 addressing is supported. To configure the logical network, select the IPv6 tab and make the following entries: Set Boot Protocol to Static . For Routing Prefix , enter the length of the prefix using a forward slash and decimals. For example: /48 IP : The complete IPv6 address of the host network interface. For example: 2001:db8::1:0:0:6 Gateway : The source router's IPv6 address. For example: 2001:db8::1:0:0:1 Note If you change the host's management network IP address, you must reinstall the host for the new IP address to be configured. 
Each logical network can have a separate gateway defined from the management network gateway. This ensures traffic that arrives on the logical network will be forwarded using the logical network's gateway instead of the default gateway used by the management network. Important Set all hosts in a cluster to use the same IP stack for their management network; either IPv4 or IPv6 only. Dual stack is not supported. Use the QoS tab to override the default host network quality of service. Select Override QoS and enter the desired values in the following fields: Weighted Share : Signifies how much of the logical link's capacity a specific network should be allocated, relative to the other networks attached to the same logical link. The exact share depends on the sum of shares of all networks on that link. By default this is a number in the range 1-100. Rate Limit [Mbps] : The maximum bandwidth to be used by a network. Committed Rate [Mbps] : The minimum bandwidth required by a network. The Committed Rate requested is not guaranteed and will vary depending on the network infrastructure and the Committed Rate requested by other networks on the same logical link. To configure a network bridge, click the Custom Properties tab and select bridge_opts from the drop-down list. Enter a valid key and value with the following syntax: key = value . Separate multiple entries with a whitespace character. The following keys are valid, with the values provided as examples. For more information on these parameters, see Section B.1, "Explanation of bridge_opts Parameters" . To configure ethernet properties, click the Custom Properties tab and select ethtool_opts from the drop-down list. Enter a valid value using the format of the command-line arguments of ethtool. For example: This field can accept wildcards. For example, to apply the same option to all of this network's interfaces, use: The ethtool_opts option is not available by default; you need to add it using the engine configuration tool. See Section B.2, "How to Set Up Red Hat Virtualization Manager to Use Ethtool" for more information. For more information on ethtool properties, see the manual page by typing man ethtool in the command line. To configure Fibre Channel over Ethernet (FCoE), click the Custom Properties tab and select fcoe from the drop-down list. Enter a valid key and value with the following syntax: key = value . At least enable=yes is required. You can also add dcb= and auto_vlan= [yes|no] . Separate multiple entries with a whitespace character. The fcoe option is not available by default; you need to add it using the engine configuration tool. See Section B.3, "How to Set Up Red Hat Virtualization Manager to Use FCoE" for more information. Note A separate, dedicated logical network is recommended for use with FCoE. To change the default network used by the host from the management network (ovirtmgmt) to a non-management network, configure the non-management network's default route. See Section 9.1.5, "Configuring a Non-Management Logical Network as the Default Route" for more information. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. For more information about unsynchronized hosts and how to synchronize them, see Section 9.4.3, "Synchronizing Host Networks" . Select the Verify connectivity between Host and Engine check box to check network connectivity. This action only works if the host is in maintenance mode. Click OK . 
Note If not all network interface cards for the host are displayed, click Management Refresh Capabilities to update the list of network interface cards available for that host. 9.4.3. Synchronizing Host Networks The Manager defines a network interface as out-of-sync when the definition of the interface on the host differs from the definitions stored by the Manager. Out-of-sync networks appear with an Out-of-sync icon in the host's Network Interfaces tab and with a corresponding icon in the Setup Host Networks window. When a host's network is out of sync, the only activities that you can perform on the unsynchronized network in the Setup Host Networks window are detaching the logical network from the network interface or synchronizing the network. Understanding How a Host Becomes out-of-sync A host will become out of sync if: You make configuration changes on the host rather than using the Edit Logical Networks window, for example: Changing the VLAN identifier on the physical host. Changing the Custom MTU on the physical host. You move a host to a different data center with the same network name, but with different values/parameters. You change a network's VM Network property by manually removing the bridge from the host. Preventing Hosts from Becoming Unsynchronized Following these best practices will prevent your host from becoming unsynchronized: Use the Administration Portal to make changes rather than making changes locally on the host. Edit VLAN settings according to the instructions in Section 9.4.4, "Editing a Host's VLAN Settings" . Synchronizing Hosts Synchronizing a host's network interface definitions involves using the definitions from the Manager and applying them to the host. If these are not the definitions that you require, after synchronizing your hosts, update their definitions from the Administration Portal. You can synchronize a host's networks on three levels: Per logical network Per host Per cluster Synchronizing Host Networks on the Logical Network Level Click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab. Click Setup Host Networks . Hover your cursor over the unsynchronized network and click the pencil icon to open the Edit Network window. Select the Sync network check box. Click OK to save the network change. Click OK to close the Setup Host Networks window. Synchronizing a Host's Networks on the Host level Click the Sync All Networks button in the host's Network Interfaces tab to synchronize all of the host's unsynchronized network interfaces. Synchronizing a Host's Networks on the Cluster level Click the Sync All Networks button in the cluster's Logical Networks tab to synchronize all unsynchronized logical network definitions for the entire cluster. Note You can also synchronize a host's networks via the REST API. See syncallnetworks in the REST API Guide . 9.4.4. Editing a Host's VLAN Settings To change the VLAN settings of a host, the host must be removed from the Manager, reconfigured, and re-added to the Manager. To keep networking synchronized, do the following: Put the host in maintenance mode. Manually remove the management network from the host. This will make the host reachable over the new VLAN. Add the host to the cluster. Virtual machines that are not connected directly to the management network can be migrated between hosts safely. 
The following warning message appears when the VLAN ID of the management network is changed: Proceeding causes all of the hosts in the data center to lose connectivity to the Manager and causes the migration of hosts to the new management network to fail. The management network will be reported as "out-of-sync". Important If you change the management network's VLAN ID, you must reinstall the host to apply the new VLAN ID. 9.4.5. Adding Multiple VLANs to a Single Network Interface Using Logical Networks Multiple VLANs can be added to a single network interface to separate traffic on the one host. Important You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows. Adding Multiple VLANs to a Network Interface using Logical Networks Click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab. Click Setup Host Networks . Drag your VLAN-tagged logical networks into the Assigned Logical Networks area to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging. Edit the logical networks: Hover your cursor over an assigned logical network and click the pencil icon. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. Select a Boot Protocol : None DHCP Static Provide the IP and Subnet Mask . Click OK . Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode. Click OK . Add the logical network to each host in the cluster by editing a NIC on each host in the cluster. After this is done, the network will become operational. This process can be repeated multiple times, selecting and editing the same network interface each time on each host to add logical networks with different VLAN tags to a single network interface. 9.4.6. Assigning Additional IPv4 Addresses to a Host Network A host network, such as the ovirtmgmt management network, is created with only one IP address when initially set up. This means that if a NIC's configuration file (for example, /etc/sysconfig/network-scripts/ifcfg-eth01 ) is configured with multiple IP addresses, only the first listed IP address will be assigned to the host network. Additional IP addresses may be required if connecting to storage, or to a server on a separate private subnet using the same NIC. The vdsm-hook-extra-ipv4-addrs hook allows you to configure additional IPv4 addresses for host networks. For more information about hooks, see Appendix A, VDSM and Hooks . In the following procedure, the host-specific tasks must be performed on each host for which you want to configure additional IP addresses. Assigning Additional IPv4 Addresses to a Host Network On the host that you want to configure additional IPv4 addresses for, install the VDSM hook package. The package is available by default on Red Hat Virtualization Hosts but needs to be installed on Red Hat Enterprise Linux hosts. On the Manager, run the following command to add the key: Restart the ovirt-engine service: In the Administration Portal, click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab and click Setup Host Networks . Edit the host network interface by hovering the cursor over the assigned logical network and clicking the pencil icon. 
Select ipv4_addr from the Custom Properties drop-down list and add the additional IP address and prefix (for example 5.5.5.5/24). Multiple IP addresses must be comma-separated. Click OK to close the Edit Network window. Click OK to close the Setup Host Networks window. The additional IP addresses will not be displayed in the Manager, but you can run the command ip addr show on the host to confirm that they have been added. 9.4.7. Adding Network Labels to Host Network Interfaces Using network labels allows you to greatly simplify the administrative workload associated with assigning logical networks to host network interfaces. Setting a label on a role network (for instance, a migration network or a display network) causes a mass deployment of that network on all hosts. Such mass additions of networks are achieved through the use of DHCP. This method of mass deployment was chosen over static address assignment because typing in many static IP addresses does not scale. There are two methods of adding labels to a host network interface: Manually, in the Administration Portal Automatically, with the LLDP Labeler service Adding Network Labels in the Administration Portal Click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab. Click Setup Host Networks . Click Labels and right-click [New Label] . Select a physical network interface to label. Enter a name for the network label in the Label text field. Click OK . Adding Network Labels with the LLDP Labeler Service You can automate the process of assigning labels to host network interfaces in the configured list of clusters with the LLDP Labeler service. By default, LLDP Labeler runs as an hourly service. This option is useful if you make hardware changes (for example, NICs, switches, or cables) or change switch configurations. Prerequisites The interfaces must be connected to a Juniper switch. The Juniper switch must be configured to provide the Port VLAN using LLDP. Procedure Configure the username and password in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf : username - the username of the Manager administrator. The default is admin@internal . password - the password of the Manager administrator. The default is 123456 . Configure the LLDP Labeler service by updating the following values in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf : clusters - a comma-separated list of clusters on which the service should run. Wildcards are supported. For example, Cluster* defines LLDP Labeler to run on all clusters starting with the word Cluster . To run the service on all clusters in the data center, type * . The default is Def* . api_url - the full URL of the Manager's API. The default is https:// Manager_FQDN /ovirt-engine/api . ca_file - the path to the custom CA certificate file. Leave this value empty if you do not use custom certificates. The default is empty. auto_bonding - enables LLDP Labeler's bonding capabilities. The default is true . auto_labeling - enables LLDP Labeler's labeling capabilities. The default is true . Optionally, you can configure the service to run at a different time interval by changing the value of OnUnitActiveSec in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.timer . The default is 1h . Configure the service to start now and at boot by entering the following command: To invoke the service manually, enter the following command: You have added a network label to a host network interface. 
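To confirm that the LLDP Labeler service started and ran, the standard systemd tools can be used on the machine where the service is installed. This is a hedged verification sketch; the unit name is taken from the enable command referenced in the procedure above:
systemctl status ovirt-lldp-labeler.service
journalctl -u ovirt-lldp-labeler.service
Errors reported by the service, such as authentication failures against the Manager API, would appear in the journal output.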
Newly created logical networks with the same label are automatically assigned to all host network interfaces with that label. Removing a label from a logical network automatically removes that logical network from all host network interfaces with that label. 9.4.8. Changing the FQDN of a Host Use the following procedure to change the fully qualified domain name of hosts. Updating the FQDN of a Host Place the host into maintenance mode so the virtual machines are live migrated to another host. See Section 10.5.15, "Moving a Host to Maintenance Mode" for more information. Alternatively, manually shut down or migrate all the virtual machines to another host. See Manually Migrating Virtual Machines in the Virtual Machine Management Guide for more information. Click Remove , and click OK to remove the host from the Administration Portal. Use the hostnamectl tool to update the host name. For more options, see Configure Host Names in the Red Hat Enterprise Linux 7 Networking Guide . Reboot the host. Re-register the host with the Manager. See Section 10.5.1, "Adding Standard Hosts to the Red Hat Virtualization Manager" for more information. 9.4.9. IPv6 Networking Support Red Hat Virtualization supports static IPv6 networking in most contexts. Note Red Hat Virtualization requires IPv6 to remain enabled on the computer or virtual machine where you are running the Manager (also called "the Manager machine"). Do not disable IPv6 on the Manager machine, even if your systems do not use it. Limitations for IPv6 Only static IPv6 addressing is supported. Dynamic IPv6 addressing with DHCP or Stateless Address Autoconfiguration is not supported. Dual-stack addressing, IPv4 and IPv6, is not supported. OVN networking can be used with only IPv4 or IPv6. Switching clusters from IPv4 to IPv6 is not supported. Only a single gateway per host can be set for IPv6. If both networks share a single gateway (are on the same subnet), you can move the default route role from the management network (ovirtmgmt) to another logical network. The host and Manager should have the same IPv6 gateway. If the host and Manager are not on the same subnet, the Manager might lose connectivity with the host because the IPv6 gateway was removed. Using a glusterfs storage domain with an IPv6-addressed gluster server is not supported. 9.4.10. Setting Up and Configuring SR-IOV This topic summarizes the steps for setting up and configuring SR-IOV, with links out to topics that cover each step in detail. 9.4.10.1. Prerequisites Set up your hardware in accordance with the Hardware Considerations for Implementing SR-IOV . 9.4.10.2. Set Up and Configure SR-IOV To set up and configure SR-IOV, complete the following tasks: Configuring the host for PCI passthrough . Editing the virtual function configuration on a NIC . Enabling passthrough on a vNIC Profile . Configuring Virtual Machines with SR-IOV-Enabled vNICs to Reduce Network Outage during Migration . Notes The number of 'passthrough' vNICs depends on the number of available virtual functions (VFs) on the host. For example, to run a virtual machine (VM) with three SR-IOV cards (vNICs), the host must have three or more VFs enabled. Hotplug and unplug are supported. Live migration is supported from RHV version 4.1 onward. To migrate a VM, the destination host must also have enough available VFs to receive the VM. During the migration, the VM releases a number of VFs on the source host and occupies the same number of VFs on the destination host. 
On the host, you will see a device, link, or iface like any other interface. That device disappears when it is attached to a VM, and reappears when it is released. Avoid attaching a host device directly to a VM when using the SR-IOV feature. To use a VF as a trunk port with several VLANs and configure the VLANs within the guest, see Cannot configure VLAN on SR-IOV VF interfaces inside the Virtual Machine . Here is an example of what the libvirt XML for the interface would look like: <interface type='hostdev'> <mac address='00:1a:yy:xx:vv:xx'/> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x0'/> </source> <alias name='ua-18400536-5688-4477-8471-be720e9efc68'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </interface> Troubleshooting The following example shows you how to get diagnostic information about the VFs attached to an interface. 9.4.10.3. Additional Resources How to configure SR-IOV passthrough for RHV VM? How to configure bonding with SR-IOV VF (Virtual Function) in RHV How to enable host device passthrough and SR-IOV to allow assigning dedicated virtual NICs to virtual machines in RHV
[ "forward_delay=1500 gc_timer=3765 group_addr=1:80:c2:0:0:0 group_fwd_mask=0x0 hash_elasticity=4 hash_max=512 hello_time=200 hello_timer=70 max_age=2000 multicast_last_member_count=2 multicast_last_member_interval=100 multicast_membership_interval=26000 multicast_querier=0 multicast_querier_interval=25500 multicast_query_interval=13000 multicast_query_response_interval=1000 multicast_query_use_ifaddr=0 multicast_router=1 multicast_snooping=1 multicast_startup_query_count=2 multicast_startup_query_interval=3125", "--coalesce em1 rx-usecs 14 sample-interval 3 --offload em2 rx on lro on tso off --change em1 speed 1000 duplex half", "--coalesce * rx-usecs 14 sample-interval 3", "Changing certain properties (e.g. VLAN, MTU) of the management network could lead to loss of connectivity to hosts in the data center, if its underlying network infrastructure isn't configured to accommodate the changes. Are you sure you want to proceed?", "yum install vdsm-hook-extra-ipv4-addrs", "engine-config -s 'UserDefinedNetworkCustomProperties=ipv4_addrs=.*'", "systemctl restart ovirt-engine.service", "systemctl enable --now ovirt-lldp-labeler", "/usr/bin/python /usr/share/ovirt-lldp-labeler/ovirt_lldp_labeler_cli.py", "hostnamectl set-hostname NEW_FQDN", "---- <interface type='hostdev'> <mac address='00:1a:yy:xx:vv:xx'/> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x0'/> </source> <alias name='ua-18400536-5688-4477-8471-be720e9efc68'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </interface> ----", "ip -s link show dev enp5s0f0 1: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT qlen 1000 link/ether 86:e2:ba:c2:50:f0 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 30931671 218401 0 0 0 19165434 TX: bytes packets errors dropped carrier collsns 997136 13661 0 0 0 0 vf 0 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off, query_rss off vf 1 MAC 00:1a:4b:16:01:5e, spoof checking on, link-state auto, trust off, query_rss off vf 2 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off, query_rss off" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Hosts_and_Networking
Appendix B. List of Bugzillas by Component
Appendix B. List of Bugzillas by Component This appendix provides a list of all components and their related Bugzillas that are included in this book. Table B.1. List of Bugzillas by Component Component New Features Notable Bug Fixes Technology Previews Known Issues 389-ds-base BZ# 1560653 BZ# 1515190 , BZ# 1525256 , BZ# 1551071 , BZ# 1552698 , BZ# 1559945 , BZ# 1566444 , BZ# 1568462 , BZ# 1570033 , BZ# 1570649 , BZ# 1576485 , BZ# 1581737 , BZ# 1582092 , BZ# 1582747 , BZ# 1593807 , BZ# 1598478 , BZ# 1598718 , BZ# 1614501 NetworkManager BZ# 1414093 , BZ# 1487477 BZ# 1507864 OVMF BZ# 653382 anaconda BZ# 1562301 BZ# 1360223 , BZ# 1436304 , BZ# 1535781 , BZ# 1554271 , BZ# 1557485 , BZ# 1561662 , BZ# 1561930 audit BZ# 1559032 augeas BZ# 1544520 bind BZ# 1452091 , BZ# 1510008 binutils BZ# 1553842 , BZ# 1557346 clevis BZ# 1472435 cockpit BZ# 1568728 corosync BZ# 1413573 criu BZ# 1400230 custodia BZ# 1403214 device-mapper-multipath BZ# 1541116 , BZ# 1554516 , BZ# 1593459 BZ# 1498724 , BZ# 1526876 , BZ# 1544958 , BZ# 1584228 , BZ# 1610263 distribution BZ# 1567133 BZ# 1062656 dnf BZ# 1461652 dpdk BZ# 1578688 elfutils BZ# 1565775 fence-agents BZ# 1476401 firefox BZ# 1576289 firewalld BZ# 1477771 , BZ# 1554993 BZ# 1498923 freeradius BZ# 1489758 freetype BZ# 1576504 fwupd BZ# 1623466 gcc BZ# 1552021 gcc-libraries BZ# 1600265 gdb BZ# 1553104 BZ# 1347993 , BZ# 1578378 genwqe-tools BZ# 1521050 ghostscript BZ# 1551782 git BZ# 1213059 , BZ# 1284081 glibc BZ# 1448107 , BZ# 1461231 BZ# 1401665 gnome-shell BZ# 1481395 BZ# 1625700 gnutls BZ# 1561481 ima-evm-utils BZ# 1627278 BZ# 1384450 initscripts BZ# 1493069 , BZ# 1542514 , BZ# 1583677 BZ# 1554364 , BZ# 1554690 , BZ# 1559384 , BZ# 1572659 ipa BZ# 1115294 , BZ# 1298286 ipa-server-container BZ# 1405325 ipset BZ# 1440741 , BZ# 1557600 java-11-openjdk BZ# 1570856 jss BZ# 1557575 , BZ# 1560682 kernel BZ# 1205497 , BZ# 1305092 , BZ# 1322930 , BZ# 1344565 , BZ# 1350553 , BZ# 1451438 , BZ# 1457161 , BZ# 1471950 , BZ# 1496859 , BZ# 1507027 , BZ# 1511351 , BZ# 1515584 , BZ# 1520356 , BZ# 1557599 , BZ# 1570090 , BZ# 1584753 , BZ# 1620372 BZ# 1527799 , BZ# 1541250 , BZ# 1544920 , BZ# 1554907 , BZ# 1636930 BZ# 916382 , BZ# 1109348 , BZ# 1111712 , BZ# 1206277 , BZ# 1230959 , BZ# 1274459 , BZ# 1299662 , BZ# 1348508 , BZ# 1387768 , BZ# 1393375 , BZ# 1414957 , BZ# 1457533 , BZ# 1460849 , BZ# 1503123 , BZ# 1519746 , BZ# 1589397 BZ# 1428549 , BZ# 1520302 , BZ# 1528466 , BZ# 1608704 , BZ# 1615210 , BZ# 1622413 , BZ# 1623150 , BZ# 1627563 , BZ# 1632575 kernel-alt BZ# 1615370 kernel-rt BZ# 1297061 , BZ# 1553351 BZ# 1608672 kexec-tools BZ# 1352763 ksh BZ# 1503922 libcgroup BZ# 1549175 libguestfs BZ# 1541908 , BZ# 1557273 BZ# 1387213 , BZ# 1441197 , BZ# 1477912 libnftnl BZ# 1332585 libpciaccess BZ# 1641044 libreswan BZ# 1536404 , BZ# 1591817 BZ# 1375750 libsepol BZ# 1564775 libstoragemgmt BZ# 1119909 libusnic_verbs BZ# 916384 libvirt BZ# 1447169 BZ# 1283251 , BZ# 1475770 linuxptp BZ# 1549015 lorax-composer BZ# 1642156 lvm2 BZ# 1337220 , BZ# 1643651 man-db BZ# 1515352 mutter BZ# 1579257 nautilus BZ# 1600163 ndctl BZ# 1635441 net-snmp BZ# 1533943 nftables BZ# 1571968 nmap BZ# 1546246 , BZ# 1573411 nss BZ# 1425514 , BZ# 1431210 , BZ# 1432142 nuxwdog BZ# 1615617 opal-prd BZ# 1564097 openjpeg BZ# 1553235 opensc BZ# 1547117 , BZ# 1562277 , BZ# 1562572 openscap BZ# 1556988 BZ# 1548949 , BZ# 1603347 , BZ# 1640522 openssl BZ# 1519396 openssl-ibmca BZ# 1519395 oscap-anaconda-addon BZ# 1636847 other BZ# 1432080 , BZ# 1609302 , BZ# 1612965 , BZ# 1627126 , BZ# 1649493 BZ# 1062759 , BZ# 1259547 , 
BZ# 1464377 , BZ# 1477977 , BZ# 1559615 , BZ# 1613966 BZ# 1569484 , BZ# 1571754 , BZ# 1611665 , BZ# 1633185 , BZ# 1635135 , BZ# 1647485 pacemaker BZ# 1590483 pam_pkcs11 BZ# 1578029 pcp BZ# 1565370 pcs BZ# 1427273 , BZ# 1475318 BZ# 1566382 , BZ# 1572886 , BZ# 1588667 , BZ# 1590533 BZ# 1433016 pcsc-lite BZ# 1516993 pcsc-lite-ccid BZ# 1558258 perl BZ# 1557574 perl-LDAP BZ# 1520364 pki-core BZ# 1550742 , BZ# 1550786 , BZ# 1557569 , BZ# 1562423 , BZ# 1585866 BZ# 1546708 , BZ# 1549632 , BZ# 1568615 , BZ# 1580394 powerpc-utils BZ# 1540067 , BZ# 1592429 , BZ# 1596121 , BZ# 1628907 procps-ng BZ# 1518986 BZ# 1507356 qemu-guest-agent BZ# 1569013 qemu-kvm BZ# 1103193 radvd BZ# 1475983 rear BZ# 1418459 , BZ# 1496518 BZ# 1685166 resource-agents BZ# 1470840 , BZ# 1538689 , BZ# 1568588 , BZ# 1568589 BZ# 1513957 rhel-system-roles BZ# 1479381 BZ# 1439896 rpm BZ# 1395818 , BZ# 1555326 rsyslog BZ# 1482819 , BZ# 1531295 , BZ# 1539193 BZ# 1553700 rt-setup BZ# 1616038 samba BZ# 1558560 sane-backends BZ# 1512252 scap-security-guide BZ# 1443551 , BZ# 1619689 BZ# 1631378 scap-workbench BZ# 1533108 selinux-policy BZ# 1443473 , BZ# 1460322 sos-collector BZ# 1481861 sssd BZ# 1416528 BZ# 1068725 strongimcv BZ# 755087 subscription-manager BZ# 1576423 sudo BZ# 1533964 , BZ# 1547974 , BZ# 1548380 BZ# 1560657 systemd BZ# 1284974 systemtap BZ# 1565773 tss2 BZ# 1384452 tuned BZ# 1546598 BZ# 1649408 usbguard BZ# 1508878 BZ# 1480100 vdo BZ# 1617896 vsftpd BZ# 1479237 wayland BZ# 1481411 wpa_supplicant BZ# 1434434 , BZ# 1505404 xorg-x11-drv-nouveau BZ# 1624337 xorg-x11-drv-qxl BZ# 1640918 xorg-x11-server BZ# 1564632 BZ# 1624847 ypserv BZ# 1492892 yum BZ# 1481220 BZ# 1528608 yum-utils BZ# 1497351 , BZ# 1506205
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/appe-list-of-bugzillas-by-component
function::backtrace
function::backtrace Name function::backtrace - Hex backtrace of current kernel stack Synopsis Arguments None Description This function returns a string of hex addresses that are a backtrace of the kernel stack. Output may be truncated as per maximum string length (MAXSTRINGLEN). See ubacktrace for user-space backtrace.
[ "backtrace:string()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-backtrace
Appendix F. Object Storage Daemon (OSD) configuration options
Appendix F. Object Storage Daemon (OSD) configuration options The following are Ceph Object Storage Daemon (OSD) configuration options that can be set during deployment. osd_uuid Description The universally unique identifier (UUID) for the Ceph OSD. Type UUID Default The UUID. Note The osd uuid applies to a single Ceph OSD. The fsid applies to the entire cluster. osd_data Description The path to the OSD's data. You must create the directory when deploying Ceph. Mount a drive for OSD data at this mount point. Type String Default /var/lib/ceph/osd/USDcluster-USDid osd_max_write_size Description The maximum size of a write in megabytes. Type 32-bit Integer Default 90 osd_client_message_size_cap Description The largest client data message allowed in memory. Type 64-bit Integer Unsigned Default 500MB default. 500*1024L*1024L osd_class_dir Description The class path for RADOS class plug-ins. Type String Default USDlibdir/rados-classes osd_max_scrubs Description The maximum number of simultaneous scrub operations for a Ceph OSD. Type 32-bit Int Default 1 osd_scrub_thread_timeout Description The maximum time in seconds before timing out a scrub thread. Type 32-bit Integer Default 60 osd_scrub_finalize_thread_timeout Description The maximum time in seconds before timing out a scrub finalize thread. Type 32-bit Integer Default 60*10 osd_scrub_begin_hour Description The earliest hour that light or deep scrubbing can begin. It is used with the osd scrub end hour parameter to define a scrubbing time window and allows constraining scrubbing to off-peak hours. The setting takes an integer to specify the hour on the 24-hour cycle where 0 represents the hour from 12:01 a.m. to 1:00 a.m., 13 represents the hour from 1:01 p.m. to 2:00 p.m., and so on. Type 32-bit Integer Default 0 for 12:01 to 1:00 a.m. osd_scrub_end_hour Description The latest hour that light or deep scrubbing can begin. It is used with the osd scrub begin hour parameter to define a scrubbing time window and allows constraining scrubbing to off-peak hours. The setting takes an integer to specify the hour on the 24-hour cycle where 0 represents the hour from 12:01 a.m. to 1:00 a.m., 13 represents the hour from 1:01 p.m. to 2:00 p.m., and so on. The end hour must be greater than the begin hour. Type 32-bit Integer Default 24 for 11:01 p.m. to 12:00 a.m. osd_scrub_load_threshold Description The maximum load. Ceph will not scrub when the system load (as defined by the getloadavg() function) is higher than this number. Default is 0.5 . Type Float Default 0.5 osd_scrub_min_interval Description The minimum interval in seconds for scrubbing the Ceph OSD when the Red Hat Ceph Storage cluster load is low. Type Float Default Once per day. 60*60*24 osd_scrub_max_interval Description The maximum interval in seconds for scrubbing the Ceph OSD irrespective of cluster load. Type Float Default Once per week. 7*60*60*24 osd_scrub_interval_randomize_ratio Description Takes the ratio and randomizes the scheduled scrub between osd scrub min interval and osd scrub max interval . Type Float Default 0.5 . mon_warn_not_scrubbed Description Number of seconds after osd_scrub_interval to warn about any PGs that were not scrubbed. Type Integer Default 0 (no warning). osd_scrub_chunk_min Description The object store is partitioned into chunks which end on hash boundaries. For chunky scrubs, Ceph scrubs objects one chunk at a time with writes blocked for that chunk. The osd scrub chunk min setting represents minimum number of chunks to scrub. 
Type 32-bit Integer Default 5 osd_scrub_chunk_max Description The maximum number of chunks to scrub. Type 32-bit Integer Default 25 osd_scrub_sleep Description The time to sleep between deep scrub operations. Type Float Default 0 (or off). osd_scrub_during_recovery Description Allows scrubbing during recovery. Type Bool Default false osd_scrub_invalid_stats Description Forces extra scrub to fix stats marked as invalid. Type Bool Default true osd_scrub_priority Description Controls queue priority of scrub operations versus client I/O. Type Unsigned 32-bit Integer Default 5 osd_scrub_cost Description Cost of scrub operations in megabytes for queue scheduling purposes. Type Unsigned 32-bit Integer Default 50 << 20 osd_deep_scrub_interval Description The interval for deep scrubbing, that is fully reading all data. The osd scrub load threshold parameter does not affect this setting. Type Float Default Once per week. 60*60*24*7 osd_deep_scrub_stride Description Read size when doing a deep scrub. Type 32-bit Integer Default 512 KB. 524288 mon_warn_not_deep_scrubbed Description Number of seconds after osd_deep_scrub_interval to warn about any PGs that were not scrubbed. Type Integer Default 0 (no warning). osd_deep_scrub_randomize_ratio Description The rate at which scrubs will randomly become deep scrubs (even before osd_deep_scrub_interval has past). Type Float Default 0.15 or 15%. osd_deep_scrub_update_digest_min_age Description How many seconds old objects must be before scrub updates the whole-object digest. Type Integer Default 120 (2 hours). osd_op_num_shards Description The number of shards for client operations. Type 32-bit Integer Default 0 osd_op_num_threads_per_shard Description The number of threads per shard for client operations. Type 32-bit Integer Default 0 osd_op_num_shards_hdd Description The number of shards for HDD operations. Type 32-bit Integer Default 5 osd_op_num_threads_per_shard_hdd Description The number of threads per shard for HDD operations. Type 32-bit Integer Default 1 osd_op_num_shards_ssd Description The number of shards for SSD operations. Type 32-bit Integer Default 8 osd_op_num_threads_per_shard_ssd Description The number of threads per shard for SSD operations. Type 32-bit Integer Default 2 osd_client_op_priority Description The priority set for client operations. It is relative to osd recovery op priority . Type 32-bit Integer Default 63 Valid Range 1-63 osd_recovery_op_priority Description The priority set for recovery operations. It is relative to osd client op priority . Type 32-bit Integer Default 3 Valid Range 1-63 osd_op_thread_timeout Description The Ceph OSD operation thread timeout in seconds. Type 32-bit Integer Default 30 osd_op_complaint_time Description An operation becomes complaint worthy after the specified number of seconds have elapsed. Type Float Default 30 osd_disk_threads Description The number of disk threads, which are used to perform background disk intensive OSD operations such as scrubbing and snap trimming. Type 32-bit Integer Default 1 osd_op_history_size Description The maximum number of completed operations to track. Type 32-bit Unsigned Integer Default 20 osd_op_history_duration Description The oldest completed operation to track. Type 32-bit Unsigned Integer Default 600 osd_op_log_threshold Description How many operations logs to display at once. Type 32-bit Integer Default 5 osd_op_timeout Description The time in seconds after which running OSD operations time out. 
Type Integer Default 0 Important Do not set the osd op timeout option unless your clients can handle the consequences. For example, setting this parameter on clients running in virtual machines can lead to data corruption because the virtual machines interpret this timeout as a hardware failure. osd_max_backfills Description The maximum number of backfill operations allowed to or from a single OSD. Type 64-bit Unsigned Integer Default 1 osd_backfill_scan_min Description The minimum number of objects per backfill scan. Type 32-bit Integer Default 64 osd_backfill_scan_max Description The maximum number of objects per backfill scan. Type 32-bit Integer Default 512 osd_backfillfull_ratio Description Refuse to accept backfill requests when the Ceph OSD's full ratio is above this value. Type Float Default 0.85 osd_backfill_retry_interval Description The number of seconds to wait before retrying backfill requests. Type Double Default 10.0 osd_map_dedup Description Enable removing duplicates in the OSD map. Type Boolean Default true osd_map_cache_size Description The size of the OSD map cache in megabytes. Type 32-bit Integer Default 50 osd_map_cache_bl_size Description The size of the in-memory OSD map cache in OSD daemons. Type 32-bit Integer Default 50 osd_map_cache_bl_inc_size Description The size of the in-memory OSD map cache incrementals in OSD daemons. Type 32-bit Integer Default 100 osd_map_message_max Description The maximum map entries allowed per MOSDMap message. Type 32-bit Integer Default 40 osd_snap_trim_thread_timeout Description The maximum time in seconds before timing out a snap trim thread. Type 32-bit Integer Default 60*60*1 osd_pg_max_concurrent_snap_trims Description The max number of parallel snap trims/PG. This controls how many objects per PG to trim at once. Type 32-bit Integer Default 2 osd_snap_trim_sleep Description Insert a sleep between every trim operation a PG issues. Type 32-bit Integer Default 0 osd_max_trimming_pgs Description The max number of trimming PGs Type 32-bit Integer Default 2 osd_backlog_thread_timeout Description The maximum time in seconds before timing out a backlog thread. Type 32-bit Integer Default 60*60*1 osd_default_notify_timeout Description The OSD default notification timeout (in seconds). Type 32-bit Integer Unsigned Default 30 osd_check_for_log_corruption Description Check log files for corruption. Can be computationally expensive. Type Boolean Default false osd_remove_thread_timeout Description The maximum time in seconds before timing out a remove OSD thread. Type 32-bit Integer Default 60*60 osd_command_thread_timeout Description The maximum time in seconds before timing out a command thread. Type 32-bit Integer Default 10*60 osd_command_max_records Description Limits the number of lost objects to return. Type 32-bit Integer Default 256 osd_auto_upgrade_tmap Description Uses tmap for omap on old objects. Type Boolean Default true osd_tmapput_sets_users_tmap Description Uses tmap for debugging only. Type Boolean Default false osd_preserve_trimmed_log Description Preserves trimmed log files, but uses more disk space. Type Boolean Default false osd_recovery_delay_start Description After peering completes, Ceph delays for the specified number of seconds before starting to recover objects. Type Float Default 0 osd_recovery_max_active Description The number of active recovery requests per OSD at one time. More requests will accelerate recovery, but the requests place an increased load on the cluster. 
Type 32-bit Integer Default 3 osd_recovery_max_chunk Description The maximum size of a recovered chunk of data to push. Type 64-bit Integer Unsigned Default 8 << 20 osd_recovery_threads Description The number of threads for recovering data. Type 32-bit Integer Default 1 osd_recovery_thread_timeout Description The maximum time in seconds before timing out a recovery thread. Type 32-bit Integer Default 30 osd_recover_clone_overlap Description Preserves clone overlap during recovery. Should always be set to true . Type Boolean Default true rados_osd_op_timeout Description Number of seconds that RADOS waits for a response from the OSD before returning an error from a RADOS operation. A value of 0 means no limit. Type Double Default 0
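As an illustration of how a few of the options above can be expressed in a Ceph configuration file, the following is a hedged example fragment of an [osd] section. The values are chosen only to demonstrate the syntax; as noted below, Red Hat does not recommend changing the defaults without a specific reason:
[osd]
osd_scrub_begin_hour = 0
osd_scrub_end_hour = 6
osd_scrub_sleep = 0.1
osd_max_backfills = 1
After editing the configuration file, the affected OSD daemons typically need to be restarted for the changed settings to take effect.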
[ "IMPORTANT: Red Hat does not recommend changing the default." ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/configuration_guide/osd-object-storage-daemon-configuration-options_conf
Chapter 7. Automating API lifecycle with 3scale API Management toolbox
Chapter 7. Automating API lifecycle with 3scale API Management toolbox This topic explains the concepts of the API lifecycle with Red Hat 3scale API Management and shows how API providers can automate the deployment stage using Jenkins Continuous Integration/Continuous Deployment (CI/CD) pipelines with 3scale toolbox commands. It describes how to deploy the sample Jenkins CI/CD pipelines, how to create a custom Jenkins pipeline using the 3scale shared library, and how to create a custom pipeline from scratch: Section 7.1, "Overview of the API lifecycle stages" Section 7.2, "Deploying the sample Jenkins CI/CD pipelines" Section 7.3, "Creating pipelines using the 3scale API Management Jenkins shared library" Section 7.4, "Creating pipelines using a Jenkinsfile" 7.1. Overview of the API lifecycle stages The API lifecycle describes all the required activities from when an API is created until it is deprecated. 3scale enables API providers to perform full API lifecycle management. This section explains each stage in the API lifecycle and describes its goal and expected outcome. The following diagram shows the API provider-based stages on the left, and the API consumer-based stages on the right: Note Red Hat currently supports the design, implement, deploy, secure, and manage phases of the API provider cycle, and all phases of the API consumer cycle. 7.1.1. API provider cycle The API provider cycle stages are based on specifying, developing, and deploying your APIs. The following describes the goal and outcome of each stage: Table 7.1. API provider lifecycle stages Stage Goal Outcome 1. Strategy Determine the corporate strategy for the APIs, including goals, resources, target market, timeframe, and make a plan. The corporate strategy is defined with a clear plan to achieve the goals. 2. Design Create the API contract early to break dependencies between projects, gather feedback, and reduce risks and time to market (for example, using Apicurio Studio). A consumer-focused API contract defines the messages that can be exchanged with the API. The API consumers have provided feedback. 3. Mock Further specify the API contract with real-world examples and payloads that can be used by API consumers to start their implementation. A mock API is live and returns real-world examples. The API contract is complete with examples. 4. Test Further specify the API contract with business expectations that can be used to test the developed API. A set of acceptance tests is created. The API documentation is complete with business expectations. 5. Implement Implement the API, using an integration framework such as Red Hat Fuse or a development language of your choice. Ensure that the implementation matches the API contract. The API is implemented. If custom API management features are required, 3scale APIcast policies are also developed. 6. Deploy Automate the API integration, tests, deployment, and management using a CI/CD pipeline with 3scale toolbox. A CI/CD pipeline integrates, tests, deploys, and manages the API to the production environment in an automated way. 7. Secure Ensure that the API is secure (for example, using secure development practices and automated security testing). Security guidelines, processes, and gates are in place. 8. Manage Manage API promotion between environments, versioning, deprecation, and retirement at scale. Processes and tools are in place to manage APIs at scale (for example, semantic versioning to prevent breaking changes to the API). 7.1.2. 
API consumer cycle The API consumer cycle stages are based on promoting, distributing, and refining your APIs for consumption. The following describes the goal and outcome of each stage: Table 7.2. API consumer lifecycle stages Stage Goal Outcome 9. Discover Promote the API to third-party developers, partners, and internal users. A developer portal is live and up-to-date documentation is continuously pushed to this developer portal (for example, using 3scale ActiveDocs). 10. Develop Guide and enable third-party developers, partners, and internal users to develop applications based on the API. The developer portal includes best practices, guides, and recommendations. API developers have access to a mock and test endpoint to develop their software. 11. Consume Handle the growing API consumption and manage the API consumers at scale. Staged application plans are available for consumption, and up-to-date prices and limits are continuously pushed. API consumers can integrate API key or client ID/secret generation from their CI/CD pipeline. 12. Monitor Gather factual and quantified feedback about API health, quality, and developer engagement (for example, a metric for Time to first Hello World!). A monitoring system is in place. Dashboards show KPIs for the API (for example, uptime, requests per minute, latency, and so on). 13. Monetize Drive new incomes at scale (this stage is optional). For example, when targeting a large number of small API consumers, monetization is enabled and consumers are billed based on usage in an automated way. 7.2. Deploying the sample Jenkins CI/CD pipelines API lifecycle automation with 3scale toolbox focuses on the deployment stage of the API lifecycle and enables you to use CI/CD pipelines to automate your API management solution. This topic explains how to deploy the sample Jenkins pipelines that call the 3scale toolbox: Section 7.2.1, "Sample Jenkins CI/CD pipelines" Section 7.2.2, "Setting up your 3scale API Management Hosted environment" Section 7.2.3, "Setting up your 3scale API Management On-premises environment" Section 7.2.4, "Deploying Red Hat single sign-on and Red Hat build of Keycloak for OpenID Connect" Section 7.2.5, "Installing the 3scale API Management toolbox and enabling access" Section 7.2.6, "Deploying the API backends" Section 7.2.7, "Deploying self-managed APIcast instances" Section 7.2.8, "Installing and deploying the sample pipelines" Section 7.2.9, "Limitations of API lifecycle automation with 3scale API Management toolbox" 7.2.1. Sample Jenkins CI/CD pipelines The following samples are provided in the Red Hat Integration repository as examples of how to create and deploy your Jenkins pipelines for API lifecycle automation: Table 7.3. Sample Jenkins shared library pipelines Sample pipeline Target environment Security SaaS - API key 3scale Hosted API key Hybrid - open 3scale Hosted and 3scale On-premises with APIcast self-managed None Hybrid - OpenID Connect 3scale Hosted and 3scale On-premises with APIcast self-managed OpenID Connect (OIDC) Multi-environment 3scale Hosted on development, test and production, with APIcast self-managed API key Semantic versioning 3scale Hosted on development, test and production, with APIcast self-managed API key, none, OIDC These samples use a 3scale Jenkins shared library that calls the 3scale toolbox to demonstrate key API management capabilities. 
After you have performed the setup steps in this topic, you can install the pipelines using the OpenShift templates provided for each of the sample use cases in the Red Hat Integration repository . Important The sample pipelines and applications are provided as examples only. The underlying APIs, CLIs, and other interfaces leveraged by the sample pipelines are fully supported by Red Hat. Any modifications that you make to the pipelines are not directly supported by Red Hat. 7.2.2. Setting up your 3scale API Management Hosted environment Setting up a 3scale Hosted environment is required by all of the sample Jenkins CI/CD pipelines. Note The SaaS - API key , Multi-environment , and Semantic versioning sample pipelines use 3scale Hosted only. The Hybrid - open and Hybrid - OIDC pipelines also use 3scale On-premises. See also Setting up your 3scale On-premises environment . Prerequisites You must have a Linux workstation. You must have a 3scale Hosted environment. You must have an OpenShift 3.11 cluster. OpenShift 4 is currently not supported. For more information about supported configurations, see the Red Hat 3scale API Management Supported Configurations page. Ensure that wildcard routes have been enabled on the OpenShift router, as explained in the OpenShift documentation . Procedure Log in to your 3scale Hosted Admin Portal console. Generate a new access token with write access to the Account Management API. Save the generated access token for later use. For example: USD export SAAS_ACCESS_TOKEN=123...456 Save the name of your 3scale tenant for later use. This is the string before -admin.3scale.net in your Admin Portal URL. For example: USD export SAAS_TENANT=my_username Navigate to Audience > Accounts > Listing in the Admin Portal. Click Developer . Save the Developer Account ID . This is the last part of the URL after /buyers/accounts/ . For example: USD export SAAS_DEVELOPER_ACCOUNT_ID=123...456 7.2.3. Setting up your 3scale API Management On-premises environment Setting up a 3scale on-premises environment is required by the Hybrid - open and Hybrid - OIDC sample Jenkins CI/CD pipelines only. Note If you wish to use these Hybrid sample pipelines, you must set up a 3scale On-premises environment and a 3scale Hosted environment. See also Setting up your 3scale API Management Hosted environment . Prerequisites You must have a Linux workstation. You must have a 3scale on-premises environment. For details on installing 3scale on-premises using a template on OpenShift, see the 3scale API Management installation documentation . You must have an OpenShift 4.x cluster. For more information about supported configurations, see the Red Hat 3scale API Management Supported Configurations page. Ensure that wildcard routes have been enabled on the OpenShift router, as explained in the OpenShift documentation . Procedure Log in to your 3scale On-premises Admin Portal console. Generate a new access token with write access to the Account Management API. Save the generated access token for later use. For example: USD export SAAS_ACCESS_TOKEN=123...456 Save the name of your 3scale tenant for later use: USD export ONPREM_ADMIN_PORTAL_HOSTNAME="USD(oc get route system-provider-admin -o jsonpath='{.spec.host}')" Define your wildcard routes: USD export OPENSHIFT_ROUTER_SUFFIX=app.openshift.test # Replace me! 
USD export APICAST_ONPREM_STAGING_WILDCARD_DOMAIN=onprem-staging.USDOPENSHIFT_ROUTER_SUFFIX USD export APICAST_ONPREM_PRODUCTION_WILDCARD_DOMAIN=onprem-production.USDOPENSHIFT_ROUTER_SUFFIX Note You must set the value of OPENSHIFT_ROUTER_SUFFIX to the suffix of your OpenShift router (for example, app.openshift.test ). Add the wildcard routes to your existing 3scale on-premises instance: USD oc create route edge apicast-wildcard-staging --service=apicast-staging --hostname="wildcard.USDAPICAST_ONPREM_STAGING_WILDCARD_DOMAIN" --insecure-policy=Allow --wildcard-policy=Subdomain USD oc create route edge apicast-wildcard-production --service=apicast-production --hostname="wildcard.USDAPICAST_ONPREM_PRODUCTION_WILDCARD_DOMAIN" --insecure-policy=Allow --wildcard-policy=Subdomain Navigate to Audience > Accounts > Listing in the Admin Portal. Click Developer . Save the Developer Account ID . This is the last part of the URL after /buyers/accounts/ : USD export ONPREM_DEVELOPER_ACCOUNT_ID=5 7.2.4. Deploying Red Hat single sign-on and Red Hat build of Keycloak for OpenID Connect If you are using the Hybrid - OpenID Connect (OIDC) or Semantic versioning sample pipelines, perform the steps in this section to deploy Red Hat single sign-on (SSO) or Red Hat build of Keycloak with 3scale. This is required for OIDC authentication, which is used in both samples. Procedure Deploy Red Hat single sign-on 7.6 as explained in the Red Hat single sign-on documentation or Red Hat build of Keycloak as explained in the Red Hat build of Keycloak . The following example commands provide a short summary for the SSO procedure: USD oc replace -n openshift --force -f https://raw.githubusercontent.com/jboss-container-images/redhat-sso-7-openshift-image/sso73-dev/templates/sso73-image-stream.json USD oc replace -n openshift --force -f https://raw.githubusercontent.com/jboss-container-images/redhat-sso-7-openshift-image/sso73-dev/templates/sso73-x509-postgresql-persistent.json USD oc -n openshift import-image redhat-sso73-openshift:1.0 USD oc policy add-role-to-user view system:serviceaccount:USD(oc project -q):default USD oc new-app --template=sso73-x509-postgresql-persistent --name=sso -p DB_USERNAME=sso -p SSO_ADMIN_USERNAME=admin -p DB_DATABASE=sso Save the host name of your Red Hat single sign-on installation for later use: USD export SSO_HOSTNAME="USD(oc get route sso -o jsonpath='{.spec.host}')" Configure Red Hat single sign-on for 3scale as explained in the 3scale API Management Developer Portal documentation . Save the realm name, client ID, and client secret for later use: USD export REALM=3scale USD export CLIENT_ID=3scale-admin USD export CLIENT_SECRET=123...456 7.2.5. Installing the 3scale API Management toolbox and enabling access This section describes how to install the toolbox, create your remote 3scale instance, and provision the secret used to access the Admin Portal. Procedure Install the 3scale toolbox locally as explained in The 3scale API Management toolbox . Run the appropriate toolbox command to create your 3scale remote instance: 3scale Hosted USD 3scale remote add 3scale-saas "https://USDSAAS_ACCESS_TOKEN@USDSAAS_TENANT-admin.3scale.net/" 3scale On-premises USD 3scale remote add 3scale-onprem "https://USDONPREM_ACCESS_TOKEN@USDONPREM_ADMIN_PORTAL_HOSTNAME/" Run the following OpenShift command to provision the secret containing your 3scale Admin Portal and access token: USD oc create secret generic 3scale-toolbox -n "USDTOOLBOX_NAMESPACE" --from-file="USDHOME/.3scalerc.yaml" 7.2.6. 
Deploying the API backends This section shows how to deploy the example API backends provided with the sample pipelines. You can substitute your own API backends as needed when creating and deploying your own pipelines. Procedure Deploy the example Beer Catalog API backend for use with the following samples: SaaS - API key Hybrid - open Hybrid - OIDC USD oc new-app -n "USDTOOLBOX_NAMESPACE" -i openshift/redhat-openjdk18-openshift:1.4 https://github.com/microcks/api-lifecycle.git --context-dir=/beer-catalog-demo/api-implementation --name=beer-catalog USD oc expose -n "USDTOOLBOX_NAMESPACE" svc/beer-catalog Save the Beer Catalog API host name for later use: USD export BEER_CATALOG_HOSTNAME="USD(oc get route -n "USDTOOLBOX_NAMESPACE" beer-catalog -o jsonpath='{.spec.host}')" Deploy the example Red Hat Event API backend for use with the following samples: Multi-environment Semantic versioning USD oc new-app -n "USDTOOLBOX_NAMESPACE" -i openshift/nodejs:10 'https://github.com/nmasse-itix/rhte-api.git#085b015' --name=event-api USD oc expose -n "USDTOOLBOX_NAMESPACE" svc/event-api Save the Event API host name for later use: USD export EVENT_API_HOSTNAME="USD(oc get route -n "USDTOOLBOX_NAMESPACE" event-api -o jsonpath='{.spec.host}')" 7.2.7. Deploying self-managed APIcast instances This section is for use with APIcast self-managed instances in 3scale Hosted environments. It applies to all of the sample pipelines except SaaS - API key . Procedure Define your wildcard routes: USD export APICAST_SELF_MANAGED_STAGING_WILDCARD_DOMAIN=saas-staging.USDOPENSHIFT_ROUTER_SUFFIX USD export APICAST_SELF_MANAGED_PRODUCTION_WILDCARD_DOMAIN=saas-production.USDOPENSHIFT_ROUTER_SUFFIX Deploy the APIcast self-managed instances in your project: USD oc create secret generic 3scale-tenant --from-literal=password=https://USDSAAS_ACCESS_TOKEN@USDSAAS_TENANT-admin.3scale.net USD oc create -f https://raw.githubusercontent.com/3scale/apicast/v3.5.0/openshift/apicast-template.yml USD oc new-app --template=3scale-gateway --name=apicast-staging -p CONFIGURATION_URL_SECRET=3scale-tenant -p CONFIGURATION_CACHE=0 -p RESPONSE_CODES=true -p LOG_LEVEL=info -p CONFIGURATION_LOADER=lazy -p APICAST_NAME=apicast-staging -p DEPLOYMENT_ENVIRONMENT=sandbox -p IMAGE_NAME=registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.15 USD oc new-app --template=3scale-gateway --name=apicast-production -p CONFIGURATION_URL_SECRET=3scale-tenant -p CONFIGURATION_CACHE=60 -p RESPONSE_CODES=true -p LOG_LEVEL=info -p CONFIGURATION_LOADER=boot -p APICAST_NAME=apicast-production -p DEPLOYMENT_ENVIRONMENT=production -p IMAGE_NAME=registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.15 USD oc scale deployment/apicast-staging --replicas=1 USD oc scale deployment/apicast-production --replicas=1 USD oc create route edge apicast-staging --service=apicast-staging --hostname="wildcard.USDAPICAST_SELF_MANAGED_STAGING_WILDCARD_DOMAIN" --insecure-policy=Allow --wildcard-policy=Subdomain USD oc create route edge apicast-production --service=apicast-production --hostname="wildcard.USDAPICAST_SELF_MANAGED_PRODUCTION_WILDCARD_DOMAIN" --insecure-policy=Allow --wildcard-policy=Subdomain 7.2.8. Installing and deploying the sample pipelines After you have set up the required environments, you can install and deploy the sample pipelines using the OpenShift templates provided for each of the sample use cases in the Red Hat Integration repository . For example, this section shows the SaaS - API Key sample only. 
Procedure Use the provided OpenShift template to install the Jenkins pipeline: USD oc process -f saas-usecase-apikey/setup.yaml \ -p DEVELOPER_ACCOUNT_ID="USDSAAS_DEVELOPER_ACCOUNT_ID" \ -p PRIVATE_BASE_URL="http://USDBEER_CATALOG_HOSTNAME" \ -p NAMESPACE="USDTOOLBOX_NAMESPACE" |oc create -f - Deploy the sample as follows: USD oc start-build saas-usecase-apikey Additional resource Sample use cases in the Red Hat Integration repository 7.2.9. Limitations of API lifecycle automation with 3scale API Management toolbox The following limitations apply in this release: OpenShift support The sample pipelines are supported on OpenShift 3.11 only. OpenShift 4 is currently not supported. For more information about supported configurations, see the Red Hat 3scale API Management Supported Configurations page. Updating applications You can use the 3scale application apply toolbox command for applications to both create and update applications. Create commands support account, plan, service, and application key. Update commands do not support changes to account, plan, or service. If changes are passed, the pipelines will be triggered, no errors will be shown, but those fields will not be updated. Copying services When using the 3scale copy service toolbox command to copy a service with custom policies, you must copy the custom policies first and separately. 7.3. Creating pipelines using the 3scale API Management Jenkins shared library This section provides best practices for creating a custom Jenkins pipeline that uses the 3scale toolbox. It explains how to write a Jenkins pipeline in Groovy that uses the 3scale Jenkins shared library to call the toolbox based on an example application. For more details, see Jenkins shared libraries . Important Red Hat supports the Jenkins pipeline samples provided in the Red Hat Integration repository. Any modifications made to these pipelines are not directly supported by Red Hat. Custom pipelines that you create for your environment are not supported. Prerequisites Deploying the sample Jenkins CI/CD pipelines . You must have an OpenAPI specification file for your API. For example, you can generate this using Apicurio Studio . Procedure Add the following to the beginning of your Jenkins pipeline to reference the 3scale shared library from your pipeline: #!groovy library identifier: '3scale-toolbox-jenkins@master', retriever: modernSCM([USDclass: 'GitSCMSource', remote: 'https://github.com/rh-integration/3scale-toolbox-jenkins.git']) Declare a global variable to hold the ThreescaleService object so that you can use it from the different stages of your pipeline. def service = null Create the ThreescaleService with all the relevant information: stage("Prepare") { service = toolbox.prepareThreescaleService( openapi: [ filename: "swagger.json" ], environment: [ baseSystemName: "my_service" ], toolbox: [ openshiftProject: "toolbox", destination: "3scale-tenant", secretName: "3scale-toolbox" ], service: [:], applications: [ [ name: "my-test-app", description: "This is used for tests", plan: "test", account: "<CHANGE_ME>" ] ], applicationPlans: [ [ systemName: "test", name: "Test", defaultPlan: true, published: true ], [ systemName: "silver", name: "Silver" ], [ artefactFile: "https://raw.githubusercontent.com/my_username/API-Lifecycle-Mockup/master/testcase-01/plan.yaml"], ] ) echo "toolbox version = " + service.toolbox.getToolboxVersion() } openapi.filename is the path to the file containing the OpenAPI specification. 
environment.baseSystemName is used to compute the final system_name , based on environment.environmentName and the API major version from the OpenAPI specification info.version . toolbox.openshiftProject is the OpenShift project in which Kubernetes jobs will be created. toolbox.secretName is the name of the Kubernetes secret containing the 3scale toolbox configuration file, as shown in Installing the 3scale API Management toolbox and enabling access . toolbox.destination is the name of the 3scale toolbox remote instance. applicationPlans is a list of application plans to create by using a .yaml file or by providing application plan property details. Add a pipeline stage to provision the service in 3scale: stage("Import OpenAPI") { service.importOpenAPI() echo "Service with system_name USD{service.environment.targetSystemName} created !" } Add a stage to create the application plans: stage("Create an Application Plan") { service.applyApplicationPlans() } Add a global variable and a stage to create the test application: stage("Create an Application") { service.applyApplication() } Add a stage to run your integration tests. When using APIcast Hosted instances, you must fetch the proxy definition to extract the staging public URL: stage("Run integration tests") { def proxy = service.readProxy("sandbox") sh """set -e +x curl -f -w "ListBeers: %{http_code}\n" -o /dev/null -s USD{proxy.sandbox_endpoint}/api/beer -H 'api-key: USD{service.applications[0].userkey}' curl -f -w "GetBeer: %{http_code}\n" -o /dev/null -s USD{proxy.sandbox_endpoint}/api/beer/Weissbier -H 'api-key: USD{service.applications[0].userkey}' curl -f -w "FindBeersByStatus: %{http_code}\n" -o /dev/null -s USD{proxy.sandbox_endpoint}/api/beer/findByStatus/ available -H 'api-key: USD{service.applications[0].userkey}' """ } Add a stage to promote your API to production: stage("Promote to production") { service.promoteToProduction() } Additional resources Creating pipelines using a Jenkinsfile The 3scale API Management toolbox 7.4. Creating pipelines using a Jenkinsfile This section provides best practices for writing a custom Jenkinsfile from scratch in Groovy that uses the 3scale toolbox. Important Red Hat supports the Jenkins pipeline samples provided in the Red Hat Integration repository. Any modifications made to these pipelines are not directly supported by Red Hat. Custom pipelines that you create for your environment are not supported. This section is provided for reference only. Prerequisites Deploying the sample Jenkins CI/CD pipelines . You must have an OpenAPI specification file for your API. For example, you can generate this using Apicurio Studio . Procedure Write a utility function to call the 3scale toolbox. 
The following creates a Kubernetes job that runs the 3scale toolbox: #!groovy def runToolbox(args) { def kubernetesJob = [ "apiVersion": "batch/v1", "kind": "Job", "metadata": [ "name": "toolbox" ], "spec": [ "backoffLimit": 0, "activeDeadlineSeconds": 300, "template": [ "spec": [ "restartPolicy": "Never", "containers": [ [ "name": "job", "image": "registry.redhat.io/3scale-amp2/toolbox-rhel9:3scale2.15", "imagePullPolicy": "Always", "args": [ "3scale", "version" ], "env": [ [ "name": "HOME", "value": "/config" ] ], "volumeMounts": [ [ "mountPath": "/config", "name": "toolbox-config" ], [ "mountPath": "/artifacts", "name": "artifacts" ] ] ] ], "volumes": [ [ "name": "toolbox-config", "secret": [ "secretName": "3scale-toolbox" ] ], [ "name": "artifacts", "configMap": [ "name": "openapi" ] ] ] ] ] ] ] kubernetesJob.spec.template.spec.containers[0].args = args sh "rm -f -- job.yaml" writeYaml file: "job.yaml", data: kubernetesJob sh """set -e oc delete job toolbox --ignore-not-found sleep 2 oc create -f job.yaml sleep 20 # Adjust the sleep duration to your server velocity """ def logs = sh(script: "set -e; oc logs -f job/toolbox", returnStdout: true) echo logs return logs } Kubernetes object template This function uses a Kubernetes object template to run the 3scale toolbox, which you can adjust to your needs. It sets the 3scale toolbox CLI arguments and writes the resulting Kubernetes job definition to a YAML file, cleans up any run of the toolbox, creates the Kubernetes job, and waits: You can adjust the wait duration to your server velocity to match the time that a pod needs to transition between the Created and the Running state. You can refine this step using a polling loop. The OpenAPI specification file is fetched from a ConfigMap named openapi . The 3scale Admin Portal hostname and access token are fetched from a secret named 3scale-toolbox , as shown in Installing the 3scale API Management toolbox and enabling access . The ConfigMap will be created by the pipeline in step 3. However, the secret was already provisioned outside the pipeline and is subject to Role-Based Access Control (RBAC) for enhanced security. Define the global environment variables to use with 3scale toolbox in your Jenkins pipeline stages. For example: 3scale Hosted def targetSystemName = "saas-apikey-usecase" def targetInstance = "3scale-saas" def privateBaseURL = "http://echo-api.3scale.net" def testUserKey = "abcdef1234567890" def developerAccountId = "john" 3scale On-premises When using self-managed APIcast or an on-premises installation of 3scale, you must declare two more variables: def publicStagingBaseURL = "http://my-staging-api.example.test" def publicProductionBaseURL = "http://my-production-api.example.test" The variables are described as follows: targetSystemName : The name of the service to be created. targetInstance : This matches the name of the 3scale remote instance created in Installing the 3scale API Management toolbox and enabling access . privateBaseURL : The endpoint host of your API backend. testUserKey : The user API key used to run the integration tests. It can be hardcoded as shown or generated from an HMAC function. developerAccountId : The ID of the target account in which the test application will be created. publicStagingBaseURL : The public staging base URL of the service to be created. publicProductionBaseURL : The public production base URL of the service to be created. 
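Referring back to the runToolbox utility above, where a fixed sleep is used before reading the job logs: one possible refinement of that wait step, assuming an oc client recent enough to include the wait subcommand, is to block on the job condition instead of sleeping, for example:
oc wait --for=condition=complete job/toolbox --timeout=300s
This is a sketch only; a failed job never reaches the complete condition, so a production pipeline would also need to watch for the failed condition or rely on the timeout before collecting the logs.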
Add a pipeline stage to fetch the OpenAPI specification file and provision it as a ConfigMap on OpenShift as follows: node() { stage("Fetch OpenAPI") { sh """set -e curl -sfk -o swagger.json https://raw.githubusercontent.com/microcks/api-lifecycle/master/beer-catalog-demo/api-contracts/beer-catalog-api-swagger.json oc delete configmap openapi --ignore-not-found oc create configmap openapi --from-file="swagger.json" """ } Add a pipeline stage that uses the 3scale toolbox to import the API into 3scale: 3scale Hosted stage("Import OpenAPI") { runToolbox([ "3scale", "import", "openapi", "-d", targetInstance, "/artifacts/swagger.json", "--override-private-base-url=USD{privateBaseURL}", "-t", targetSystemName ]) } 3scale On-premises When using self-managed APIcast or an on-premises installation of 3scale, you must also specify the options for the public staging and production base URLs: stage("Import OpenAPI") { runToolbox([ "3scale", "import", "openapi", "-d", targetInstance, "/artifacts/swagger.json", "--override-private-base-url=USD{privateBaseURL}", "-t", targetSystemName, "--production-public-base-url=USD{publicProductionBaseURL}", "--staging-public-base-url=USD{publicStagingBaseURL}" ]) } Add pipeline stages that use the toolbox to create a 3scale application plan and an application: stage("Create an Application Plan") { runToolbox([ "3scale", "application-plan", "apply", targetInstance, targetSystemName, "test", "-n", "Test Plan", "--default" ]) } stage("Create an Application") { runToolbox([ "3scale", "application", "apply", targetInstance, testUserKey, "--account=USD{developerAccountId}", "--name=Test Application", "--description=Created by Jenkins", "--plan=test", "--service=USD{targetSystemName}" ]) } stage("Run integration tests") { def proxyDefinition = runToolbox([ "3scale", "proxy", "show", targetInstance, targetSystemName, "sandbox" ]) def proxy = readJSON text: proxyDefinition proxy = proxy.content.proxy sh """set -e echo "Public Staging Base URL is USD{proxy.sandbox_endpoint}" echo "userkey is USD{testUserKey}" curl -vfk USD{proxy.sandbox_endpoint}/beer -H 'api-key: USD{testUserKey}' curl -vfk USD{proxy.sandbox_endpoint}/beer/Weissbier -H 'api-key: USD{testUserKey}' curl -vfk USD{proxy.sandbox_endpoint}/beer/findByStatus/available -H 'api-key: USD{testUserKey}' """ } Add a stage that uses the toolbox to promote the API to your production environment. stage("Promote to production") { runToolbox([ "3scale", "proxy", "promote", targetInstance, targetSystemName ]) } Additional resources Creating pipelines using a Jenkinsfile The 3scale API Management toolbox
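The runToolbox function above waits a fixed 20 seconds before reading the job logs, and the surrounding text suggests refining that wait with a polling loop. A minimal sketch of such a refinement, assuming the job keeps the name toolbox and that the oc session already targets the right project; the two commands below would replace the fixed sleep and the separate log-reading step inside the sh steps:
# Wait until the toolbox job reports the Complete condition instead of sleeping a fixed time.
# The command exits non-zero after the timeout if the job never completes.
oc wait --for=condition=complete --timeout=300s job/toolbox
# Read the logs once the job has finished.
oc logs job/toolbox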
[ "export SAAS_ACCESS_TOKEN=123...456", "export SAAS_TENANT=my_username", "export SAAS_DEVELOPER_ACCOUNT_ID=123...456", "export SAAS_ACCESS_TOKEN=123...456", "export ONPREM_ADMIN_PORTAL_HOSTNAME=\"USD(oc get route system-provider-admin -o jsonpath='{.spec.host}')\"", "export OPENSHIFT_ROUTER_SUFFIX=app.openshift.test # Replace me! export APICAST_ONPREM_STAGING_WILDCARD_DOMAIN=onprem-staging.USDOPENSHIFT_ROUTER_SUFFIX export APICAST_ONPREM_PRODUCTION_WILDCARD_DOMAIN=onprem-production.USDOPENSHIFT_ROUTER_SUFFIX", "oc create route edge apicast-wildcard-staging --service=apicast-staging --hostname=\"wildcard.USDAPICAST_ONPREM_STAGING_WILDCARD_DOMAIN\" --insecure-policy=Allow --wildcard-policy=Subdomain oc create route edge apicast-wildcard-production --service=apicast-production --hostname=\"wildcard.USDAPICAST_ONPREM_PRODUCTION_WILDCARD_DOMAIN\" --insecure-policy=Allow --wildcard-policy=Subdomain", "export ONPREM_DEVELOPER_ACCOUNT_ID=5", "oc replace -n openshift --force -f https://raw.githubusercontent.com/jboss-container-images/redhat-sso-7-openshift-image/sso73-dev/templates/sso73-image-stream.json oc replace -n openshift --force -f https://raw.githubusercontent.com/jboss-container-images/redhat-sso-7-openshift-image/sso73-dev/templates/sso73-x509-postgresql-persistent.json oc -n openshift import-image redhat-sso73-openshift:1.0 oc policy add-role-to-user view system:serviceaccount:USD(oc project -q):default oc new-app --template=sso73-x509-postgresql-persistent --name=sso -p DB_USERNAME=sso -p SSO_ADMIN_USERNAME=admin -p DB_DATABASE=sso", "export SSO_HOSTNAME=\"USD(oc get route sso -o jsonpath='{.spec.host}')\"", "export REALM=3scale export CLIENT_ID=3scale-admin export CLIENT_SECRET=123...456", "3scale remote add 3scale-saas \"https://USDSAAS_ACCESS_TOKEN@USDSAAS_TENANT-admin.3scale.net/\"", "3scale remote add 3scale-onprem \"https://USDONPREM_ACCESS_TOKEN@USDONPREM_ADMIN_PORTAL_HOSTNAME/\"", "oc create secret generic 3scale-toolbox -n \"USDTOOLBOX_NAMESPACE\" --from-file=\"USDHOME/.3scalerc.yaml\"", "oc new-app -n \"USDTOOLBOX_NAMESPACE\" -i openshift/redhat-openjdk18-openshift:1.4 https://github.com/microcks/api-lifecycle.git --context-dir=/beer-catalog-demo/api-implementation --name=beer-catalog oc expose -n \"USDTOOLBOX_NAMESPACE\" svc/beer-catalog", "export BEER_CATALOG_HOSTNAME=\"USD(oc get route -n \"USDTOOLBOX_NAMESPACE\" beer-catalog -o jsonpath='{.spec.host}')\"", "oc new-app -n \"USDTOOLBOX_NAMESPACE\" -i openshift/nodejs:10 'https://github.com/nmasse-itix/rhte-api.git#085b015' --name=event-api oc expose -n \"USDTOOLBOX_NAMESPACE\" svc/event-api", "export EVENT_API_HOSTNAME=\"USD(oc get route -n \"USDTOOLBOX_NAMESPACE\" event-api -o jsonpath='{.spec.host}')\"", "export APICAST_SELF_MANAGED_STAGING_WILDCARD_DOMAIN=saas-staging.USDOPENSHIFT_ROUTER_SUFFIX export APICAST_SELF_MANAGED_PRODUCTION_WILDCARD_DOMAIN=saas-production.USDOPENSHIFT_ROUTER_SUFFIX", "oc create secret generic 3scale-tenant --from-literal=password=https://USDSAAS_ACCESS_TOKEN@USDSAAS_TENANT-admin.3scale.net oc create -f https://raw.githubusercontent.com/3scale/apicast/v3.5.0/openshift/apicast-template.yml oc new-app --template=3scale-gateway --name=apicast-staging -p CONFIGURATION_URL_SECRET=3scale-tenant -p CONFIGURATION_CACHE=0 -p RESPONSE_CODES=true -p LOG_LEVEL=info -p CONFIGURATION_LOADER=lazy -p APICAST_NAME=apicast-staging -p DEPLOYMENT_ENVIRONMENT=sandbox -p IMAGE_NAME=registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.15 oc new-app --template=3scale-gateway --name=apicast-production -p 
CONFIGURATION_URL_SECRET=3scale-tenant -p CONFIGURATION_CACHE=60 -p RESPONSE_CODES=true -p LOG_LEVEL=info -p CONFIGURATION_LOADER=boot -p APICAST_NAME=apicast-production -p DEPLOYMENT_ENVIRONMENT=production -p IMAGE_NAME=registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.15 oc scale deployment/apicast-staging --replicas=1 oc scale deployment/apicast-production --replicas=1 oc create route edge apicast-staging --service=apicast-staging --hostname=\"wildcard.USDAPICAST_SELF_MANAGED_STAGING_WILDCARD_DOMAIN\" --insecure-policy=Allow --wildcard-policy=Subdomain oc create route edge apicast-production --service=apicast-production --hostname=\"wildcard.USDAPICAST_SELF_MANAGED_PRODUCTION_WILDCARD_DOMAIN\" --insecure-policy=Allow --wildcard-policy=Subdomain", "oc process -f saas-usecase-apikey/setup.yaml -p DEVELOPER_ACCOUNT_ID=\"USDSAAS_DEVELOPER_ACCOUNT_ID\" -p PRIVATE_BASE_URL=\"http://USDBEER_CATALOG_HOSTNAME\" -p NAMESPACE=\"USDTOOLBOX_NAMESPACE\" |oc create -f -", "oc start-build saas-usecase-apikey", "#!groovy library identifier: '3scale-toolbox-jenkins@master', retriever: modernSCM([USDclass: 'GitSCMSource', remote: 'https://github.com/rh-integration/3scale-toolbox-jenkins.git'])", "def service = null", "stage(\"Prepare\") { service = toolbox.prepareThreescaleService( openapi: [ filename: \"swagger.json\" ], environment: [ baseSystemName: \"my_service\" ], toolbox: [ openshiftProject: \"toolbox\", destination: \"3scale-tenant\", secretName: \"3scale-toolbox\" ], service: [:], applications: [ [ name: \"my-test-app\", description: \"This is used for tests\", plan: \"test\", account: \"<CHANGE_ME>\" ] ], applicationPlans: [ [ systemName: \"test\", name: \"Test\", defaultPlan: true, published: true ], [ systemName: \"silver\", name: \"Silver\" ], [ artefactFile: \"https://raw.githubusercontent.com/my_username/API-Lifecycle-Mockup/master/testcase-01/plan.yaml\"], ] ) echo \"toolbox version = \" + service.toolbox.getToolboxVersion() }", "stage(\"Import OpenAPI\") { service.importOpenAPI() echo \"Service with system_name USD{service.environment.targetSystemName} created !\" }", "stage(\"Create an Application Plan\") { service.applyApplicationPlans() }", "stage(\"Create an Application\") { service.applyApplication() }", "stage(\"Run integration tests\") { def proxy = service.readProxy(\"sandbox\") sh \"\"\"set -e +x curl -f -w \"ListBeers: %{http_code}\\n\" -o /dev/null -s USD{proxy.sandbox_endpoint}/api/beer -H 'api-key: USD{service.applications[0].userkey}' curl -f -w \"GetBeer: %{http_code}\\n\" -o /dev/null -s USD{proxy.sandbox_endpoint}/api/beer/Weissbier -H 'api-key: USD{service.applications[0].userkey}' curl -f -w \"FindBeersByStatus: %{http_code}\\n\" -o /dev/null -s USD{proxy.sandbox_endpoint}/api/beer/findByStatus/ available -H 'api-key: USD{service.applications[0].userkey}' \"\"\" }", "stage(\"Promote to production\") { service.promoteToProduction() }", "#!groovy def runToolbox(args) { def kubernetesJob = [ \"apiVersion\": \"batch/v1\", \"kind\": \"Job\", \"metadata\": [ \"name\": \"toolbox\" ], \"spec\": [ \"backoffLimit\": 0, \"activeDeadlineSeconds\": 300, \"template\": [ \"spec\": [ \"restartPolicy\": \"Never\", \"containers\": [ [ \"name\": \"job\", \"image\": \"registry.redhat.io/3scale-amp2/toolbox-rhel9:3scale2.15\", \"imagePullPolicy\": \"Always\", \"args\": [ \"3scale\", \"version\" ], \"env\": [ [ \"name\": \"HOME\", \"value\": \"/config\" ] ], \"volumeMounts\": [ [ \"mountPath\": \"/config\", \"name\": \"toolbox-config\" ], [ \"mountPath\": \"/artifacts\", \"name\": 
\"artifacts\" ] ] ] ], \"volumes\": [ [ \"name\": \"toolbox-config\", \"secret\": [ \"secretName\": \"3scale-toolbox\" ] ], [ \"name\": \"artifacts\", \"configMap\": [ \"name\": \"openapi\" ] ] ] ] ] ] ] kubernetesJob.spec.template.spec.containers[0].args = args sh \"rm -f -- job.yaml\" writeYaml file: \"job.yaml\", data: kubernetesJob sh \"\"\"set -e oc delete job toolbox --ignore-not-found sleep 2 oc create -f job.yaml sleep 20 # Adjust the sleep duration to your server velocity \"\"\" def logs = sh(script: \"set -e; oc logs -f job/toolbox\", returnStdout: true) echo logs return logs }", "def targetSystemName = \"saas-apikey-usecase\" def targetInstance = \"3scale-saas\" def privateBaseURL = \"http://echo-api.3scale.net\" def testUserKey = \"abcdef1234567890\" def developerAccountId = \"john\"", "def publicStagingBaseURL = \"http://my-staging-api.example.test\" def publicProductionBaseURL = \"http://my-production-api.example.test\"", "node() { stage(\"Fetch OpenAPI\") { sh \"\"\"set -e curl -sfk -o swagger.json https://raw.githubusercontent.com/microcks/api-lifecycle/master/beer-catalog-demo/api-contracts/beer-catalog-api-swagger.json oc delete configmap openapi --ignore-not-found oc create configmap openapi --from-file=\"swagger.json\" \"\"\" }", "stage(\"Import OpenAPI\") { runToolbox([ \"3scale\", \"import\", \"openapi\", \"-d\", targetInstance, \"/artifacts/swagger.json\", \"--override-private-base-url=USD{privateBaseURL}\", \"-t\", targetSystemName ]) }", "stage(\"Import OpenAPI\") { runToolbox([ \"3scale\", \"import\", \"openapi\", \"-d\", targetInstance, \"/artifacts/swagger.json\", \"--override-private-base-url=USD{privateBaseURL}\", \"-t\", targetSystemName, \"--production-public-base-url=USD{publicProductionBaseURL}\", \"--staging-public-base-url=USD{publicStagingBaseURL}\" ]) }", "stage(\"Create an Application Plan\") { runToolbox([ \"3scale\", \"application-plan\", \"apply\", targetInstance, targetSystemName, \"test\", \"-n\", \"Test Plan\", \"--default\" ]) } stage(\"Create an Application\") { runToolbox([ \"3scale\", \"application\", \"apply\", targetInstance, testUserKey, \"--account=USD{developerAccountId}\", \"--name=Test Application\", \"--description=Created by Jenkins\", \"--plan=test\", \"--service=USD{targetSystemName}\" ]) }", "stage(\"Run integration tests\") { def proxyDefinition = runToolbox([ \"3scale\", \"proxy\", \"show\", targetInstance, targetSystemName, \"sandbox\" ]) def proxy = readJSON text: proxyDefinition proxy = proxy.content.proxy sh \"\"\"set -e echo \"Public Staging Base URL is USD{proxy.sandbox_endpoint}\" echo \"userkey is USD{testUserKey}\" curl -vfk USD{proxy.sandbox_endpoint}/beer -H 'api-key: USD{testUserKey}' curl -vfk USD{proxy.sandbox_endpoint}/beer/Weissbier -H 'api-key: USD{testUserKey}' curl -vfk USD{proxy.sandbox_endpoint}/beer/findByStatus/available -H 'api-key: USD{testUserKey}' \"\"\" }", "stage(\"Promote to production\") { runToolbox([ \"3scale\", \"proxy\", \"promote\", targetInstance, targetSystemName ]) }" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/operating_red_hat_3scale_api_management/api-lifecyle-toolbox-3scale
Chapter 6. Disabling disaster recovery for a disaster recovery enabled application
Chapter 6. Disabling disaster recovery for a disaster recovery enabled application This section guides you through disabling disaster recovery (DR) for an application deployed using Red Hat Advanced Cluster Management (RHACM). Disabling DR managed applications . Disabling DR discovered applications . 6.1. Disabling DR managed applications On the Hub cluster, navigate to All Clusters Applications . In the Overview tab, at the end of the protected application row, select Manage disaster recovery from the action menu. Click Remove disaster recovery . Click Confirm remove . Warning Your application will lose disaster recovery protection, preventing volume synchronization (replication) between clusters. Note The application continues to be visible in the Applications Overview menu but the Data policy is removed. 6.2. Disabling DR discovered applications In the RHACM console, navigate to the All Clusters Data Services Protected applications tab. At the end of the application row, click the Actions menu and choose Remove disaster recovery . Click Remove in the prompt. Warning Your application will lose disaster recovery protection, preventing volume synchronization (replication) between clusters. Note The application is no longer listed in the Protected applications tab after DR is removed.
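Both procedures are console driven. As an optional command-line cross-check — and only as a sketch, assuming your hub runs the OpenShift DR operators that represent protected workloads as DRPlacementControl resources (short name drpc) — you can list those resources on the Hub cluster and confirm that the entry for your application is gone after removal:
# List DRPlacementControl resources in all namespaces on the Hub cluster.
# After disaster recovery is removed, the application should no longer appear here.
oc get drpc -A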
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/disabling_disaster_recovery_for_a_disaster_recovery_enabled_application
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Click Create to create the issue and route it to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/configuring_red_hat_build_of_openjdk_17_on_rhel/proc-providing-feedback-on-redhat-documentation
Chapter 4. Configuring a cluster-wide proxy
Chapter 4. Configuring a cluster-wide proxy If you are using an existing Virtual Private Cloud (VPC), you can configure a cluster-wide proxy during a Red Hat OpenShift Service on AWS (ROSA) cluster installation or after the cluster is installed. When you enable a proxy, the core cluster components are denied direct access to the internet, but the proxy does not affect user workloads. Note Only cluster system egress traffic is proxied, including calls to the cloud provider API. If you use a cluster-wide proxy, you are responsible for maintaining the availability of the proxy to the cluster. If the proxy becomes unavailable, then it might impact the health and supportability of the cluster. 4.1. Prerequisites for configuring a cluster-wide proxy To configure a cluster-wide proxy, you must meet the following requirements. These requirements are valid when you configure a proxy during installation or postinstallation. General requirements You are the cluster owner. Your account has sufficient privileges. You have an existing Virtual Private Cloud (VPC) for your cluster. The proxy can access the VPC for the cluster and the private subnets of the VPC. The proxy is also accessible from the VPC for the cluster and from the private subnets of the VPC. You have added the following endpoints to your VPC endpoint: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works at the container level and not at the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not enough. Important When using a cluster-wide proxy, you must configure the s3.<aws_region>.amazonaws.com endpoint as type Gateway . Network requirements If your proxy re-encrypts egress traffic, you must create exclusions to the domain and port combinations. The following table offers guidance into these exceptions. Your proxy must exclude re-encrypting the following OpenShift URLs: Address Protocol/Port Function observatorium-mst.api.openshift.com https/443 Required. Used for Managed OpenShift-specific telemetry. sso.redhat.com https/443 The https://cloud.redhat.com/openshift site uses authentication from sso.redhat.com to download the cluster pull secret and use Red Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, and chargeback reporting. Additional resources For the installation prerequisites for ROSA clusters that use the AWS Security Token Service (STS), see AWS prerequisites for ROSA with STS . For the installation prerequisites for ROSA clusters that do not use STS, see AWS prerequisites for ROSA . 4.2. Responsibilities for additional trust bundles If you supply an additional trust bundle, you are responsible for the following requirements: Ensuring that the contents of the additional trust bundle are valid Ensuring that the certificates, including intermediary certificates, contained in the additional trust bundle have not expired Tracking the expiry and performing any necessary renewals for certificates contained in the additional trust bundle Updating the cluster configuration with the updated additional trust bundle 4.3. Configuring a proxy during installation You can configure an HTTP or HTTPS proxy when you install a Red Hat OpenShift Service on AWS (ROSA) cluster into an existing Virtual Private Cloud (VPC). 
You can configure the proxy during installation by using Red Hat OpenShift Cluster Manager or the ROSA CLI ( rosa ). 4.3.1. Configuring a proxy during installation using OpenShift Cluster Manager If you are installing a Red Hat OpenShift Service on AWS (ROSA) cluster into an existing Virtual Private Cloud (VPC), you can use Red Hat OpenShift Cluster Manager to enable a cluster-wide HTTP or HTTPS proxy during installation. Prior to the installation, you must verify that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC. For detailed steps to configure a cluster-wide proxy during installation by using OpenShift Cluster Manager, see Creating a cluster with customizations by using OpenShift Cluster Manager . 4.3.2. Configuring a proxy during installation using the CLI If you are installing a Red Hat OpenShift Service on AWS (ROSA) cluster into an existing Virtual Private Cloud (VPC), you can use the ROSA CLI ( rosa ) to enable a cluster-wide HTTP or HTTPS proxy during installation. The following procedure provides details about the ROSA CLI ( rosa ) arguments that are used to configure a cluster-wide proxy during installation. For general installation steps using the ROSA CLI, see Creating a cluster with customizations using the CLI . Prerequisites You have verified that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC. Procedure Specify a proxy configuration when you create your cluster: USD rosa create cluster \ <other_arguments_here> \ --additional-trust-bundle-file <path_to_ca_bundle_file> \ 1 2 3 --http-proxy http://<username>:<password>@<ip>:<port> \ 4 5 --https-proxy https://<username>:<password>@<ip>:<port> \ 6 7 --no-proxy example.com 8 1 4 6 The additional-trust-bundle-file , http-proxy , and https-proxy arguments are all optional. 2 The additional-trust-bundle-file argument is a file path pointing to a bundle of PEM-encoded X.509 certificates, which are all concatenated together. The additional-trust-bundle-file argument is required for users who use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments. 3 5 7 The http-proxy and https-proxy arguments must point to a valid URL. 8 A comma-separated list of destination domain names, IP addresses, or network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. Additional resources Creating a cluster with customizations by using OpenShift Cluster Manager Creating a cluster with customizations using the CLI 4.4. Configuring a proxy after installation You can configure an HTTP or HTTPS proxy after you install a Red Hat OpenShift Service on AWS (ROSA) cluster into an existing Virtual Private Cloud (VPC). 
You can configure the proxy after installation by using Red Hat OpenShift Cluster Manager or the ROSA CLI ( rosa ). 4.4.1. Configuring a proxy after installation using OpenShift Cluster Manager You can use Red Hat OpenShift Cluster Manager to add a cluster-wide proxy configuration to an existing Red Hat OpenShift Service on AWS cluster in a Virtual Private Cloud (VPC). You can also use OpenShift Cluster Manager to update an existing cluster-wide proxy configuration. For example, you might need to update the network address for the proxy or replace the additional trust bundle if any of the certificate authorities for the proxy expire. Important The cluster applies the proxy configuration to the control plane and compute nodes. While applying the configuration, each cluster node is temporarily placed in an unschedulable state and drained of its workloads. Each node is restarted as part of the process. Prerequisites You have an Red Hat OpenShift Service on AWS cluster . Your cluster is deployed in a VPC. Procedure Navigate to OpenShift Cluster Manager and select your cluster. Under the Virtual Private Cloud (VPC) section on the Networking page, click Edit cluster-wide proxy . On the Edit cluster-wide proxy page, provide your proxy configuration details: Enter a value in at least one of the following fields: Specify a valid HTTP proxy URL . Specify a valid HTTPS proxy URL . In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle. If you are replacing an existing trust bundle file, select Replace file to view the field. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments. Click Confirm . Verification Under the Virtual Private Cloud (VPC) section on the Networking page, verify that the proxy configuration for your cluster is as expected. 4.4.2. Configuring a proxy after installation using the CLI You can use the Red Hat OpenShift Service on AWS (ROSA) CLI ( rosa ) to add a cluster-wide proxy configuration to an existing ROSA cluster in a Virtual Private Cloud (VPC). You can also use rosa to update an existing cluster-wide proxy configuration. For example, you might need to update the network address for the proxy or replace the additional trust bundle if any of the certificate authorities for the proxy expire. Important The cluster applies the proxy configuration to the control plane and compute nodes. While applying the configuration, each cluster node is temporarily placed in an unschedulable state and drained of its workloads. Each node is restarted as part of the process. Prerequisites You have installed and configured the latest ROSA ( rosa ) and OpenShift ( oc ) CLIs on your installation host. You have a ROSA cluster that is deployed in a VPC. 
Procedure Edit the cluster configuration to add or update the cluster-wide proxy details: USD rosa edit cluster \ --cluster USDCLUSTER_NAME \ --additional-trust-bundle-file <path_to_ca_bundle_file> \ 1 2 3 --http-proxy http://<username>:<password>@<ip>:<port> \ 4 5 --https-proxy https://<username>:<password>@<ip>:<port> \ 6 7 --no-proxy example.com 8 1 4 6 The additional-trust-bundle-file , http-proxy , and https-proxy arguments are all optional. 2 The additional-trust-bundle-file argument is a file path pointing to a bundle of PEM-encoded X.509 certificates, which are all concatenated together. The additional-trust-bundle-file argument is a file path pointing to a bundle of PEM-encoded X.509 certificates, which are all concatenated together. The additional-trust-bundle-file argument is required for users who use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments. Note You should not attempt to change the proxy or additional trust bundle configuration on the cluster directly. These changes must be applied by using the ROSA CLI ( rosa ) or Red Hat OpenShift Cluster Manager. Any changes that are made directly to the cluster will be reverted automatically. 3 5 7 The http-proxy and https-proxy arguments must point to a valid URL. 8 A comma-separated list of destination domain names, IP addresses, or network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. Verification List the status of the machine config pools and verify that they are updated: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-d9a03f612a432095dcde6dcf44597d90 True False False 3 3 3 0 31h worker rendered-worker-f6827a4efe21e155c25c21b43c46f65e True False False 6 6 6 0 31h Display the proxy configuration for your cluster and verify that the details are as expected: USD oc get proxy cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: Proxy spec: httpProxy: http://proxy.host.domain:<port> httpsProxy: https://proxy.host.domain:<port> <...more...> status: httpProxy: http://proxy.host.domain:<port> httpsProxy: https://proxy.host.domain:<port> <...more...> 4.5. Removing a cluster-wide proxy You can remove your cluster-wide proxy by using the ROSA CLI. After removing the cluster, you should also remove any trust bundles that are added to the cluster. 4.5.1. Removing the cluster-wide proxy using CLI You must use the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , to remove the proxy's address from your cluster. Prerequisites You must have cluster administrator privileges. You have installed the ROSA CLI ( rosa ). Procedure Use the rosa edit command to modify the proxy. 
You must pass empty strings to the --http-proxy and --https-proxy arguments to clear the proxy from the cluster: USD rosa edit cluster -c <cluster_name> --http-proxy "" --https-proxy "" Note While your proxy might only use one of the proxy arguments, the empty fields are ignored, so passing empty strings to both the --http-proxy and --https-proxy arguments do not cause any issues. Example Output I: Updated cluster <cluster_name> Verification You can verify that the proxy has been removed from the cluster by using the rosa describe command: USD rosa describe cluster -c <cluster_name> Before removal, the proxy IP displays in a proxy section: Name: <cluster_name> ID: <cluster_internal_id> External ID: <cluster_external_id> OpenShift Version: 4.0 Channel Group: stable DNS: <dns> AWS Account: <aws_account_id> API URL: <api_url> Console URL: <console_url> Region: us-east-1 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Type: OVNKubernetes - Service CIDR: <service_cidr> - Machine CIDR: <machine_cidr> - Pod CIDR: <pod_cidr> - Host Prefix: <host_prefix> Proxy: - HTTPProxy: <proxy_url> Additional trust bundle: REDACTED After removing the proxy, the proxy section is removed: Name: <cluster_name> ID: <cluster_internal_id> External ID: <cluster_external_id> OpenShift Version: 4.0 Channel Group: stable DNS: <dns> AWS Account: <aws_account_id> API URL: <api_url> Console URL: <console_url> Region: us-east-1 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Type: OVNKubernetes - Service CIDR: <service_cidr> - Machine CIDR: <machine_cidr> - Pod CIDR: <pod_cidr> - Host Prefix: <host_prefix> Additional trust bundle: REDACTED 4.5.2. Removing certificate authorities on a Red Hat OpenShift Service on AWS cluster You can remove certificate authorities (CA) from your cluster with the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . Prerequisites You must have cluster administrator privileges. You have installed the ROSA CLI ( rosa ). Your cluster has certificate authorities added. Procedure Use the rosa edit command to modify the CA trust bundle. 
You must pass empty strings to the --additional-trust-bundle-file argument to clear the trust bundle from the cluster: USD rosa edit cluster -c <cluster_name> --additional-trust-bundle-file "" Example Output I: Updated cluster <cluster_name> Verification You can verify that the trust bundle has been removed from the cluster by using the rosa describe command: USD rosa describe cluster -c <cluster_name> Before removal, the Additional trust bundle section appears, redacting its value for security purposes: Name: <cluster_name> ID: <cluster_internal_id> External ID: <cluster_external_id> OpenShift Version: 4.0 Channel Group: stable DNS: <dns> AWS Account: <aws_account_id> API URL: <api_url> Console URL: <console_url> Region: us-east-1 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Type: OVNKubernetes - Service CIDR: <service_cidr> - Machine CIDR: <machine_cidr> - Pod CIDR: <pod_cidr> - Host Prefix: <host_prefix> Proxy: - HTTPProxy: <proxy_url> Additional trust bundle: REDACTED After removing the proxy, the Additional trust bundle section is removed: Name: <cluster_name> ID: <cluster_internal_id> External ID: <cluster_external_id> OpenShift Version: 4.0 Channel Group: stable DNS: <dns> AWS Account: <aws_account_id> API URL: <api_url> Console URL: <console_url> Region: us-east-1 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Type: OVNKubernetes - Service CIDR: <service_cidr> - Machine CIDR: <machine_cidr> - Pod CIDR: <pod_cidr> - Host Prefix: <host_prefix> Proxy: - HTTPProxy: <proxy_url>
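Applying or removing a proxy configuration drains and restarts every node, so the machine config pools can take some time to settle before the verification steps above show the expected state. A minimal sketch that waits for the rollout to finish and then prints the proxy object, assuming your oc session targets the cluster and that a 30 minute ceiling is acceptable:
# Block until every machine config pool reports the Updated condition.
oc wait machineconfigpool --all --for=condition=Updated --timeout=30m
# Confirm the resulting cluster-wide proxy configuration.
oc get proxy cluster -o yaml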
[ "rosa create cluster <other_arguments_here> --additional-trust-bundle-file <path_to_ca_bundle_file> \\ 1 2 3 --http-proxy http://<username>:<password>@<ip>:<port> \\ 4 5 --https-proxy https://<username>:<password>@<ip>:<port> \\ 6 7 --no-proxy example.com 8", "rosa edit cluster --cluster USDCLUSTER_NAME --additional-trust-bundle-file <path_to_ca_bundle_file> \\ 1 2 3 --http-proxy http://<username>:<password>@<ip>:<port> \\ 4 5 --https-proxy https://<username>:<password>@<ip>:<port> \\ 6 7 --no-proxy example.com 8", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-d9a03f612a432095dcde6dcf44597d90 True False False 3 3 3 0 31h worker rendered-worker-f6827a4efe21e155c25c21b43c46f65e True False False 6 6 6 0 31h", "oc get proxy cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: Proxy spec: httpProxy: http://proxy.host.domain:<port> httpsProxy: https://proxy.host.domain:<port> <...more...> status: httpProxy: http://proxy.host.domain:<port> httpsProxy: https://proxy.host.domain:<port> <...more...>", "rosa edit cluster -c <cluster_name> --http-proxy \"\" --https-proxy \"\"", "I: Updated cluster <cluster_name>", "rosa describe cluster -c <cluster_name>", "Name: <cluster_name> ID: <cluster_internal_id> External ID: <cluster_external_id> OpenShift Version: 4.0 Channel Group: stable DNS: <dns> AWS Account: <aws_account_id> API URL: <api_url> Console URL: <console_url> Region: us-east-1 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Type: OVNKubernetes - Service CIDR: <service_cidr> - Machine CIDR: <machine_cidr> - Pod CIDR: <pod_cidr> - Host Prefix: <host_prefix> Proxy: - HTTPProxy: <proxy_url> Additional trust bundle: REDACTED", "Name: <cluster_name> ID: <cluster_internal_id> External ID: <cluster_external_id> OpenShift Version: 4.0 Channel Group: stable DNS: <dns> AWS Account: <aws_account_id> API URL: <api_url> Console URL: <console_url> Region: us-east-1 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Type: OVNKubernetes - Service CIDR: <service_cidr> - Machine CIDR: <machine_cidr> - Pod CIDR: <pod_cidr> - Host Prefix: <host_prefix> Additional trust bundle: REDACTED", "rosa edit cluster -c <cluster_name> --additional-trust-bundle-file \"\"", "I: Updated cluster <cluster_name>", "rosa describe cluster -c <cluster_name>", "Name: <cluster_name> ID: <cluster_internal_id> External ID: <cluster_external_id> OpenShift Version: 4.0 Channel Group: stable DNS: <dns> AWS Account: <aws_account_id> API URL: <api_url> Console URL: <console_url> Region: us-east-1 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Type: OVNKubernetes - Service CIDR: <service_cidr> - Machine CIDR: <machine_cidr> - Pod CIDR: <pod_cidr> - Host Prefix: <host_prefix> Proxy: - HTTPProxy: <proxy_url> Additional trust bundle: REDACTED", "Name: <cluster_name> ID: <cluster_internal_id> External ID: <cluster_external_id> OpenShift Version: 4.0 Channel Group: stable DNS: <dns> AWS Account: <aws_account_id> API URL: <api_url> Console URL: <console_url> Region: us-east-1 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Type: OVNKubernetes - Service CIDR: <service_cidr> - Machine CIDR: <machine_cidr> - Pod CIDR: <pod_cidr> - Host Prefix: <host_prefix> Proxy: - HTTPProxy: <proxy_url>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/networking/configuring-a-cluster-wide-proxy
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/getting_started_with_amq_broker/making-open-source-more-inclusive
Chapter 10. Creating a customized RHEL Guest image by using Insights image builder
Chapter 10. Creating a customized RHEL Guest image by using Insights image builder You can create customized RHEL guest system images by using Insights image builder. You can then download these images to create virtual machines from these guest images according to your requirements. 10.1. Creating a customized RHEL Guest system image by using Insights image builder Complete the following steps to create customized RHEL Guest .qcow2 images by using Insights image builder. Procedure Access Insights image builder in your browser. You are redirected to the Insights image builder dashboard. Click Create image . The Create image wizard opens. On the Image output page, complete the following steps: From the Releases list, select the release of Red Hat Enterprise Linux (RHEL) that you want to use to create the image. From the Select target environments options, select Virtualization - Guest image . Click . On the Registration page, select the type of registration that you want to use. You can select from these options: Register images with Red Hat : Register and connect image instances, subscriptions and insights with Red Hat. For details on how to embed an activation key and register systems on first boot, see Creating a customized system image with an embed subscription by using Insights image builder . Register image instances only : Register and connect only image instances and subscriptions with Red Hat. Register later : Register the system after the image creation. Click . Optional: On the Packages page, add packages to your image. See Adding packages during image creation by using Insights image builder . On the Name image page, enter a name for your image and click . If you do not enter a name, you can find the image you created by its UUID. On the Review page, review the details about the image creation and click Create image . After you complete the steps in the Create image wizard, the Image Builder dashboard is displayed. When the new image displays a Ready status in the Status column, click Download .qcow2 image in the Instance column. The .qcow2 image is saved to your system and is ready for deployment. Note The .qcow2 images are available for 6 hours and expire after that. Ensure that you download the image to avoid losing it. 10.2. Creating a virtual machine from the customized RHEL Guest system image You can create a virtual machine (VM) from the QCOW2 image that you created by using Insights image builder. Prerequisites You created and downloaded a QCOW2 image by using Insights image builder. Procedure Access the directory where you downloaded your QCOW2 image. Create a file named meta-data . Add the following information to this file: Create a file named user-data . Add the following information to the file: ssh_authorized_keys is your SSH public key. You can find your SSH public key in ~/.ssh/id_rsa.pub . Use the genisoimage command to create an ISO image that includes the user-data and meta-data files. Create a new VM from the KVM Guest Image using the virt-install command. Include the ISO image you created on step 4 as an attachment to the VM image. Where, --graphics none - indicates that it is a headless RHEL Virtual Machine. --vcpus 4 - indicates that it uses 4 virtual CPUs. --memory 4096 - indicates that it uses 4096 MB RAM. The VM installation starts: Additional resources Creating virtual machines using the command-line interface
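Before and after booting the VM, it can help to inspect the image and confirm that the cloud-init credentials work. A minimal sketch, reusing the composer-api.qcow2 file name, the myvm domain name, and the admin user from the example above; the inspection step assumes the qemu-img utility is installed, and the IP address is a placeholder to replace with the address that virsh reports:
# Inspect the downloaded guest image before deployment.
qemu-img info composer-api.qcow2
# Find the IP address that libvirt assigned to the running VM.
virsh domifaddr myvm
# Log in as the cloud-init user defined in user-data, replacing the IP with the reported address.
ssh admin@192.168.122.100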
[ "instance-id: nocloud local-hostname: vmname", "#cloud-config user: admin password: password chpasswd: {expire: False} ssh_pwauth: True ssh_authorized_keys: - ssh-rsa AAA...fhHQ== [email protected]", "genisoimage -output cloud-init.iso -volid cidata -joliet -rock user-data meta-data I: -input-charset not specified, using utf-8 (detected in locale settings) Total translation table size: 0 Total rockridge attributes bytes: 331 Total directory bytes: 0 Path table size(bytes): 10 Max brk space used 0 183 extents written (0 MB)", "virt-install --memory 4096 --vcpus 4 --name myvm --disk composer-api.qcow2,device=disk,bus=virtio,format=qcow2 --disk cloud-init.iso,device=cdrom --os-variant rhel8 --virt-type kvm --graphics none --import", "Starting install Connected to domain myvm [ OK ] Started Execute cloud user/final scripts. [ OK ] Reached target Cloud-init target. Red Hat Enterprise Linux 8 (Ootpa) Kernel 4.18.0-221.el8.x86_64 on an x86_64" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/creating_customized_images_by_using_insights_image_builder/assembly_creating-a-customized-rhel-guest-image-using-red-hat-image-builder
Chapter 3. Creating and executing DMN and BPMN models using Maven
Chapter 3. Creating and executing DMN and BPMN models using Maven You can use Maven archetypes to develop DMN and BPMN models in VS Code using the Red Hat Process Automation Manager VS Code extension instead of Business Central. You can then integrate your archetypes with your Red Hat Process Automation Manager decision and process services in Business Central as needed. This method of developing DMN and BPMN models is helpful for building new business applications using the Red Hat Process Automation Manager VS Code extension. Procedure In a command terminal, navigate to a local folder where you want to store the new Red Hat Process Automation Manager project. Enter the following command to use a Maven archtype to generate a project within a defined folder: Generating a project using Maven archetype This command generates a Maven project with required dependencies and generates required directories and files to build your business application. You can use the Git version control system (recommended) when developing a project. If you want to generate multiple projects in the same directory, specify the artifactId and groupId of the generated business application by adding -DgroupId=<groupid> -DartifactId=<artifactId> to the command. In your VS Code IDE, click File , select Open Folder , and navigate to the folder that is generated using the command. Before creating the first asset, set a package for your business application, for example, org.kie.businessapp , and create respective directories in the following paths: PROJECT_HOME/src/main/java PROJECT_HOME/src/main/resources PROJECT_HOME/src/test/resources For example, you can create PROJECT_HOME/src/main/java/org/kie/businessapp for org.kie.businessapp package. Use VS Code to create assets for your business application. You can create the assets supported by Red Hat Process Automation Manager VS Code extension using the following ways: To create a business process, create a new file with .bpmn or .bpmn2 in PROJECT_HOME/src/main/resources/org/kie/businessapp directory, such as Process.bpmn . To create a DMN model, create a new file with .dmn in PROJECT_HOME/src/main/resources/org/kie/businessapp directory, such as AgeDecision.dmn . To create a test scenario simulation model, create a new file with .scesim in PROJECT_HOME/src/test/resources/org/kie/businessapp directory, such as TestAgeScenario.scesim . After you create the assets in your Maven archetype, navigate to the root directory (contains pom.xml ) of the project in the command line and run the following command to build the knowledge JAR (KJAR) of your project: If the build fails, address any problems described in the command line error messages and try again to validate the project until the build is successful. However, if the build is successful, you can find the artifact of your business application in PROJECT_HOME/target directory. Note Use mvn clean install command often to validate your project after each major change during development. You can deploy the generated knowledge JAR (KJAR) of your business application on a running KIE Server using the REST API. For more information about using REST API, see Interacting with Red Hat Process Automation Manager using KIE APIs .
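To make the last step concrete, the KIE Server REST API can create a container directly from the KJAR's Maven coordinates. This is only a sketch: the host, port, credentials, and the group, artifact, and version values are placeholders to replace with your own, and the KIE Server must already be able to resolve the KJAR from its Maven repository:
# Create a KIE container from the built KJAR on a running KIE Server.
curl -u kieserver:kieserver1! -X PUT \
  -H "Content-Type: application/json" \
  -d '{"container-id":"businessapp_1.0.0","release-id":{"group-id":"org.kie.businessapp","artifact-id":"businessapp","version":"1.0.0"}}' \
  http://localhost:8080/kie-server/services/rest/server/containers/businessapp_1.0.0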
[ "mvn archetype:generate -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-kjar-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024", "mvn clean install" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/proc-dmn-bpmn-maven-create_getting-started-decision-services
Chapter 3. Post-deployment configuration
Chapter 3. Post-deployment configuration You must complete two post-deployment configuration tasks before you create NFS shares, grant user access, and mount NFS shares. Map the Networking service (neutron) StorageNFS network to the isolated data center Storage NFS network. You can omit this option if you do not want to isolate NFS traffic to a separate network. For more information, see Generating the custom roles file . Create the default share type. After you complete these steps, the tenant compute instances can create, allow access to, and mount NFS shares. 3.1. Creating the storage provider network You must map the new isolated StorageNFS network to a Networking (neutron) provider network. The Compute VMs attach to the network to access share export locations that are provided by the NFS-Ganesha gateway. For information about network security with the Shared File Systems service, see Hardening the Shared File Systems Service in the Security and Hardening Guide . Procedure The openstack network create command defines the configuration for the StorageNFS neutron network. From an undercloud node, enter the following command: On an undercloud node, create the StorageNFS network: You can enter this command with the following options: For the --provider-physical-network option, use the default value datacentre , unless you set another tag for the br-isolated bridge through NeutronBridgeMappings in your tripleo-heat-templates. For the --provider-segment option, use the VLAN value set for the StorageNFS isolated network in the heat template, /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml . This value is 70, unless the deployer modified the isolated network definitions. For the --provider-network-type option, use the value vlan . 3.2. Configure the shared provider StorageNFS network Create a corresponding StorageNFSSubnet on the neutron-shared provider network. Ensure that the subnet is the same as the storage_nfs network definition in the network_data.yml file and ensure that the allocation range for the StorageNFS subnet and the corresponding undercloud subnet do not overlap. No gateway is required because the StorageNFS subnet is dedicated to serving NFS shares. Prerequisites The start and ending IP range for the allocation pool. The subnet IP range. 3.2.1. Configuring the shared provider StorageNFS IPv4 network Create a corresponding StorageNFSSubnet on the neutron-shared IPv4 provider network. Procedure Log in to an overcloud node. Source your overcloud credentials. Use the example command to provision the network and make the following updates: Replace the start=172.17.0.4,end=172.17.0.250 IP values with the IP values for your network. Replace the 172.17.0.0/20 subnet range with the subnet range for your network. 3.2.2. Configuring the shared provider StorageNFS IPv6 network Create a corresponding StorageNFSSubnet on the neutron-shared IPv6 provider network. Procedure Log in to an overcloud node. Use the sample command to provision the network, updating values as needed. Replace the fd00:fd00:fd00:7000::/64 subnet range with the subnet range for your network. 3.3. Configuring a default share type You can use the Shared File Systems service (manila) to define share types that you can use to create shares with specific settings. Share types work like Block Storage volume types. Each type has associated settings, for example, extra specifications. When you invoke the type during share creation the settings apply to the shared file system. 
With Red Hat OpenStack Platform (RHOSP) director, you must create a default share type before you open the cloud for user access. For CephFS with NFS, use the manila type-create command: For more information about share types, see Creating a share type in the Storage Guide .
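With the StorageNFS provider network and the default share type in place, a typical tenant workflow is to create a share, allow a client IP address, and mount the export from a Compute instance. A hedged sketch with the manila client, where the share name, size, and client IP are illustrative placeholders and the export location comes from the listing command:
# Create a 1 GB NFS share that uses the default share type.
manila create nfs 1 --name myshare --share-type default
# Allow the instance IP on the StorageNFS network to access the share.
manila access-allow myshare ip 172.17.0.25
# Look up the export location, then mount it from inside the instance.
manila share-export-location-list myshare
mount -t nfs <export_location> /mnt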
[ "[stack@undercloud ~]USD source ~/overcloudrc", "(overcloud) [stack@undercloud-0 ~]USD openstack network create StorageNFS --share --provider-network-type vlan --provider-physical-network datacentre --provider-segment 70", "[stack@undercloud-0 ~]USD openstack subnet create --allocation-pool start=172.17.0.4,end=172.17.0.250 --dhcp --network StorageNFS --subnet-range 172.17.0.0/20 --gateway none StorageNFSSubnet", "[stack@undercloud-0 ~]USD openstack subnet create --ip-version 6 --dhcp --network StorageNFS --subnet-range fd00:fd00:fd00:7000::/64 --gateway none --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful StorageNFSSubnet -f yaml", "manila type-create default false" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_the_shared_file_systems_service_with_cephfs_through_nfs/assembly_cephfs-post-deployment-configuration
Chapter 11. Configuring alert notifications
Chapter 11. Configuring alert notifications In OpenShift Container Platform, an alert is fired when the conditions defined in an alerting rule are true. An alert provides a notification that a set of circumstances are apparent within a cluster. Firing alerts can be viewed in the Alerting UI in the OpenShift Container Platform web console by default. After an installation, you can configure OpenShift Container Platform to send alert notifications to external systems. 11.1. Sending notifications to external systems In OpenShift Container Platform 4.11, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types: PagerDuty Webhook Email Slack Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider. 11.1.1. Configuring alert receivers You can configure alert receivers to ensure that you learn about important issues with your cluster. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. Procedure In the Administrator perspective, navigate to Administration Cluster Settings Configuration Alertmanager . Note Alternatively, you can navigate to the same page through the notification drawer. Select the bell icon at the top right of the OpenShift Container Platform web console and choose Configure in the AlertmanagerReceiverNotConfigured alert. Select Create Receiver in the Receivers section of the page. In the Create Receiver form, add a Receiver Name and choose a Receiver Type from the list. Edit the receiver configuration: For PagerDuty receivers: Choose an integration type and add a PagerDuty integration key. Add the URL of your PagerDuty installation. Select Show advanced configuration if you want to edit the client and incident details or the severity specification. For webhook receivers: Add the endpoint to send HTTP POST requests to. Select Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver. For email receivers: Add the email address to send notifications to. Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details. Choose whether TLS is required. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration. For Slack receivers: Add the URL of the Slack webhook. Add the Slack channel or user name to send notifications to. 
Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames. By default, firing alerts with labels that match all of the selectors will be sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver: Add routing label names and values in the Routing Labels section of the form. Select Regular Expression if you want to use a regular expression. Select Add Label to add further routing labels. Select Create to create the receiver. 11.2. Additional resources Monitoring overview Managing alerts
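The receiver form writes its result into the main Alertmanager configuration. If you want to confirm from the command line what the console generated, one way — assuming the default platform Alertmanager, whose configuration is stored in the alertmanager-main secret in the openshift-monitoring namespace — is to print the stored configuration:
# Print the current Alertmanager configuration, including receivers created in the console.
oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode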
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/post-installation_configuration/configuring-alert-notifications
Chapter 5. Configuring VLAN tagging
Chapter 5. Configuring VLAN tagging A Virtual Local Area Network (VLAN) is a logical network within a physical network. The VLAN interface tags packets with the VLAN ID as they pass through the interface, and removes tags of returning packets. You create VLAN interfaces on top of another interface, such as Ethernet, bond, team, or bridge devices. These interfaces are called the parent interface . Red Hat Enterprise Linux provides administrators different options to configure VLAN devices. For example: Use nmcli to configure VLAN tagging using the command line. Use the RHEL web console to configure VLAN tagging using a web browser. Use nmtui to configure VLAN tagging in a text-based user interface. Use the nm-connection-editor application to configure connections in a graphical interface. Use nmstatectl to configure connections through the Nmstate API. Use RHEL system roles to automate the VLAN configuration on one or multiple hosts. 5.1. Configuring VLAN tagging by using nmcli You can configure Virtual Local Area Network (VLAN) tagging on the command line using the nmcli utility. Prerequisites The interface you plan to use as a parent to the virtual VLAN interface supports VLAN tags. If you configure the VLAN on top of a bond interface: The ports of the bond are up. The bond is not configured with the fail_over_mac=follow option. A VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, the traffic would still be sent with the incorrect source MAC address. The bond is usually not expected to get IP addresses from a DHCP server or IPv6 auto-configuration. Ensure it by setting the ipv4.method=disable and ipv6.method=ignore options while creating the bond. Otherwise, if DHCP or IPv6 auto-configuration fails after some time, the interface might be brought down. The switch, the host is connected to, is configured to support VLAN tags. For details, see the documentation of your switch. Procedure Display the network interfaces: Create the VLAN interface. For example, to create a VLAN interface named vlan10 that uses enp1s0 as its parent interface and that tags packets with VLAN ID 10 , enter: Note that the VLAN must be within the range from 0 to 4094 . By default, the VLAN connection inherits the maximum transmission unit (MTU) from the parent interface. Optionally, set a different MTU value: Configure the IPv4 settings: If you plan to use this VLAN device as a port of other devices, enter: To use DHCP, no action is required. To set a static IPv4 address, network mask, default gateway, and DNS server to the vlan10 connection, enter: Configure the IPv6 settings: If you plan to use this VLAN device as a port of other devices, enter: To use stateless address autoconfiguration (SLAAC), no action is required. To set a static IPv6 address, network mask, default gateway, and DNS server to the vlan10 connection, enter: Activate the connection: Verification Verify the settings: Additional resources nm-settings(5) man page on your system 5.2. Configuring nested VLANs by using nmcli 802.1ad is a protocol used for Virtual Local Area Network (VLAN) tagging. It is also known as Q-in-Q tagging. You can use this technology to create multiple VLAN tags within a single Ethernet frame to achieve the following benefits: Increased network scalability by creating multiple isolated network segments within a VLAN. This enables you to segment and organize large networks into smaller, manageable units. 
Improved traffic management by isolating and controlling different types of network traffic. This can improve the network performance and reduce network congestion. Efficient resource utilization by enabling the creation of smaller, more targeted network segments. Enhanced security by isolating network traffic and reducing the risk of unauthorized access to sensitive data. Prerequisites The interface you plan to use as a parent to the virtual VLAN interface supports VLAN tags. If you configure the VLAN on top of a bond interface: The ports of the bond are up. The bond is not configured with the fail_over_mac=follow option. A VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, the traffic would still be sent with the incorrect source MAC address. The bond is usually not expected to get IP addresses from a DHCP server or IPv6 auto-configuration. Ensure it by setting the ipv4.method=disabled and ipv6.method=ignore options while creating the bond. Otherwise, if DHCP or IPv6 auto-configuration fails after some time, the interface might be brought down. The switch that the host is connected to is configured to support VLAN tags. For details, see the documentation of your switch. Procedure Display the physical network devices: Create the base VLAN interface. For example, to create a base VLAN interface named vlan10 that uses enp1s0 as its parent interface and that tags packets with VLAN ID 10 , enter: Note that the VLAN ID must be within the range from 0 to 4094 . By default, the VLAN connection inherits the maximum transmission unit (MTU) from the parent interface. Optionally, set a different MTU value: Create the nested VLAN interface on top of the base VLAN interface: This command creates a new VLAN connection with a name of vlan10.20 and a VLAN ID of 20 on the parent VLAN connection vlan10 . The dev option specifies the parent network device. In this case it is enp1s0.10 . The vlan.protocol option specifies the VLAN encapsulation protocol. In this case it is 802.1ad (Q-in-Q). Configure the IPv4 settings of the nested VLAN interface: To use DHCP, no action is required. To set a static IPv4 address, network mask, default gateway, and DNS server to the vlan10.20 connection, enter: Configure the IPv6 settings of the nested VLAN interface: To use stateless address autoconfiguration (SLAAC), no action is required. To set a static IPv6 address, network mask, default gateway, and DNS server to the vlan10.20 connection, enter: Activate the profile: Verification Verify the configuration of the nested VLAN interface: Additional resources nm-settings(5) man page on your system 5.3. Configuring VLAN tagging by using the RHEL web console You can configure VLAN tagging if you prefer to manage network settings using a web browser-based interface in the RHEL web console. Prerequisites The interface you plan to use as a parent to the virtual VLAN interface supports VLAN tags. If you configure the VLAN on top of a bond interface: The ports of the bond are up. The bond is not configured with the fail_over_mac=follow option. A VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, the traffic would still be sent with the incorrect source MAC address. The bond is usually not expected to get IP addresses from a DHCP server or IPv6 auto-configuration. Ensure it by disabling the IPv4 and IPv6 protocols when creating the bond. Otherwise, if DHCP or IPv6 auto-configuration fails after some time, the interface might be brought down. 
The switch that the host is connected to is configured to support VLAN tags. For details, see the documentation of your switch. You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Select the Networking tab in the navigation on the left side of the screen. Click Add VLAN in the Interfaces section. Select the parent device. Enter the VLAN ID. Enter the name of the VLAN device or keep the automatically-generated name. Click Apply . By default, the VLAN device uses a dynamic IP address. If you want to set a static IP address: Click the name of the VLAN device in the Interfaces section. Click Edit next to the protocol you want to configure. Select Manual next to Addresses , and enter the IP address, prefix, and default gateway. In the DNS section, click the + button, and enter the IP address of the DNS server. Repeat this step to set multiple DNS servers. In the DNS search domains section, click the + button, and enter the search domain. If the interface requires static routes, configure them in the Routes section. Click Apply . Verification Select the Networking tab in the navigation on the left side of the screen, and check if there is incoming and outgoing traffic on the interface: 5.4. Configuring VLAN tagging by using nmtui The nmtui application provides a text-based user interface for NetworkManager. You can use nmtui to configure VLAN tagging on a host without a graphical interface. Note In nmtui : Navigate by using the cursor keys. Press a button by selecting it and hitting Enter . Select and clear checkboxes by using Space . To return to the previous screen, use ESC . Prerequisites The interface you plan to use as a parent to the virtual VLAN interface supports VLAN tags. If you configure the VLAN on top of a bond interface: The ports of the bond are up. The bond is not configured with the fail_over_mac=follow option. A VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, the traffic would still be sent with the incorrect source MAC address. The bond is usually not expected to get IP addresses from a DHCP server or IPv6 auto-configuration. Ensure it by setting the ipv4.method=disabled and ipv6.method=ignore options while creating the bond. Otherwise, if DHCP or IPv6 auto-configuration fails after some time, the interface might be brought down. The switch that the host is connected to is configured to support VLAN tags. For details, see the documentation of your switch. Procedure If you do not know the network device name on which you want to configure VLAN tagging, display the available devices: Start nmtui : Select Edit a connection , and press Enter . Press Add . Select VLAN from the list of network types, and press Enter . Optional: Enter a name for the NetworkManager profile to be created. On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. Enter the VLAN device name to be created into the Device field. Enter the name of the device on which you want to configure VLAN tagging into the Parent field. Enter the VLAN ID. The ID must be within the range from 0 to 4094 . Depending on your environment, configure the IP address settings in the IPv4 configuration and IPv6 configuration areas accordingly. 
For this, press the button next to these areas, and select: Disabled , if this VLAN device does not require an IP address or you want to use it as a port of other devices. Automatic , if a DHCP server or stateless address autoconfiguration (SLAAC) dynamically assigns an IP address to the VLAN device. Manual , if the network requires static IP address settings. In this case, you must fill further fields: Press Show next to the protocol you want to configure to display additional fields. Press Add next to Addresses , and enter the IP address and the subnet mask in Classless Inter-Domain Routing (CIDR) format. If you do not specify a subnet mask, NetworkManager sets a /32 subnet mask for IPv4 addresses and /64 for IPv6 addresses. Enter the address of the default gateway. Press Add next to DNS servers , and enter the DNS server address. Press Add next to Search domains , and enter the DNS search domain. Figure 5.1. Example of a VLAN connection with static IP address settings Press OK to create and automatically activate the new connection. Press Back to return to the main menu. Select Quit , and press Enter to close the nmtui application. Verification Verify the settings: 5.5. Configuring VLAN tagging by using nm-connection-editor You can configure Virtual Local Area Network (VLAN) tagging in a graphical interface using the nm-connection-editor application. Prerequisites The interface you plan to use as a parent to the virtual VLAN interface supports VLAN tags. If you configure the VLAN on top of a bond interface: The ports of the bond are up. The bond is not configured with the fail_over_mac=follow option. A VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, the traffic would still be sent with the incorrect source MAC address. The switch that the host is connected to is configured to support VLAN tags. For details, see the documentation of your switch. Procedure Open a terminal, and enter nm-connection-editor : Click the + button to add a new connection. Select the VLAN connection type, and click Create . On the VLAN tab: Select the parent interface. Select the VLAN ID. Note that the VLAN ID must be within the range from 0 to 4094 . By default, the VLAN connection inherits the maximum transmission unit (MTU) from the parent interface. Optionally, set a different MTU value. Optional: Set the name of the VLAN interface and further VLAN-specific options. Configure the IP address settings on both the IPv4 Settings and IPv6 Settings tabs: If you plan to use this VLAN device as a port of other devices, set the Method field to Disabled . To use DHCP, leave the Method field at its default, Automatic (DHCP) . To use static IP settings, set the Method field to Manual and fill the fields accordingly: Click Save . Close nm-connection-editor . Verification Verify the settings: Additional resources Configuring NetworkManager to avoid using a specific profile to provide a default gateway 5.6. Configuring VLAN tagging by using nmstatectl Use the nmstatectl utility to configure a Virtual Local Area Network (VLAN) through the Nmstate API. The Nmstate API ensures that, after setting the configuration, the result matches the configuration file. If anything fails, nmstatectl automatically rolls back the changes to avoid leaving the system in an incorrect state. Depending on your environment, adjust the YAML file accordingly. For example, to use devices other than Ethernet adapters in the VLAN, adapt the base-iface attribute and type attributes of the ports you use in the VLAN. 
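For instance, a minimal sketch of a VLAN defined on top of a bond instead of an Ethernet adapter follows. This is a hedged illustration only: the device name bond0, the interface name bond0.10, and the file name ~/create-vlan-over-bond.yml are assumptions, and the bond itself must already exist or be declared in the same file.
# Hedged sketch: VLAN ID 10 on top of an existing bond named bond0 (hypothetical).
# Compared with the Ethernet example in the procedure below, only the
# parent-related attributes (the interface name and base-iface) change.
cat > ~/create-vlan-over-bond.yml << 'EOF'
---
interfaces:
- name: bond0.10
  type: vlan
  state: up
  ipv4:
    enabled: true
    dhcp: false
    address:
    - ip: 192.0.2.1
      prefix-length: 24
  vlan:
    base-iface: bond0
    id: 10
EOF
nmstatectl apply ~/create-vlan-over-bond.yml
The same pattern applies to team or bridge parents: point base-iface at the parent device name.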
Prerequisites To use Ethernet devices as ports in the VLAN, the physical or virtual Ethernet devices must be installed on the server. The nmstate package is installed. Procedure Create a YAML file, for example ~/create-vlan.yml , with the following content: --- interfaces: - name: vlan10 type: vlan state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false vlan: base-iface: enp1s0 id: 10 - name: enp1s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: vlan10 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: vlan10 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb These settings define a VLAN with ID 10 that uses the enp1s0 device. As the child device, the VLAN connection has the following settings: A static IPv4 address - 192.0.2.1 with the /24 subnet mask A static IPv6 address - 2001:db8:1::1 with the /64 subnet mask An IPv4 default gateway - 192.0.2.254 An IPv6 default gateway - 2001:db8:1::fffe An IPv4 DNS server - 192.0.2.200 An IPv6 DNS server - 2001:db8:1::ffbb A DNS search domain - example.com Apply the settings to the system: Verification Display the status of the devices and connections: Display all settings of the connection profile: Display the connection settings in YAML format: Additional resources nmstatectl(8) man page on your system /usr/share/doc/nmstate/examples/ directory 5.7. Configuring VLAN tagging by using the network RHEL system role If your network uses Virtual Local Area Networks (VLANs) to separate network traffic into logical networks, create a NetworkManager connection profile to configure VLAN tagging. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure VLAN tagging and, if a connection profile for the VLAN's parent device does not exist, the role can create it as well. Note If the VLAN device requires an IP address, default gateway, and DNS settings, configure them on the VLAN device and not on the parent device. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: VLAN connection profile with Ethernet port ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Ethernet profile - name: enp1s0 type: ethernet interface_name: enp1s0 autoconnect: yes state: up ip: dhcp4: no auto6: no # VLAN profile - name: enp1s0.10 type: vlan vlan: id: 10 ip: dhcp4: yes auto6: yes parent: enp1s0 state: up The settings specified in the example playbook include the following: type: <profile_type> Sets the type of the profile to create. The example playbook creates two connection profiles: One for the parent Ethernet device and one for the VLAN device. dhcp4: <value> If set to yes , automatic IPv4 address assignment from DHCP, PPP, or similar services is enabled. Disable the IP address configuration on the parent device. auto6: <value> If set to yes , IPv6 auto-configuration is enabled. 
In this case, by default, NetworkManager uses Router Advertisements and, if the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. Disable the IP address configuration on the parent device. parent: <parent_device> Sets the parent device of the VLAN connection profile. In the example, the parent is the Ethernet interface. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify the VLAN settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory
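The note earlier in this section recommends configuring IP settings on the VLAN device rather than on the parent. As a hedged sketch only, the variant below shows static IPv4 settings on the VLAN profile; the file name ~/playbook-static.yml and the address values are illustrative assumptions, and the exact variable names should be checked against the /usr/share/ansible/roles/rhel-system-roles.network/README.md file referenced above.
# Hedged sketch: same role as in the procedure, but with a static IPv4 address,
# gateway, and DNS server on the VLAN profile and no IP configuration on the parent.
cat > ~/playbook-static.yml << 'EOF'
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: VLAN connection profile with a static IPv4 address
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # Parent Ethernet profile: no IP configuration
          - name: enp1s0
            type: ethernet
            interface_name: enp1s0
            ip:
              dhcp4: no
              auto6: no
            state: up
          # VLAN profile: the IP settings live here, not on the parent
          - name: enp1s0.10
            type: vlan
            parent: enp1s0
            vlan:
              id: 10
            ip:
              dhcp4: no
              auto6: no
              address:
                - 192.0.2.1/24
              gateway4: 192.0.2.254
              dns:
                - 192.0.2.200
            state: up
EOF
ansible-playbook --syntax-check ~/playbook-static.yml
ansible-playbook ~/playbook-static.yml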
[ "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet disconnected enp1s0 bridge0 bridge connected bridge0 bond0 bond connected bond0", "nmcli connection add type vlan con-name vlan10 ifname vlan10 vlan.parent enp1s0 vlan.id 10", "nmcli connection modify vlan10 ethernet.mtu 2000", "nmcli connection modify vlan10 ipv4.method disabled", "nmcli connection modify vlan10 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.method manual", "nmcli connection modify vlan10 ipv6.method disabled", "nmcli connection modify vlan10 ipv6.addresses '2001:db8:1::1/32' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd' ipv6.method manual", "nmcli connection up vlan10", "ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet connected enp1s0", "nmcli connection add type vlan con-name vlan10 dev enp1s0 vlan.id 10", "nmcli connection modify vlan10 ethernet.mtu 2000", "nmcli connection add type vlan con-name vlan10.20 dev enp1s0.10 id 20 vlan.protocol 802.1ad", "nmcli connection modify vlan10.20 ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200", "nmcli connection modify vlan10 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.method manual", "nmcli connection up vlan10.20", "ip -d addr show enp1s0.10.20 10: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:d2:74:3e brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 0 maxmtu 65535 vlan protocol 802.1ad id 20 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0.10.20 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::ce3b:84c5:9ef8:d0e6/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unavailable --", "nmtui", "ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nm-connection-editor", "ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:d5:e0:fb brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 
802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever", "--- interfaces: - name: vlan10 type: vlan state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false vlan: base-iface: enp1s0 id: 10 - name: enp1s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: vlan10 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: vlan10 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", "nmstatectl apply ~/create-vlan.yml", "nmcli device status DEVICE TYPE STATE CONNECTION vlan10 vlan connected vlan10", "nmcli connection show vlan10 connection.id: vlan10 connection.uuid: 1722970f-788e-4f81-bd7d-a86bf21c9df5 connection.stable-id: -- connection.type: vlan connection.interface-name: vlan10", "nmstatectl show vlan10", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: VLAN connection profile with Ethernet port ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Ethernet profile - name: enp1s0 type: ethernet interface_name: enp1s0 autoconnect: yes state: up ip: dhcp4: no auto6: no # VLAN profile - name: enp1s0.10 type: vlan vlan: id: 10 ip: dhcp4: yes auto6: yes parent: enp1s0 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ip -d addr show enp1s0.10' managed-node-01.example.com | CHANGED | rc=0 >> 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535" ]
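The prerequisites earlier in this chapter mention creating the parent bond with IP configuration disabled so that only the VLAN carries addresses. A minimal, hedged sketch of such a bond follows; the names bond0, bond0-port1, bond0-port2, enp7s0, and enp8s0 and the bond options are placeholders for illustration, not values from the procedures above.
# Hypothetical port devices; replace enp7s0 and enp8s0 with your own NICs.
nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,miimon=100" \
    ipv4.method disabled ipv6.method ignore
nmcli connection add type ethernet con-name bond0-port1 ifname enp7s0 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname enp8s0 master bond0
nmcli connection up bond0
# The VLAN is then created on top of the bond, as in the nmcli procedure above:
nmcli connection add type vlan con-name vlan10 ifname vlan10 vlan.parent bond0 vlan.id 10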
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/configuring-vlan-tagging_configuring-and-managing-networking
Chapter 6. Managing user passwords in IdM
Chapter 6. Managing user passwords in IdM 6.1. Who can change IdM user passwords and how Regular users without the permission to change other users' passwords can change only their own personal password. The new password must meet the IdM password policies applicable to the groups of which the user is a member. For details on configuring password policies, see Defining IdM password policies . Administrators and users with password change rights can set initial passwords for new users and reset passwords for existing users. These passwords: Do not have to meet the IdM password policies. Expire after the first successful login. When this happens, IdM prompts the user to change the expired password immediately. To disable this behavior, see Enabling password reset in IdM without prompting the user for a password change at the login . Note The LDAP Directory Manager (DM) user can change user passwords using LDAP tools. The new password can override any IdM password policies. Passwords set by DM do not expire after the first login. 6.2. Changing your user password in the IdM Web UI As an Identity Management (IdM) user, you can change your user password in the IdM Web UI. Prerequisites You are logged in to the IdM Web UI. Procedure In the upper right corner, click User name Change password . Figure 6.1. Resetting Password Enter the current and new passwords. 6.3. Resetting another user's password in the IdM Web UI As an administrative user of Identity Management (IdM), you can change passwords for other users in the IdM Web UI. Prerequisites You are logged in to the IdM Web UI as an administrative user. Procedure Select Identity Users . Click the name of the user to edit. Click Actions Reset password . Figure 6.2. Resetting Password Enter the new password, and click Reset Password . Figure 6.3. Confirming New Password 6.4. Resetting the Directory Manager user password If you lose the Identity Management (IdM) Directory Manager password, you can reset it. Prerequisites You have root access to an IdM server. Procedure Generate a new password hash by using the pwdhash command. For example: By specifying the path to the Directory Server configuration, you automatically use the password storage scheme set in the nsslapd-rootpwstoragescheme attribute to encrypt the new password. On every IdM server in your topology, execute the following steps: Stop all IdM services installed on the server: Edit the /etc/dirsrv/IDM-EXAMPLE-COM/dse.ldif file and set the nsslapd-rootpw attribute to the value generated by the pwdhash command: Start all IdM services installed on the server: 6.5. Changing your user password or resetting another user's password in IdM CLI You can change your user password using the Identity Management (IdM) command-line interface (CLI). If you are an administrative user, you can use the CLI to reset another user's password. Prerequisites You have obtained a ticket-granting ticket (TGT) for an IdM user. If you are resetting another user's password, you must have obtained a TGT for an administrative user in IdM. Procedure Enter the ipa user-mod command with the name of the user and the --password option. The command will prompt you for the new password. Note You can also use the ipa passwd idm_user command instead of ipa user-mod . 6.6. Enabling password reset in IdM without prompting the user for a password change at the login By default, when an administrator resets another user's password, the password expires after the first successful login. 
As IdM Directory Manager, you can specify the following privileges for individual IdM administrators: They can perform password change operations without requiring users to change their passwords at their first subsequent login. They can bypass the password policy so that no strength or history enforcement is applied. Warning Bypassing the password policy can be a security threat. Exercise caution when selecting users to whom you grant these additional privileges. Prerequisites You know the Directory Manager password. Procedure On every Identity Management (IdM) server in the domain, make the following changes: Enter the ldapmodify command to modify LDAP entries. Specify the name of the IdM server and port 389, and press Enter: Enter the Directory Manager password. Enter the distinguished name for the ipa_pwd_extop password synchronization entry and press Enter: Specify the modify type of change and press Enter: Specify what type of modification you want LDAP to execute and to which attribute. Press Enter: Specify the administrative user accounts in the passSyncManagersDNs attribute. The attribute is multi-valued. For example, to grant the admin user the password resetting powers of Directory Manager: Press Enter twice to stop editing the entry. The whole procedure looks as follows: The admin user, listed under passSyncManagersDNs , now has the additional privileges. 6.7. Checking if an IdM user's account is locked As an Identity Management (IdM) administrator, you can check if an IdM user's account is locked. For that, you must compare a user's maximum allowed number of failed login attempts with the number of the user's actual failed logins. Prerequisites You have obtained the ticket-granting ticket (TGT) of an administrative user in IdM. Procedure Display the status of the user account to see the number of failed logins: Display the number of allowed login attempts for a particular user: Log in to the IdM Web UI as IdM administrator. Open the Identity Users Active users tab. Click the user name to open the user settings. In the Password policy section, locate the Max failures item. Compare the number of failed logins as displayed in the output of the ipa user-status command with the Max failures number displayed in the IdM Web UI. If the number of failed logins equals the maximum number of allowed login attempts, the user account is locked. Additional resources Unlocking user accounts after password failures in IdM 6.8. Unlocking user accounts after password failures in IdM If a user attempts to log in using an incorrect password a certain number of times, Identity Management (IdM) locks the user account, which prevents the user from logging in. For security reasons, IdM does not display any warning message that the user account has been locked. Instead, the CLI prompt might continue asking the user for a password again and again. IdM automatically unlocks the user account after a specified amount of time has passed. Alternatively, you can unlock the user account manually with the following procedure. Prerequisites You have obtained the ticket-granting ticket of an IdM administrative user. Procedure To unlock a user account, use the ipa user-unlock command. After this, the user can log in again. Additional resources Checking if an IdM user's account is locked 6.9. 
Enabling the tracking of last successful Kerberos authentication for users in IdM For performance reasons, Identity Management (IdM) running in Red Hat Enterprise Linux 8 does not store the time stamp of the last successful Kerberos authentication of a user. As a consequence, certain commands, such as ipa user-status , do not display the time stamp. Prerequisites You have obtained the ticket-granting ticket (TGT) of an administrative user in IdM. You have root access to the IdM server on which you are executing the procedure. Procedure Display the currently enabled password plug-in features: The output shows that the KDC:Disable Last Success plug-in is enabled. The plug-in hides the last successful Kerberos authentication attempt from being visible in the ipa user-status output. Add the --ipaconfigstring= feature parameter to the ipa config-mod command for every feature that is currently enabled, except for KDC:Disable Last Success : This command enables only the AllowNThash plug-in. To enable multiple features, specify the --ipaconfigstring= feature parameter separately for each feature. Restart IdM:
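As a hedged illustration of the multiple-feature case: if, for example, the KDC:Disable Lockout feature were also enabled and should stay enabled, each feature to keep is passed as its own --ipaconfigstring parameter. The feature names below other than AllowNThash are assumptions for illustration; copy the actual names from the output of the config-show command in the first step.
# Keep every currently enabled feature except KDC:Disable Last Success.
ipa config-mod --ipaconfigstring='AllowNThash' --ipaconfigstring='KDC:Disable Lockout'
# Restart IdM services so that the change takes effect.
ipactl restart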
[ "pwdhash -D /etc/dirsrv/slapd-IDM-EXAMPLE-COM password {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW", "ipactl stop", "nsslapd-rootpw: {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW", "ipactl start", "ipa user-mod idm_user --password Password: Enter Password again to verify: -------------------- Modified user \"idm_user\" --------------------", "ldapmodify -x -D \"cn=Directory Manager\" -W -h server.idm.example.com -p 389 Enter LDAP Password:", "dn: cn=ipa_pwd_extop,cn=plugins,cn=config", "changetype: modify", "add: passSyncManagersDNs", "passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com", "ldapmodify -x -D \"cn=Directory Manager\" -W -h server.idm.example.com -p 389 Enter LDAP Password: dn: cn=ipa_pwd_extop,cn=plugins,cn=config changetype: modify add: passSyncManagersDNs passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com", "ipa user-status example_user ----------------------- Account disabled: False ----------------------- Server: idm.example.com Failed logins: 8 Last successful authentication: N/A Last failed authentication: 20220229080317Z Time now: 2022-02-29T08:04:46Z ---------------------------- Number of entries returned 1 ----------------------------", "ipa user-unlock idm_user ----------------------- Unlocked account \"idm_user\" -----------------------", "ipa config-show | grep \"Password plugin features\" Password plugin features: AllowNThash , KDC:Disable Last Success", "ipa config-mod --ipaconfigstring='AllowNThash'", "ipactl restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-user-passwords-in-idm_managing-users-groups-hosts
Deploying OpenShift Data Foundation on VMware vSphere
Deploying OpenShift Data Foundation on VMware vSphere Red Hat OpenShift Data Foundation 4.15 Instructions on deploying OpenShift Data Foundation using VMware vSphere infrastructure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on VMware vSphere clusters.
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_on_vmware_vsphere/index