Chapter 8. Known issues
Chapter 8. Known issues This section describes known issues in AMQ Broker 7.11. ENTMQBR-8106 - AMQ Broker Drainer pod doesn't function properly after changing MessageMigration in CR You cannot change the value of the messageMigration attribute in a running broker deployment. To work around this issue, you must set the required value for the messageMigration attribute in a new ActiveMQ Artemis CR and create a new broker deployment. ENTMQBR-8166 - Self-signed certificate with UseClientAuth=true prevents communication of Operator with Jolokia If the useClientAuth attribute is set to true in the console section of the ActiveMQ Artemis CR, the Operator is unable to configure certain features, for example, create addresses, on the broker. In the Operator log, you see an error message that ends with remote error: tls: bad certificate . ENTMQBR-7385 - Message flops around the federation queue on slow consumers If the local application consumers are very slow or unable to consume messages, a message can be sent back and forth over a federated connection a large number of times before it is finally consumed by an application consumer. ENTMQBR-7820 - [Operator] Supported versions listed in 7.11.0 OPR1 operator log are incorrect The Operator log lists support for the following AMQ Broker image versions: 7.10.0, 7.10.1, 7.10.2, 7.11.0, 7.8.1, 7.8.2, 7.8.3, 7.9.0, 7.9.1, 7.9.2, 7.9.3, and 7.9.4. The Operator actually supports AMQ Broker image versions beginning with 7.10.0. ENTMQBR-7359 - Change to current handling of credential secret with 7.10.0 Operator The Operator stores the administrator username and password for connecting to the broker in a secret. The default secret name is in the form <custom-resource-name>-credentials-secret . You can create a secret manually or allow the Operator to create a secret. If the adminUser and adminPassword attributes are configured in a Custom Resource prior to 7.10.0, the Operator updates a manually-created secret with the values of these attributes. Starting in 7.10.0, the Operator no longer updates a secret that was created manually. Therefore, if you change the values of the adminUser and adminPassword attributes in the CR, you must either update the secret with the new username and password, or delete the secret and allow the Operator to create a secret. When the Operator creates a secret, it adds the values of the adminUser and adminPassword attributes if these are specified in the CR. If these attributes are not in the CR, the Operator generates random credentials for the secret. ENTMQBR-7111 - 7.10 versions of operator tend to remove StatefulSet during upgrade If you are upgrading to or from AMQ Broker Operator 7.10.0, the new Operator automatically deletes the existing StatefulSet for each deployment during the reconciliation process. When the Operator deletes the StatefulSet, the existing broker pods are deleted, which causes a temporary broker outage. You can work around this issue by running the following command to manually delete the StatefulSet and orphan the running pods before the Operator can delete the StatefulSet: oc delete statefulset <statefulset-name> --cascade=orphan Manually deleting the StatefulSet during the upgrade process allows the new Operator to reconcile the StatefulSet without deleting the running pods. For more information, see Upgrading the Operator using OperatorHub in Deploying AMQ Broker on OpenShift .
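The following is a minimal shell sketch of the orphan-delete workaround for ENTMQBR-7111 described above. The project and StatefulSet names are placeholders; substitute the values from your own deployment.

# List the StatefulSets in the project to find the one that belongs to your broker deployment
oc get statefulsets -n <project-name>

# Delete the StatefulSet but orphan its Pods so the running brokers are not terminated
oc delete statefulset <statefulset-name> -n <project-name> --cascade=orphan

# Confirm that the broker Pods are still running while the new Operator recreates the StatefulSet
oc get pods -n <project-name>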
ENTMQBR-6473 - Incompatible configuration due to schema URL change When you try to use a broker instance configuration from an earlier release with a version 7.9 or 7.10 instance, an incompatible configuration resulting from a schema URL change causes the broker to crash. To work around this issue, update the schema URL in the relevant configuration files as outlined in Upgrading from 7.9.0 to 7.10.0 on Linux . ENTMQBR-4813 - AsynchronousCloseException with large messages and multiple C++ subscribers If multiple C++ publisher clients that use the AMQP protocol are running on the same host as subscribers and the broker, and a publisher sends a large message, one of the subscribers crashes. ENTMQBR-5749 - Remove unsupported operators that are visible in OperatorHub Only the Operators and Operator channels mentioned in Deploying the Operator from OperatorHub are supported. For technical reasons associated with Operator publication, other Operators and channels are visible in OperatorHub and should be ignored. For reference, the following list shows which Operators are visible, but not supported: Red Hat Integration - AMQ Broker LTS - all channels Red Hat Integration - AMQ Broker - alpha, current, and current-76 ENTMQBR-569 - Conversion of IDs from OpenWire to AMQP results in sending IDs as binary When communicating cross-protocol from an A-MQ 6 OpenWire client to an AMQP client, additional information is encoded in the application message properties. This is benign information used internally by the broker and can be ignored. ENTMQBR-655 - [AMQP] Unable to send message when populate-validated-user is enabled The configuration option populate-validated-user is not supported for messages produced using the AMQP protocol. ENTMQBR-1875 - [AMQ 7, ha, replicated store] backup broker appear not to go "live" or shutdown after - ActiveMQIllegalStateException errorType=ILLEGAL_STATE message=AMQ119026: Backup Server was not yet in sync with live Removing the paging disk of a primary broker while a backup broker is trying to sync with the primary broker causes the primary broker to fail. In addition, the backup broker cannot become live because it continues trying to sync with the primary broker. ENTMQBR-2068 - some messages received but not delivered during HA fail-over, fail-back scenario Currently, if a broker fails over to its backup while an OpenWire client is sending messages, messages being delivered to the broker when failover occurs could be lost. To work around this issue, ensure that the broker persists the messages before acknowledging them. ENTMQBR-3331 - Stateful set controller can't recover from CreateContainerError, blocking the operator If the AMQ Broker Operator creates a stateful set from a Custom Resource (CR) that has a configuration error, the stateful set controller is unable to roll out the updated stateful set when the error is resolved. For example, a misspelling in the value of the image attribute in your main broker CR causes the status of the first Pod created by the stateful set controller to remain Pending . If you then fix the misspelling and apply the CR changes, the AMQ Broker Operator updates the stateful set. However, a Kubernetes known issue prevents the stateful set controller from rolling out the updated stateful set. The controller waits indefinitely for the Pod that has a Pending status to become Ready , so the new Pods are not deployed. To work around this issue, you must delete the Pod that has a Pending status to allow the stateful set controller to deploy the new Pods.
To check which Pod has a Pending status, use the following command: oc get pods --field-selector=status.phase=Pending . To delete a Pod, use the oc delete pod <pod name> command. ENTMQBR-3846 - MQTT client does not reconnect on broker restart When you restart a broker, or a broker fails over, the active broker does not restore connections for previously-connected MQTT clients. To work around this issue, you must manually call the subscribe() method on the MQTT client to reconnect it. ENTMQBR-4127 - AMQ Broker Operator: Route name generated by Operator might be too long for OpenShift For each broker Pod in an Operator-based deployment, the default name of the Route that the Operator creates for access to the AMQ Broker management console includes the name of the Custom Resource (CR) instance, the name of the OpenShift project, and the name of the OpenShift cluster. For example, my-broker-deployment-wconsj-0-svc-rte-my-openshift-project.my-openshift-domain . If some of these names are long, the default Route name might exceed the limit of 63 characters that OpenShift enforces. In this case, in the OpenShift Container Platform web console, the Route shows a status of Rejected . To work around this issue, use the OpenShift Container Platform web console to manually edit the name of the Route. In the console, click the Route. On the Actions drop-down menu in the top-right corner, select Edit Route . In the YAML editor, find the spec.host property and edit the value. ENTMQBR-4140 - AMQ Broker Operator: Installation becomes unusable if storage.size is improperly specified If you configure the storage.size property of a Custom Resource (CR) instance to specify the size of the Persistent Volume Claim (PVC) required by brokers in a deployment for persistent storage, the Operator installation becomes unusable if you do not specify this value properly. For example, suppose that you set the value of storage.size to 1 (that is, without specifying a unit). In this case, the Operator cannot use the CR to create a broker deployment. In addition, even if you remove the CR and deploy a new version with storage.size specified correctly, the Operator still cannot use this CR to create a deployment as expected. To work around this issue, first stop the Operator. In the OpenShift Container Platform web console, click Deployments . For the Pod that corresponds to the AMQ Broker Operator, click the More options menu (three vertical dots). Click Edit Pod Count and set the value to 0 . When the Operator Pod has stopped, create a new version of the CR with storage.size correctly specified. Then, to restart the Operator, click Edit Pod Count again and set the value back to 1 . ENTMQBR-4141 - AMQ Broker Operator: Increasing Persistent Volume size requires manual involvement even after recreating Stateful Set If you try to increase the size of the Persistent Volume Claim (PVC) required by brokers in a deployment for persistent storage, the change does not take effect without further manual steps. For example, suppose that you configure the storage.size property of a Custom Resource (CR) instance to specify an initial size for the PVC. If you modify the CR to specify a different value of storage.size , the existing brokers continue to use the original PVC size. This is the case even if you scale the deployment down to zero brokers and then back up to the original number. However, if you scale the size of the deployment up to add additional brokers, the new brokers use the new PVC size.
To work around this issue, and ensure that all brokers in the deployment use the same PVC size, use the OpenShift Container Platform web console to expand the PVC size used by the deployment. In the console, click Storage > Persistent Volume Claims . Click your deployment. On the Actions drop-down menu in the top-right corner, select Expand PVC and enter a new value.
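The same PVC expansion can also be sketched with the oc client, assuming the storage class used by the deployment supports volume expansion; the project, PVC name, and size below are placeholders rather than values from the product documentation.

# List the PVCs used by the broker deployment
oc get pvc -n <project-name>

# Request a larger size for a PVC; repeat for each PVC in the deployment
oc patch pvc <pvc-name> -n <project-name> --type merge -p '{"spec":{"resources":{"requests":{"storage":"<new-size>"}}}}'

# Verify that the new size is reported for the PVC
oc get pvc <pvc-name> -n <project-name>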
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/release_notes_for_red_hat_amq_broker_7.11/known
Chapter 50. Managing public SSH keys for users and hosts
Chapter 50. Managing public SSH keys for users and hosts SSH (Secure Shell) is a protocol which provides secure communications between two systems using a client-server architecture. SSH allows users to log in to server host systems remotely and also allows one host machine to access another machine. 50.1. About the SSH key format IdM accepts the following two SSH key formats: OpenSSH-style key Raw RFC 4253-style key Note that IdM automatically converts RFC 4253-style keys into OpenSSH-style keys before saving them into the IdM LDAP server. The IdM server can identify the type of key, such as an RSA or DSA key, from the uploaded key blob. In a key file such as ~/.ssh/known_hosts , a key entry is identified by the hostname and IP address of the server, its type, and the key. For example: This is different from a user public key entry, which has the elements in the order type key== comment : A key file, such as id_rsa.pub , consists of three parts: the key type, the key, and an additional comment or identifier. When uploading a key to IdM, you can upload all three key parts or only the key. If you only upload the key, IdM automatically identifies the key type, such as RSA or DSA, from the uploaded key. If you use the host public key entry from the ~/.ssh/known_hosts file, you must reorder it to match the format of a user key, type key== comment : IdM can determine the key type automatically from the content of the public key. The comment is optional and makes identifying individual keys easier. The only required element is the public key blob. IdM uses public keys stored in the following OpenSSH-style files: Host public keys are in the known_hosts file. User public keys are in the authorized_keys file. Additional resources See RFC 4716 See RFC 4253 50.2. About IdM and OpenSSH During an IdM server or client installation, as part of the install script: An OpenSSH server and client are configured on the IdM client machine. SSSD is configured to store and retrieve user and host SSH keys in cache. This allows IdM to serve as a universal and centralized repository of SSH keys. If you enable the SSH service during the client installation, an RSA key is created when the SSH service is started for the first time. Note When you run the ipa-client-install install script to add the machine as an IdM client, the client is created with two SSH keys, RSA and DSA. As part of the installation, you can configure the following: Configure OpenSSH to automatically trust the IdM DNS records where the key fingerprints are stored using the --ssh-trust-dns option. Disable OpenSSH and prevent the install script from configuring the OpenSSH server using the --no-sshd option. Prevent the host from creating DNS SSHFP records with its own DNS entries using the --no-dns-sshfp option. If you do not configure the server or client during installation, you can manually configure SSSD later. For information on how to manually configure SSSD, see Configuring SSSD to Provide a Cache for the OpenSSH Services . Note that caching SSH keys by SSSD requires administrative privileges on the local machines. 50.3. Generating SSH keys You can generate an SSH key by using the OpenSSH ssh-keygen utility. Procedure To generate an RSA SSH key, run the following command: Note If generating a host key, replace [email protected] with the required hostname, such as server.example.com,1.2.3.4 . Specify the file where you are saving the key or press Enter to accept the displayed default location. 
Note If generating a host key, save the key to a different location than the user's ~/.ssh/ directory so you do not overwrite any existing keys. For example, /home/user/.ssh/host_keys . Specify a passphrase for your private key or press Enter to leave the passphrase blank. To upload this SSH key, use the public key string stored in the displayed file. 50.4. Managing public SSH keys for hosts OpenSSH uses public keys to authenticate hosts. One machine attempts to access another machine and presents its key pair. The first time the host authenticates, the administrator on the target machine has to approve the request manually. The machine then stores the host's public key in a known_hosts file. Any time that the remote machine attempts to access the target machine again, the target machine checks its known_hosts file and then grants access automatically to approved hosts. 50.4.1. Uploading SSH keys for a host using the IdM Web UI Identity Management allows you to upload a public SSH key to a host entry. OpenSSH uses public keys to authenticate hosts. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure You can retrieve the key for your host from a ~/.ssh/known_hosts file. For example: You can also generate a host key. See Generating SSH keys . Copy the public key from the key file. The full key entry has the form host name,IP type key== . Only the key== is required, but you can store the entire entry. To use all elements in the entry, rearrange the entry so it has the order type key== [host name,IP] . Log into the IdM Web UI. Go to the Identity > Hosts tab. Click the name of the host to edit. In the Host Settings section, click the SSH public keys Add button. Paste the public key for the host into the SSH public key field. Click Set . Click Save at the top of the IdM Web UI window. Verification Under the Host Settings section, verify the key is listed under SSH public keys . 50.4.2. Uploading SSH keys for a host using the IdM CLI Identity Management allows you to upload a public SSH key to a host entry. OpenSSH uses public keys to authenticate hosts. Host SSH keys are added to host entries in IdM when the host is created using host-add, or by modifying the entry later. Note RSA and DSA host keys are created by the ipa-client-install command, unless the SSH service is explicitly disabled in the installation script. Prerequisites Administrator privileges for managing IdM or User Administrator role. Procedure Run the host-mod command with the --sshpubkey option to upload the base64-encoded public key to the host entry. Because adding a host key changes the DNS Secure Shell fingerprint (SSHFP) record for the host, use the --updatedns option to update the host's DNS entry. For example: A real key also usually ends with an equal sign (=) but is longer. To upload more than one key, enter multiple --sshpubkey command-line parameters: Note A host can have multiple public keys. After uploading the host keys, configure SSSD to use Identity Management as one of its identity domains and set up OpenSSH to use the SSSD tools for managing host keys, covered in Configuring SSSD to Provide a Cache for the OpenSSH Services . Verification Run the ipa host-show command to verify that the SSH public key is associated with the specified host: 50.4.3. Deleting SSH keys for a host using the IdM Web UI You can remove the host keys once they expire or are no longer valid. Follow the steps below to remove an individual host key by using the IdM Web UI. 
Prerequisites Administrator privileges for managing the IdM Web UI or Host Administrator role. Procedure Log into the IdM Web UI. Go to the Identity > Hosts tab. Click the name of the host to edit. Under the Host Settings section, click Delete next to the SSH public key you want to remove. Click Save at the top of the page. Verification Under the Host Settings section, verify the key is no longer listed under SSH public keys . 50.4.4. Deleting SSH keys for a host using the IdM CLI You can remove the host keys once they expire or are no longer valid. Follow the steps below to remove an individual host key by using the IdM CLI. Prerequisites Administrator privileges for managing the IdM CLI or Host Administrator role. Procedure To delete all SSH keys assigned to a host account, add the --sshpubkey option to the ipa host-mod command without specifying any key: Note that it is good practice to use the --updatedns option to update the host's DNS entry. IdM determines the key type automatically from the key if the type is not included in the uploaded key. Verification Run the ipa host-show command to verify that the SSH public key is no longer associated with the specified host: 50.5. Managing public SSH keys for users Identity Management allows you to upload a public SSH key to a user entry. The user who has access to the corresponding private SSH key can use SSH to log into an IdM machine without using Kerberos credentials. Note that users can still authenticate by providing their Kerberos credentials if they are logging in from a machine where their private SSH key file is not available. 50.5.1. Uploading SSH keys for a user using the IdM Web UI Identity Management allows you to upload a public SSH key to a user entry. The user who has access to the corresponding private SSH key can use SSH to log into an IdM machine without using Kerberos credentials. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure Log into the IdM Web UI. Go to the Identity > Users tab. Click the name of the user to edit. In the Account Settings section, click the SSH public keys Add button. Paste the base64-encoded public key string into the SSH public key field. Click Set . Click Save at the top of the IdM Web UI window. Verification Under the Account Settings section, verify the key is listed under SSH public keys . 50.5.2. Uploading SSH keys for a user using the IdM CLI Identity Management allows you to upload a public SSH key to a user entry. The user who has access to the corresponding private SSH key can use SSH to log into an IdM machine without using Kerberos credentials. Prerequisites Administrator privileges for managing the IdM CLI or User Administrator role. Procedure Run the ipa user-mod command with the --sshpubkey option to upload the base64-encoded public key to the user entry. Note In this example, you upload the key type, the key, and the hostname identifier to the user entry. To upload multiple keys, use --sshpubkey multiple times. For example, to upload two SSH keys: To use command redirection and point to a file that contains the key instead of pasting the key string manually, use the following command: Verification Run the ipa user-show command to verify that the SSH public key is associated with the specified user: 50.5.3. Deleting SSH keys for a user using the IdM Web UI Follow this procedure to delete an SSH key from a user profile in the IdM Web UI. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. 
Procedure Log into the IdM Web UI. Go to the Identity > Users tab. Click the name of the user to edit. Under the Account Settings section, under SSH public key , click Delete next to the key you want to remove. Click Save at the top of the page. Verification Under the Account Settings section, verify the key is no longer listed under SSH public keys . 50.5.4. Deleting SSH keys for a user using the IdM CLI Follow this procedure to delete an SSH key from a user profile by using the IdM CLI. Prerequisites Administrator privileges for managing the IdM CLI or User Administrator role. Procedure To delete all SSH keys assigned to a user account, add the --sshpubkey option to the ipa user-mod command without specifying any key: To only delete a specific SSH key or keys, use the --sshpubkey option to specify the keys you want to keep, omitting the key you are deleting. Verification Run the ipa user-show command to verify that the SSH public key is no longer associated with the specified user:
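The following sketch summarizes the user key removal commands described above. The user name and the key string are placeholders, and the second command assumes you list every key you want to keep, since any key omitted from --sshpubkey is removed from the entry.

# Remove all SSH public keys from the user entry
ipa user-mod user --sshpubkey=

# Delete one key while keeping another: specify only the keys to keep, omitting the key to remove
ipa user-mod user --sshpubkey="ssh-rsa AAAA...== workstation.example.com"

# Verify which keys remain associated with the user
ipa user-show user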
[ "host.example.com,1.2.3.4 ssh-rsa AAA...ZZZ==", "\"ssh-rsa ABCD1234...== ipaclient.example.com\"", "ssh-rsa AAA...ZZZ== host.example.com,1.2.3.4", "ssh-keygen -t rsa -C [email protected] Generating public/private rsa key pair.", "Enter file in which to save the key (/home/user/.ssh/id_rsa):", "Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: SHA256:ONxjcMX7hJ5zly8F8ID9fpbqcuxQK+ylVLKDMsJPxGA [email protected] The key's randomart image is: +---[RSA 3072]----+ | ..o | | .o + | | E. . o = | | ..o= o . + | | +oS. = + o.| | . .o .* B =.+| | o + . X.+.= | | + o o.*+. .| | . o=o . | +----[SHA256]-----+", "server.example.com,1.2.3.4 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEApvjBvSFSkTU0WQW4eOweeo0DZZ08F9Ud21xlLy6FOhzwpXFGIyxvXZ52+siHBHbbqGL5+14N7UvElruyslIHx9LYUR/pPKSMXCGyboLy5aTNl5OQ5EHwrhVnFDIKXkvp45945R7SKYCUtRumm0Iw6wq0XD4o+ILeVbV3wmcB1bXs36ZvC/M6riefn9PcJmh6vNCvIsbMY6S+FhkWUTTiOXJjUDYRLlwM273FfWhzHK+SSQXeBp/zIn1gFvJhSZMRi9HZpDoqxLbBB9QIdIw6U4MIjNmKsSI/ASpkFm2GuQ7ZK9KuMItY2AoCuIRmRAdF8iYNHBTXNfFurGogXwRDjQ==", "cat /home/user/.ssh/host_keys.pub ssh-rsa AAAAB3NzaC1yc2E...tJG1PK2Mq++wQ== server.example.com,1.2.3.4", "ipa host-mod --sshpubkey=\"ssh-rsa RjlzYQo==\" --updatedns host1.example.com", "--sshpubkey=\"RjlzYQo==\" --sshpubkey=\"ZEt0TAo==\"", "ipa host-show client.ipa.test SSH public key fingerprint: SHA256:qGaqTZM60YPFTngFX0PtNPCKbIuudwf1D2LqmDeOcuA [email protected] (ssh-rsa)", "kinit admin ipa host-mod --sshpubkey= --updatedns host1.example.com", "ipa host-show client.ipa.test Host name: client.ipa.test Platform: x86_64 Operating system: 4.18.0-240.el8.x86_64 Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Member of host-groups: ipaservers Roles: helpdesk Member of netgroups: test Member of Sudo rule: test2 Member of HBAC rule: test Keytab: True Managed by: client.ipa.test, server.ipa.test Users allowed to retrieve keytab: user1, user2, user3", "ipa user-mod user --sshpubkey=\"ssh-rsa AAAAB3Nza...SNc5dv== client.example.com\"", "--sshpubkey=\"AAAAB3Nza...SNc5dv==\" --sshpubkey=\"RjlzYQo...ZEt0TAo=\"", "ipa user-mod user --sshpubkey=\"USD(cat ~/.ssh/id_rsa.pub)\" --sshpubkey=\"USD(cat ~/.ssh/id_rsa2.pub)\"", "ipa user-show user User login: user First name: user Last name: user Home directory: /home/user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1118800019 GID: 1118800019 SSH public key fingerprint: SHA256:qGaqTZM60YPFTngFX0PtNPCKbIuudwf1D2LqmDeOcuA [email protected] (ssh-rsa) Account disabled: False Password: False Member of groups: ipausers Subordinate ids: 3167b7cc-8497-4ff2-ab4b-6fcb3cb1b047 Kerberos keys available: False", "ipa user-mod user --sshpubkey=", "ipa user-show user User login: user First name: user Last name: user Home directory: /home/user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1118800019 GID: 1118800019 Account disabled: False Password: False Member of groups: ipausers Subordinate ids: 3167b7cc-8497-4ff2-ab4b-6fcb3cb1b047 Kerberos keys available: False" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-public-ssh-keys_managing-users-groups-hosts
Manage Red Hat Quay
Manage Red Hat Quay Red Hat Quay 3.12 Manage Red Hat Quay Red Hat OpenShift Documentation Team
[ "FEATURE_USER_METADATA: true", "sudo podman run --rm -it --name quay_config -p 8080:8080 registry.redhat.io/quay/quay-rhel8:v3.12.8 config secret", "curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq", "{ \"config.yaml\": { \"AUTHENTICATION_TYPE\": \"Database\", \"AVATAR_KIND\": \"local\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DEFAULT_TAG_EXPIRATION\": \"2w\", \"EXTERNAL_TLS_TERMINATION\": false, \"FEATURE_ACTION_LOG_ROTATION\": false, \"FEATURE_ANONYMOUS_ACCESS\": true, \"FEATURE_APP_SPECIFIC_TOKENS\": true, . } }", "sudo podman run --rm -it --name quay_config -p 8080:8080 -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.12.8 config secret", "curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq", "{ \"config.yaml\": { . \"BROWSER_API_CALLS_XHR_ONLY\": false, \"BUILDLOGS_REDIS\": { \"host\": \"quay-server\", \"password\": \"strongpassword\", \"port\": 6379 }, \"DATABASE_SECRET_KEY\": \"4b1c5663-88c6-47ac-b4a8-bb594660f08b\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DB_URI\": \"postgresql://quayuser:quaypass@quay-server:5432/quay\", \"DEFAULT_TAG_EXPIRATION\": \"2w\", . } }", "curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data ' { \"config.yaml\": { . \"BROWSER_API_CALLS_XHR_ONLY\": false, \"BUILDLOGS_REDIS\": { \"host\": \"quay-server\", \"password\": \"strongpassword\", \"port\": 6379 }, \"DATABASE_SECRET_KEY\": \"4b1c5663-88c6-47ac-b4a8-bb594660f08b\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DB_URI\": \"postgresql://quayuser:quaypass@quay-server:5432/quay\", \"DEFAULT_TAG_EXPIRATION\": \"2w\", . } } http://quay-server:8080/api/v1/config/validate | jq", "curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data ' { \"config.yaml\": { } } http://quay-server:8080/api/v1/config/validate | jq", "[ { \"FieldGroup\": \"Database\", \"Tags\": [ \"DB_URI\" ], \"Message\": \"DB_URI is required.\" }, { \"FieldGroup\": \"DistributedStorage\", \"Tags\": [ \"DISTRIBUTED_STORAGE_CONFIG\" ], \"Message\": \"DISTRIBUTED_STORAGE_CONFIG must contain at least one storage location.\" }, { \"FieldGroup\": \"HostSettings\", \"Tags\": [ \"SERVER_HOSTNAME\" ], \"Message\": \"SERVER_HOSTNAME is required\" }, { \"FieldGroup\": \"HostSettings\", \"Tags\": [ \"SERVER_HOSTNAME\" ], \"Message\": \"SERVER_HOSTNAME must be of type Hostname\" }, { \"FieldGroup\": \"Redis\", \"Tags\": [ \"BUILDLOGS_REDIS\" ], \"Message\": \"BUILDLOGS_REDIS is required\" } ]", "openssl genrsa -out rootCA.key 2048", "openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "openssl genrsa -out ssl.key 2048", "openssl req -new -key ssl.key -out ssl.csr", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []:", "[req] req_extensions = v3_req distinguished_name = 
req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112", "openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf", "cp ~/ssl.cert ~/ssl.key USDQUAY/config", "cd USDQUAY/config", "SERVER_HOSTNAME: quay-server.example.com PREFERRED_URL_SCHEME: https", "cat rootCA.pem >> ssl.cert", "sudo podman stop quay", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.12.8", "sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.12.8 config secret", "sudo podman rm -f quay sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.12.8", "sudo podman login quay-server.example.com", "Error: error authenticating creds for \"quay-server.example.com\": error pinging docker registry quay-server.example.com: Get \"https://quay-server.example.com/v2/\": x509: certificate signed by unknown authority", "sudo podman login --tls-verify=false quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt", "sudo podman login quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "trust list | grep quay label: quay-server.example.com", "sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem", "sudo update-ca-trust extract", "trust list | grep quay", "cat storage.crt -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV [...] 
-----END CERTIFICATE-----", "mkdir -p quay/config/extra_ca_certs cp storage.crt quay/config/extra_ca_certs/ tree quay/config/ ├── config.yaml ├── extra_ca_certs │ ├── storage.crt", "sudo podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 5a3e82c4a75f <registry>/<repo>/quay:v3.12.8 \"/sbin/my_init\" 24 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp grave_keller", "sudo podman restart 5a3e82c4a75f", "sudo podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV", "cat ca.crt | base64 -w 0", "...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret", "custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6 quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms", "LOGS_MODEL: elasticsearch 1 LOGS_MODEL_CONFIG: producer: elasticsearch 2 elasticsearch_config: host: http://<host.elasticsearch.example>:<port> 3 port: 9200 4 access_key: <access_key> 5 secret_key: <secret_key> 6 use_ssl: True 7 index_prefix: <logentry> 8 aws_region: <us-east-1> 9", "kinesis_stream_config: stream_name: <kinesis_stream_name> 1 access_key: <aws_access_key> 2 secret_key: <aws_secret_key> 3 aws_region: <aws_region> 4", "curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/admin/token-auth/tokens_auth -d disabled=false", "curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/authorization/tokens?output_mode=json --data name=<username> --data audience=Users --data-urlencode expires_on=+30d", "LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: http://<user_name>.remote.csb 1 port: 8089 2 bearer_token: <bearer_token> 3 url_scheme: <http/https> 4 verify_ssl: False 5 index_prefix: <splunk_log_index_name> 6 ssl_ca_path: <location_to_ssl-ca-cert.pem> 7", "LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk_hec 1 splunk_hec_config: 2 host: prd-p-aaaaaq.splunkcloud.com 3 port: 8088 4 hec_token: 12345678-1234-1234-1234-1234567890ab 5 url_scheme: https 6 verify_ssl: False 7 index: quay 8 splunk_host: quay-dev 9 splunk_sourcetype: quay_logs 10", "oc create secret generic --from-file config.yaml=./config_390.yaml --from-file extra_ca_cert_splunkserver.crt=./splunkserver.crt config-bundle-secret", "LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: ec2-12-345-67-891.us-east-2.compute.amazonaws.com port: 8089 bearer_token: eyJra url_scheme: https verify_ssl: true index_prefix: quay123456 ssl_ca_path: conf/stack/splunkserver.crt", "{ \"log_data\": { \"kind\": \"authentication\", 1 \"account\": \"quayuser123\", 2 \"performer\": \"John Doe\", 3 \"repository\": \"projectQuay\", 4 \"ip\": \"192.168.1.100\", 5 \"metadata_json\": {...}, 6 \"datetime\": \"2024-02-06T12:30:45Z\" 7 } }", "psql -h <quay-server.example.com> -p 5432 -U <user_name> -d <database_name>", "psql (16.1, server 13.7) Type \"help\" for help.", "quay=> \\dt", "List of relations Schema | Name | Type | Owner --------+----------------------------+-------+---------- public | logentry | table | quayuser public | logentry2 | table | quayuser public | logentry3 | table | quayuser public | logentrykind | table | 
quayuser", "quay=> SELECT id, name FROM repository;", "id | name ----+--------------------- 3 | new_repository_name 6 | api-repo 7 | busybox", "SELECT * FROM logentry3 WHERE repository_id = <repository_id>;", "id | kind_id | account_id | performer_id | repository_id | datetime | ip | metadata_json 59 | 14 | 2 | 1 | 6 | 2024-05-13 15:51:01.897189 | 192.168.1.130 | {\"repo\": \"api-repo\", \"namespace\": \"test-org\"}", "{ \"log_data\": { \"id\": 59 1 \"kind_id\": \"14\", 2 \"account_id\": \"2\", 3 \"performer_id\": \"1\", 4 \"repository_id\": \"6\", 5 \"ip\": \"192.168.1.100\", 6 \"metadata_json\": {\"repo\": \"api-repo\", \"namespace\": \"test-org\"} 7 \"datetime\": \"2024-05-13 15:51:01.897189\" 8 } }", "mkdir /home/<user-name>/quay-poc/postgres-clairv4", "setfacl -m u:26:-wx /home/<user-name>/quay-poc/postgres-clairv4", "sudo podman run -d --name postgresql-clairv4 -e POSTGRESQL_USER=clairuser -e POSTGRESQL_PASSWORD=clairpass -e POSTGRESQL_DATABASE=clair -e POSTGRESQL_ADMIN_PASSWORD=adminpass -p 5433:5432 -v /home/<user-name>/quay-poc/postgres-clairv4:/var/lib/pgsql/data:Z registry.redhat.io/rhel8/postgresql-13:1-109", "sudo podman exec -it postgresql-clairv4 /bin/bash -c 'echo \"CREATE EXTENSION IF NOT EXISTS \\\"uuid-ossp\\\"\" | psql -d clair -U postgres'", "CREATE EXTENSION", "sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.12.8 config secret", "tar xvf quay-config.tar.gz -d /home/<user-name>/quay-poc/", "mkdir /etc/opt/clairv4/config/", "cd /etc/opt/clairv4/config/", "http_listen_addr: :8081 introspection_addr: :8088 log_level: debug indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true matcher: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable max_conn_pool: 100 migrations: true indexer_addr: clair-indexer notifier: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable delivery_interval: 1m poll_interval: 5m migrations: true auth: psk: key: \"MTU5YzA4Y2ZkNzJoMQ==\" iss: [\"quay\"] tracing and metrics trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\" metrics: name: \"prometheus\"", "sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/opt/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.12.8", "podman pull ubuntu:20.04", "sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04", "sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04", "sudo podman run -d --name mirroring-worker -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.12.8 repomirror", "sudo podman run -d --name mirroring-worker -v USDQUAY/config:/conf/stack:Z -v /root/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt:Z registry.redhat.io/quay/quay-rhel8:v3.12.8 repomirror", "--- FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: IPv6 FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false ---", "curl <quay_endpoint>/health/instance {\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}", "--- 
FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: dual-stack FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false ---", "curl --ipv4 <quay_endpoint> {\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}", "curl --ipv6 <quay_endpoint> {\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}", "AUTHENTICATION_TYPE: LDAP 1 LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com 2 LDAP_ADMIN_PASSWD: ABC123 3 LDAP_ALLOW_INSECURE_FALLBACK: false 4 LDAP_BASE_DN: 5 - dc=example - dc=com LDAP_EMAIL_ATTR: mail 6 LDAP_UID_ATTR: uid 7 LDAP_URI: ldap://<example_url>.com 8 LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) 9 LDAP_USER_RDN: 10 - ou=people LDAP_SECONDARY_USER_RDNS: 11 - ou=<example_organization_unit_one> - ou=<example_organization_unit_two> - ou=<example_organization_unit_three> - ou=<example_organization_unit_four>", "AUTHENTICATION_TYPE: LDAP FEATURE_RESTRICTED_USERS: true 1 LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) 2 LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com", "AUTHENTICATION_TYPE: LDAP LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) 1 LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com", "AUTHENTICATION_TYPE: OIDC AZURE_LOGIN_CONFIG: 1 CLIENT_ID: <client_id> 2 CLIENT_SECRET: <client_secret> 3 OIDC_SERVER: <oidc_server_address_> 4 SERVICE_NAME: Microsoft Entra ID 5 VERIFIED_EMAIL_CLAIM_NAME: <verified_email> 6", "RHSSO_LOGIN_CONFIG: 1 CLIENT_ID: <client_id> 2 CLIENT_SECRET: <client_secret> 3 OIDC_SERVER: <oidc_server_url> 4 SERVICE_NAME: <service_name> 5 SERVICE_ICON: <service_icon> 6 VERIFIED_EMAIL_CLAIM_NAME: <example_email_address> 7 PREFERRED_USERNAME_CLAIM_NAME: <preferred_username> 8 LOGIN_SCOPES: 9 - 'openid'", "AUTHENTICATION_TYPE: OIDC OIDC_LOGIN_CONFIG: CLIENT_ID: 1 CLIENT_SECRET: 2 OIDC_SERVER: 3 SERVICE_NAME: 4 PREFERRED_GROUP_CLAIM_NAME: 5 LOGIN_SCOPES: [ 'openid', '<example_scope>' ] 6 OIDC_DISABLE_USER_ENDPOINT: false 7 FEATURE_TEAM_SYNCING: true 8 FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: true 9 FEATURE_UI_V2: true", "This team is synchronized with a group in OIDC and its user membership is therefore read-only.", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Statement1\", \"Effect\": \"Allow\", \"Principal\": {}, \"Action\": 
\"sts:AssumeRole\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Statement1\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::123492922789:user/quay-user\" }, \"Action\": \"sts:AssumeRole\" } ] }", "DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> 2 storage_path: <storage_path> 3 s3_region: <region> 4 sts_user_access_key: <s3_user_access_key> 5 sts_user_secret_key: <s3_user_secret_key> 6", "podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test", "podman push <quay-server.example.com>/<organization_name>/busybox:test", "sudo podman run -d --rm -p 80:8080 -p 443:8443 -p 9091:9091 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.12.8", "curl quay.example.com:9091/metrics", "oc get services -n quay-enterprise NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.61.161 <none> 80/TCP,8089/TCP 18h example-registry-clair-postgres ClusterIP 172.30.122.136 <none> 5432/TCP 18h example-registry-quay-app ClusterIP 172.30.72.79 <none> 443/TCP,80/TCP,8081/TCP,55443/TCP 18h example-registry-quay-config-editor ClusterIP 172.30.185.61 <none> 80/TCP 18h example-registry-quay-database ClusterIP 172.30.114.192 <none> 5432/TCP 18h example-registry-quay-metrics ClusterIP 172.30.37.76 <none> 9091/TCP 18h example-registry-quay-redis ClusterIP 172.30.157.248 <none> 6379/TCP 18h", "oc debug node/master-0 sh-4.4# curl 172.30.37.76:9091/metrics HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles. TYPE go_gc_duration_seconds summary go_gc_duration_seconds{quantile=\"0\"} 4.0447e-05 go_gc_duration_seconds{quantile=\"0.25\"} 6.2203e-05", "HELP quay_user_rows number of users in the database TYPE quay_user_rows gauge quay_user_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 3 HELP quay_robot_rows number of robot accounts in the database TYPE quay_robot_rows gauge quay_robot_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 2 HELP quay_org_rows number of organizations in the database TYPE quay_org_rows gauge quay_org_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 2 HELP quay_repository_rows number of repositories in the database TYPE quay_repository_rows gauge quay_repository_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 4 HELP quay_security_scanning_unscanned_images_remaining number of images that are not scanned by the latest security scanner TYPE quay_security_scanning_unscanned_images_remaining gauge quay_security_scanning_unscanned_images_remaining{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 5", "HELP quay_queue_items_available number of queue items that have not expired TYPE quay_queue_items_available gauge quay_queue_items_available{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"63\",process_name=\"exportactionlogsworker.py\",queue_name=\"exportactionlogs\"} 0 HELP quay_queue_items_available_unlocked number of queue items that have not expired and are not locked TYPE 
quay_queue_items_available_unlocked gauge quay_queue_items_available_unlocked{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"63\",process_name=\"exportactionlogsworker.py\",queue_name=\"exportactionlogs\"} 0 HELP quay_queue_items_locked number of queue items that have been acquired TYPE quay_queue_items_locked gauge quay_queue_items_locked{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"63\",process_name=\"exportactionlogsworker.py\",queue_name=\"exportactionlogs\"} 0", "TYPE quay_gc_iterations_created gauge quay_gc_iterations_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189714e+09 HELP quay_gc_iterations_total number of iterations by the GCWorker TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189433e+09 HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 . TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.631782319018925e+09 HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189059e+09 HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0", "TYPE quay_multipart_uploads_completed_created gauge quay_multipart_uploads_completed_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823308284895e+09 HELP quay_multipart_uploads_completed_total number of multipart uploads to Quay storage that completed TYPE quay_multipart_uploads_completed_total counter quay_multipart_uploads_completed_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_multipart_uploads_started_created gauge quay_multipart_uploads_started_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823308284352e+09 HELP quay_multipart_uploads_started_total number of multipart uploads to Quay storage that started TYPE 
quay_multipart_uploads_started_total counter quay_multipart_uploads_started_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0", "HELP quay_registry_image_pushed_bytes_total number of bytes pushed to the registry TYPE quay_registry_image_pushed_bytes_total counter quay_registry_image_pushed_bytes_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"221\",process_name=\"registry:application\"} 0", "TYPE quay_authentication_attempts_created gauge quay_authentication_attempts_created{auth_kind=\"basic\",host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"221\",process_name=\"registry:application\",success=\"True\"} 1.6317843039374158e+09 HELP quay_authentication_attempts_total number of authentication attempts across the registry and API TYPE quay_authentication_attempts_total counter quay_authentication_attempts_total{auth_kind=\"basic\",host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"221\",process_name=\"registry:application\",success=\"True\"} 2", "FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 1 RESET_CHILD_MANIFEST_EXPIRATION: true", "FEATURE_GARBAGE_COLLECTION: true FEATURE_QUOTA_MANAGEMENT: true QUOTA_BACKFILL: false QUOTA_TOTAL_DELAY_SECONDS: 0 PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true", "QUOTA_BACKFILL: true", "podman pull ubuntu:18.04", "podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag1", "podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag2", "podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag1", "podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag2", "podman pull ubuntu:18.04", "podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04", "podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04", "podman pull nginx", "podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx", "podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx", "podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04", "Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... 
(2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq", "[]", "curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"limit_bytes\": 10485760}' https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg/quota | jq", "\"Created\"", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq", "[ { \"id\": 1, \"limit_bytes\": 10485760, \"default_config\": false, \"limits\": [], \"default_config_exists\": false } ]", "curl -k -X PUT -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"limit_bytes\": 104857600}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1 | jq", "{ \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [], \"default_config_exists\": false }", "podman pull ubuntu:18.04 podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' | jq", "{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false } ] }", "podman pull nginx podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 
'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true'", "{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false }, { \"namespace\": \"testorg\", \"name\": \"nginx\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 59231659, \"configured_quota\": 104857600 }, \"last_modified\": 1651229507, \"popularity\": 0, \"is_starred\": false } ] }", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq", "{ \"name\": \"testorg\", \"quotas\": [ { \"id\": 1, \"limit_bytes\": 104857600, \"limits\": [] } ], \"quota_report\": { \"quota_bytes\": 87190725, \"configured_quota\": 104857600 } }", "curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Reject\",\"threshold_percent\":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit", "curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Warning\",\"threshold_percent\":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq", "[ { \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [ { \"id\": 2, \"type\": \"Warning\", \"limit_percent\": 50 }, { \"id\": 1, \"type\": \"Reject\", \"limit_percent\": 80 } ], \"default_config_exists\": false } ]", "podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04", "Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). 
Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace", "PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true", "PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true", "FEATURE_AUTO_PRUNE: true", "DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: number_of_tags value: 2 1", "DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: creation_date value: 5d", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test2", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test3", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test4", "podman push <quay-server.example.com>/quayadmin/busybox:test", "podman push <quay-server.example.com>/<quayadmin>/busybox:test2", "podman push <quay-server.example.com>/<quayadmin>/busybox:test3", "podman push <quay-server.example.com>/<quayadmin>/busybox:test4", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test2", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test3", "podman tag docker.io/library/busybox <quay-server.example.com>/<quayadmin>/busybox:test4", "podman push <quay-server.example.com>/quayadmin/busybox:test", "podman push <quay-server.example.com>/<quayadmin>/busybox:test2", "podman push <quay-server.example.com>/<quayadmin>/busybox:test3", "podman push <quay-server.example.com>/<quayadmin>/busybox:test4", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\", \"value\": 10}' http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"7d\"}' http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/", "{\"uuid\": \"73d64f05-d587-42d9-af6d-e726a4a80d6e\"}", "{\"detail\": \"Policy for this namespace already exists, delete existing to create new policy\", \"error_message\": \"Policy for this namespace already exists, delete existing to create new policy\", 
\"error_type\": \"invalid_request\", \"title\": \"invalid_request\", \"type\": \"http://<quay-server.example.com>/api/v1/error/invalid_request\", \"status\": 400}", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/", "{\"policies\": [{\"uuid\": \"73d64f05-d587-42d9-af6d-e726a4a80d6e\", \"method\": \"creation_date\", \"value\": \"7d\"}]}", "curl -X DELETE -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/73d64f05-d587-42d9-af6d-e726a4a80d6e", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\", \"value\": 10}' http://<quay-server.example.com>/api/v1/<user>/autoprunepolicy/", "{\"uuid\": \"8c03f995-ca6f-4928-b98d-d75ed8c14859\"}", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/<user>/autoprunepolicy/8c03f995-ca6f-4928-b98d-d75ed8c14859", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/<user>/autoprunepolicy/", "{\"policies\": [{\"uuid\": \"8c03f995-ca6f-4928-b98d-d75ed8c14859\", \"method\": \"number_of_tags\", \"value\": 10}]}", "curl -X DELETE -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/<user>/autoprunepolicy/8c03f995-ca6f-4928-b98d-d75ed8c14859", "{\"uuid\": \"8c03f995-ca6f-4928-b98d-d75ed8c14859\"}", "podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/<repository_name>:test", "podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/<repository_name>:test2", "podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/<repository_name>:test3", "podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/<repository_name>:test4", "podman push <quay-server.example.com>/<organization_name>/<repository_name>:test", "podman push <quay-server.example.com>/<organization_name>/<repository_name>:test2", "podman push <quay-server.example.com>/<organization_name>/<repository_name>:test3", "podman push <quay-server.example.com>/<organization_name>/<repository_name>:test4", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\",\"value\": 2}' http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"creation_date\", \"value\": \"7d\"}' http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/", "{\"uuid\": \"ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7\"}", "{\"detail\": \"Policy for this namespace already exists, delete existing to create new policy\", \"error_message\": \"Policy for this namespace already exists, delete existing to create new policy\", \"error_type\": \"invalid_request\", \"title\": \"invalid_request\", \"type\": \"http://quay-server.example.com/api/v1/error/invalid_request\", \"status\": 400}", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/", "curl -X GET -H \"Authorization: Bearer <access_token>\" 
http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7", "{\"policies\": [{\"uuid\": \"ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7\", \"method\": \"number_of_tags\", \"value\": 10}]}", "curl -X DELETE -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7", "{\"uuid\": \"ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7\"}", "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\",\"value\": 2}' http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/", "{\"uuid\": \"7726f79c-cbc7-490e-98dd-becdc6fefce7\"}", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/", "curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/7726f79c-cbc7-490e-98dd-becdc6fefce7", "{\"policies\": [{\"uuid\": \"7726f79c-cbc7-490e-98dd-becdc6fefce7\", \"method\": \"number_of_tags\", \"value\": 2}]}", "curl -X DELETE -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/user/autoprunepolicy/7726f79c-cbc7-490e-98dd-becdc6fefce7", "{\"uuid\": \"7726f79c-cbc7-490e-98dd-becdc6fefce7\"}", "FEATURE_STORAGE_REPLICATION: true DISTRIBUTED_STORAGE_CONFIG: usstorage: - RHOCSStorage - access_key: <access_key> bucket_name: <example_bucket> hostname: my.noobaa.hostname is_secure: false port: \"443\" secret_key: <secret_key> storage_path: /datastorage/registry eustorage: - S3Storage - host: s3.amazon.com port: \"443\" s3_access_key: <access_key> s3_bucket: <example bucket> s3_secret_key: <secret_key> storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage", "DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage", "podman exec -it <container_id>", "scl enable python27 bash", "python -m util.backfillreplication", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -e QUAY_DISTRIBUTED_STORAGE_PREFERENCE=europestorage registry.redhat.io/quay/quay-rhel8:v3.12.8", "python -m util.backfillreplication", "podman ps", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 92c5321cde38 registry.redhat.io/rhel8/redis-5:1 run-redis 11 days ago Up 11 days ago 0.0.0.0:6379->6379/tcp redis 4e6d1ecd3811 registry.redhat.io/rhel8/postgresql-13:1-109 run-postgresql 33 seconds ago Up 34 seconds ago 0.0.0.0:5432->5432/tcp postgresql-quay d2eadac74fda registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.9.0-131 registry 4 seconds ago Up 4 seconds ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay", "podman exec -it postgresql-quay -- /bin/bash", "bash-4.4USD psql", "quay=# select * from imagestoragelocation;", "id | name ----+------------------- 1 | usstorage 2 | eustorage", "\\q", "bash-4.4USD python -m util.removelocation eustorage", "WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? 
[y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage", "psql -U <username> -h <hostname> -p <port> -d <database_name>", "CREATE DATABASE quay;", "\\c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm;", "sudo dnf install -y podman run -d --name redis -p 6379:6379 redis", "SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1 DB_CONNECTION_ARGS: autorollback: true threadlocals: true DB_URI: postgresql://postgres:[email protected]:5432/quay 2 BUILDLOGS_REDIS: host: 10.19.0.2 port: 6379 USER_EVENTS_REDIS: host: 10.19.0.2 port: 6379 DATABASE_SECRET_KEY: 0ce4f796-c295-415b-bf9d-b315114704b8 DISTRIBUTED_STORAGE_CONFIG: usstorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQABCDEFG bucket_name: georep-test-bucket-0 secret_key: AYWfEaxX/u84XRA2vUX5C987654321 storage_path: /quaygcp eustorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQWERTYUIOP bucket_name: georep-test-bucket-1 secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678 storage_path: /quaygcp DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage FEATURE_STORAGE_REPLICATION: true", "oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage", "python -m util.backfillreplication", "oc get pod -n <quay_namespace>", "quay390usstorage-quay-app-5779ddc886-2drh2 quay390eustorage-quay-app-66969cd859-n2ssm", "oc rsh quay390usstorage-quay-app-5779ddc886-2drh2", "sh-4.4USD python -m util.removelocation eustorage", "WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage", "podman exec quay python3 tools/generatekeypair.py quay-readonly", "cd <USDQUAY>/quay && virtualenv -v venv", "source venv/bin/activate", "venv/bin/pip install --upgrade pip", "cat << EOF > requirements-generatekeys.txt cryptography==3.4.7 pycparser==2.19 pycryptodome==3.9.4 pycryptodomex==3.9.4 pyjwkest==1.4.2 PyJWT==1.7.1 Authlib==1.0.0a2 EOF", "venv/bin/pip install -r requirements-generatekeys.txt", "PYTHONPATH=. 
venv/bin/python /<path_to_cloned_repo>/tools/generatekeypair.py quay-readonly", "Writing public key to quay-readonly.jwk Writing key ID to quay-readonly.kid Writing private key to quay-readonly.pem", "deactivate", "podman exec -it postgresql-quay psql -U postgres -d quay", "quay=# select * from servicekeyapproval;", "id | approver_id | approval_type | approved_date | notes ----+-------------+----------------------------------+----------------------------+------- 1 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:48.181347 | 2 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:55.808087 | 3 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:04.27095 | 4 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:05.46235 | 5 | 1 | ServiceKeyApprovalType.SUPERUSER | 2024-05-07 04:05:10.296796 |", "quay=# INSERT INTO servicekey (name, service, metadata, kid, jwk, created_date, expiration_date) VALUES ('quay-readonly', 'quay', '{}', '{<contents_of_.kid_file>}', '{<contents_of_.jwk_file>}', '{<created_date_of_read-only>}', '{<expiration_date_of_read-only>}');", "INSERT 0 1", "quay=# INSERT INTO servicekeyapproval ('approval_type', 'approved_date', 'notes') VALUES (\"ServiceKeyApprovalType.SUPERUSER\", \"CURRENT_DATE\", {include_notes_here_on_why_this_is_being_added});", "INSERT 0 1", "UPDATE servicekey SET approval_id = (SELECT id FROM servicekeyapproval WHERE approval_type = 'ServiceKeyApprovalType.SUPERUSER') WHERE name = 'quay-readonly';", "UPDATE 1", "podman stop <quay_container_name_on_virtual_machine_a>", "podman stop <quay_container_name_on_virtual_machine_b>", "cp quay-readonly.kid quay-readonly.pem USDQuay/config", "setfacl -m user:1001:rw USDQuay/config/*", "REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem'", "podman run -d --rm -p 80:8080 -p 443:8443 --name=quay-main-app -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}", "podman push <quay-server.example.com>/quayadmin/busybox:test", "613be09ab3c0: Preparing denied: System is currently read-only. 
Pulls will succeed but all write operations are currently suspended.", "REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem'", "podman restart <container_id>", "quay=# UPDATE servicekey SET expiration_date = 'new-date' WHERE id = servicekey_id;", "SELECT id, name, expiration_date FROM servicekey;", "mkdir /tmp/quay-backup", "podman run --name quay-app -v /opt/quay-install/config:/conf/stack:Z -v /opt/quay-install/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.12.8", "cd /opt/quay-install", "tar cvf /tmp/quay-backup/quay-backup.tar.gz *", "config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key", "podman inspect quay-app | jq -r '.[0].Config.CreateCommand | .[]' | paste -s -d ' ' - /usr/bin/podman run --name quay-app -v /opt/quay-install/config:/conf/stack:Z -v /opt/quay-install/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.12.8", "podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml", "grep DB_URI /tmp/quay-backup/quay-config.yaml", "postgresql://<username>:[email protected]/quay", "pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql", "DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name> s3_region: <region>", "export AWS_ACCESS_KEY_ID=<access_key>", "export AWS_SECRET_ACCESS_KEY=<secret_key>", "aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2", "download: s3://<user_name>/registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a to registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a download: s3://<user_name>/registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d to registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d", "mkdir /opt/new-quay-install", "cp /tmp/quay-backup/quay-backup.tar.gz /opt/new-quay-install/", "cd /opt/new-quay-install/", "tar xvf /tmp/quay-backup/quay-backup.tar.gz *", "config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key", "grep DB_URI config.yaml", "postgresql://<username>:[email protected]/quay", "sudo postgres", "psql \"host=172.24.10.50 port=5432 dbname=postgres user=<username> password=test123\" postgres=> CREATE DATABASE example_restore_registry_quay_database;", "CREATE DATABASE", "postgres=# \\c \"example-restore-registry-quay-database\";", "You are now connected to database \"example-restore-registry-quay-database\" as user \"postgres\".", "example_restore_registry_quay_database=> CREATE EXTENSION IF NOT EXISTS pg_trgm;", "CREATE EXTENSION", "\\q", "psql \"host=172.24.10.50 port=5432 dbname=example_restore_registry_quay_database user=<username> password=test123\" -W < /tmp/quay-backup/quay-backup.sql", "SET SET SET SET SET", "cat config.yaml | grep DISTRIBUTED_STORAGE_CONFIG -A10", "DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_region: <region> s3_secret_key: <s3_secret_key> host: <host_name>", "export AWS_ACCESS_KEY_ID=<access_key>", "export AWS_SECRET_ACCESS_KEY=<secret_key>", "aws s3 mb s3://<new_bucket_name> --region us-east-2", "make_bucket: quay", "aws s3 sync --no-verify-ssl --endpoint-url 
<example_endpoint_url> 1 /tmp/quay-backup/blob-backup/. s3://quay/", "upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d to s3://quay/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 to s3://quay/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec to s3://quay/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec", "DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <new_bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> s3_region: <region> host: <host_name>", "mkdir /tmp/quay-backup cp /path/to/Quay/config/directory/config.yaml /tmp/quay-backup", "pg_dump -h DB_HOST -p 5432 -d QUAY_DATABASE_NAME -U QUAY_DATABASE_USER -W -O > /tmp/quay-backup/quay-database-backup.sql", "mkdir ~/.aws/", "grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/config.yaml", "DISTRIBUTED_STORAGE_CONFIG: minio-1: - RadosGWStorage - access_key: ########## bucket_name: quay hostname: 172.24.10.50 is_secure: false port: \"9000\" secret_key: ########## storage_path: /datastorage/registry", "touch ~/.aws/credentials", "cat > ~/.aws/credentials << EOF [default] aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG EOF", "aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG", "mkdir /tmp/quay-backup/bucket-backup", "aws s3 sync --no-verify-ssl --endpoint-url https://PUBLIC_S3_ENDPOINT:PORT s3://QUAY_BUCKET/ /tmp/quay-backup/bucket-backup/", "oc scale --replicas=0 deployment quay-operator.v3.6.2 -n openshift-operators", "oc scale --replicas=0 deployment QUAY_MAIN_APP_DEPLOYMENT QUAY_MIRROR_DEPLOYMENT", "oc cp /tmp/user/quay-backup/quay-database-backup.sql quay-enterprise/quayregistry-quay-database-54956cdd54-p7b2w:/var/lib/pgsql/data/userdata", "oc get deployment quay-quay-app -o json | jq '.spec.template.spec.volumes[].projected.sources' | grep -i config-secret", "\"name\": \"QUAY_CONFIG_SECRET_NAME\"", "oc get secret quay-quay-config-secret-9t77hb84tb -o json | jq '.data.\"config.yaml\"' | cut -d '\"' -f2 | base64 -d -w0 > /tmp/quay-backup/operator-quay-config-yaml-backup.yaml", "cat /tmp/quay-backup/operator-quay-config-yaml-backup.yaml | grep -i DB_URI", "postgresql://QUAY_DATABASE_OWNER:PASSWORD@DATABASE_HOST/QUAY_DATABASE_NAME", "oc exec -it quay-postgresql-database-pod -- /bin/bash", "bash-4.4USD psql", "postgres=# DROP DATABASE \"example-restore-registry-quay-database\";", "DROP DATABASE", "postgres=# CREATE DATABASE \"example-restore-registry-quay-database\" OWNER \"example-restore-registry-quay-database\";", "CREATE DATABASE", "postgres=# \\c \"example-restore-registry-quay-database\";", "You are now connected to database \"example-restore-registry-quay-database\" as user \"postgres\".", "example-restore-registry-quay-database=# create extension pg_trgm ;", "CREATE EXTENSION", "\\q", "bash-4.4USD psql -h localhost -d \"QUAY_DATABASE_NAME\" -U QUAY_DATABASE_OWNER -W < /var/lib/pgsql/data/userdata/quay-database-backup.sql", "SET SET SET 
SET SET", "bash-4.4USD exit", "touch config-bundle.yaml", "cat /tmp/quay-backup/config.yaml | grep SECRET_KEY > /tmp/quay-backup/config-bundle.yaml", "oc create secret generic new-custom-config-bundle --from-file=config.yaml=/tmp/quay-backup/config-bundle.yaml", "oc scale --replicas=1 deployment quayregistry-quay-app deployment.apps/quayregistry-quay-app scaled", "oc scale --replicas=1 deployment quayregistry-quay-mirror deployment.apps/quayregistry-quay-mirror scaled", "oc patch quayregistry QUAY_REGISTRY_NAME --type=merge -p '{\"spec\":{\"configBundleSecret\":\"new-custom-config-bundle\"}}'", "touch credentials.yaml", "grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/operator-quay-config-yaml-backup.yaml", "cat > ~/.aws/credentials << EOF [default] aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG EOF", "oc get route s3 -n openshift-storage -o yaml -o jsonpath=\"{.spec.host}{'\\n'}\"", "aws s3 sync --no-verify-ssl --endpoint-url https://NOOBAA_PUBLIC_S3_ROUTE /tmp/quay-backup/bucket-backup/* s3://QUAY_DATASTORE_BUCKET_NAME", "oc scale -replicas=1 deployment quay-operator.v3.6.4 -n openshift-operators", "FEATURE_GENERAL_OCI_SUPPORT: true", "FEATURE_GENERAL_OCI_SUPPORT: true ALLOWED_OCI_ARTIFACT_TYPES: <oci config type 1>: - <oci layer type 1> - <oci layer type 2> <oci config type 2>: - <oci layer type 3> - <oci layer type 4>", "ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.dev.cosign.simplesigning.v1+json application/vnd.cncf.helm.config.v1+json: - application/tar+gzip application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar", "IGNORE_UNKNOWN_MEDIATYPES: true", "sudo podman logs <container_id>", "gcworker stdout | 2022-11-14 18:46:52,458 [63] [INFO] [apscheduler.executors.default] Job \"GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2022-11-14 18:47:22 UTC)\" executed successfully", "podman logs quay-app", "gunicorn-web stdout | 2022-11-14 19:23:44,574 [233] [INFO] [gunicorn.access] 192.168.0.38 - - [14/Nov/2022:19:23:44 +0000] \"DELETE /api/v1/repository/quayadmin/busybox/tag/test HTTP/1.0\" 204 0 \"http://quay-server.example.com/repository/quayadmin/busybox?tab=tags\" \"Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\"", "TYPE quay_gc_iterations_created gauge quay_gc_iterations_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189714e+09 HELP quay_gc_iterations_total number of iterations by the GCWorker TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189433e+09 HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 . 
TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.631782319018925e+09 HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189059e+09 HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0", "podman pull busybox", "podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test", "podman push quay-server.example.com/quayadmin/busybox:test", "{\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}", "BRANDING: logo: 1 footer_img: 2 footer_url: 3 --- REGISTRY_TITLE: 4 REGISTRY_TITLE_SHORT: 5" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html-single/manage_red_hat_quay/index
A.6. Troubleshooting DNS
A.6. Troubleshooting DNS Many DNS problems are caused by misconfiguration. Therefore, make sure you meet the conditions in Section 2.1.5, "Host Name and DNS Configuration" . Use the dig utility to check the response from the DNS server: Use the host utility to perform a DNS name lookup: Review the DNS records in LDAP using the ipa dnszone-show command: For details on using the IdM tools to manage DNS, see Chapter 33, Managing DNS . Restart BIND to force resynchronization with LDAP: Get a list of the required DNS records: Use the dig utility to check if the displayed records are present in DNS. If you use the Identity Management DNS, use the ipa dns-update-system-records command to update any missing records.
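If you want to script a quick check of several service records at once, you can combine the dig utility with a small shell loop. The following is a minimal sketch that assumes the example domain ipa.example.com used above; the record names shown are common IdM SRV records and are only illustrative, so compare them with the complete list printed by ipa dns-update-system-records --dry-run : for rec in _ldap._tcp _kerberos._tcp _kerberos._udp _kerberos-master._tcp; do dig +short "${rec}.ipa.example.com." SRV; done An empty answer for any of the names indicates a missing record, which ipa dns-update-system-records can restore if you use the Identity Management DNS.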
[ "dig _ldap._tcp.ipa.example.com. SRV ; <<>> DiG 9.9.4-RedHat-9.9.4-48.el7 <<>> _ldap._tcp.ipa.example.com. SRV ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17851 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 1, ADDITIONAL: 5 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;_ldap._tcp.ipa.example.com. IN SRV ;; ANSWER SECTION: _ldap._tcp.ipa.example.com. 86400 IN SRV 0 100 389 ipaserver.ipa.example.com. ;; AUTHORITY SECTION: ipa.example.com. 86400 IN NS ipaserver.ipa.example.com. ;; ADDITIONAL SECTION: ipaserver.ipa.example.com. 86400 IN A 192.0.21 ipaserver.ipa.example.com 86400 IN AAAA 2001:db8::1", "host server.ipa.example.com server.ipa.example.com. 86400 IN A 192.0.21 server.ipa.example.com 86400 IN AAAA 2001:db8::1", "ipa dnszone-show zone_name USD ipa dnsrecord-show zone_name record_name_in_the_zone", "systemctl restart named-pkcs11", "ipa dns-update-system-records --dry-run" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-gen-dns
Chapter 2. Eclipse Temurin features
Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 11 release of Eclipse Temurin includes, see OpenJDK 11.0.23 Released . New features and enhancements Review the following release notes to understand new features and feature enhancements included with the Eclipse Temurin 11.0.23 release: XML Signature secure validation mode enabled by default In OpenJDK 11.0.23, XML Signature secure validation mode is enabled by default. To control restrictions and constraints for secure validation mode, you can use the jdk.xml.dsig.secureValidationPolicy system property. If you want to disable secure validation mode, ensure that the org.jcp.xml.dsig.secureValidation property is set to Boolean.FALSE by using the DOMValidateContext.setProperty() API. Before you disable secure validation mode, ensure that you consider any associated security risks. See JDK-8259801 (JDK Bug System) . XML Security for Java updated to Apache Santuario 3.0.3 In OpenJDK 11.0.23, the XML signature implementation is based on Apache Santuario 3.0.3. This enhancement introduces the following four SHA-3-based RSA-MGF1 SignatureMethod algorithms: SHA3_224_RSA_MGF1 SHA3_256_RSA_MGF1 SHA3_384_RSA_MGF1 SHA3_512_RSA_MGF1 Because the javax.xml.crypto.dsig.SignatureMethod API cannot be modified in update releases to provide constant values for the new algorithms, use the following equivalent string literal values for these algorithms: http://www.w3.org/2007/05/xmldsig-more#sha3-224-rsa-MGF1 http://www.w3.org/2007/05/xmldsig-more#sha3-256-rsa-MGF1 http://www.w3.org/2007/05/xmldsig-more#sha3-384-rsa-MGF1 http://www.w3.org/2007/05/xmldsig-more#sha3-512-rsa-MGF1 This enhancement also introduces support for the ED25519 and ED448 elliptic curve algorithms, which are both Edwards-curve Digital Signature Algorithm (EdDSA) signature schemes. Note In contrast to the upstream community version of Apache Santuario 3.0.3, the JDK still supports the here() function. However, future support for the here() function is not guaranteed. You should avoid using here() in new XML signatures. You should also update any XML signatures that currently use here() to stop using this function. The here() function is enabled by default. To disable the here() function, ensure that the jdk.xml.dsig.hereFunctionSupported system property is set to false . See JDK-8319124 (JDK Bug System) . SystemTray.isSupported() method returns false on most Linux desktops In OpenJDK 11.0.23, the java.awt.SystemTray.isSupported() method returns false on systems that do not support the SystemTray API correctly. This enhancement is in accordance with the SystemTray API specification. The SystemTray API is used to interact with the taskbar in the system desktop to provide notifications. SystemTray might also include an icon representing an application. Due to an underlying platform issue, GNOME desktop support for taskbar icons has not worked correctly for several years. This platform issue affects the JDK's ability to provide SystemTray support on GNOME desktops. This issue typically affects systems that use GNOME Shell 44 or earlier. Note Because the lack of correct SystemTray support is a long-standing issue on some systems, this API enhancement to return false on affected systems is likely to have a minimal impact on users. See JDK-8322750 (JDK Bug System) . 
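Because jdk.xml.dsig.hereFunctionSupported is documented as a system property, it can typically be set on the java command line rather than in application code. The following is a minimal, hedged sketch; signing-app.jar is a hypothetical application archive used only for illustration: java -Djdk.xml.dsig.hereFunctionSupported=false -jar signing-app.jar The org.jcp.xml.dsig.secureValidation property, by contrast, is set per validation context through the DOMValidateContext.setProperty() API, as described above.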
Certainly R1 and E1 root certificates added In OpenJDK 11.0.23, the cacerts truststore includes two Certainly root certificates: Certificate 1 Name: Certainly Alias name: certainlyrootr1 Distinguished name: CN=Certainly Root R1, O=Certainly, C=US Certificate 2 Name: Certainly Alias name: certainlyroote1 Distinguished name: CN=Certainly Root E1, O=Certainly, C=US See JDK-8321408 (JDK Bug System) . Revised on 2024-04-26 14:29:49 UTC
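If you want to confirm that these roots are present in a given Eclipse Temurin 11.0.23 installation, you can query the bundled cacerts truststore with keytool. This is a minimal sketch that assumes the default truststore password changeit: keytool -list -cacerts -alias certainlyrootr1 -storepass changeit keytool -list -cacerts -alias certainlyroote1 -storepass changeit Each command prints the certificate fingerprint if the alias exists, or reports an error if the alias is missing.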
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.23/openjdk-temurin-features-11-0-23_openjdk
Chapter 5. Test cases
Chapter 5. Test cases After finishing the installation, it is recommended to run some basic tests to check the installation and verify how SAP HANA Multitarget System Replication is working and how it recovers from a failure. It is always a good practice to run these test cases before starting production. If possible, you can also prepare a test environment to verify the changes before applying them in production. Each test case describes: Subject of the test Test preconditions Test steps Monitoring the test Starting the test Expected result(s) Ways to return to an initial state To automatically register a former primary HANA replication site as a new secondary HANA replication site on the HANA instances that are managed by the cluster, you can use the option AUTOMATED_REGISTER=true in the SAPHana resource. For more details, refer to AUTOMATED_REGISTER . The names of the HA cluster nodes and the HANA replication sites (in brackets) used in the examples are: clusternode1 (DC1) clusternode2 (DC2) remotehost3 (DC3) The following parameters are used for configuring the HANA instances and the cluster: SID=RH2 INSTANCENUMBER=02 CLUSTERNAME=cluster1 You can also define clusternode1-2 and remotehost3 as aliases in /etc/hosts in your test environment. The tests are described in more detail, including examples and additional checks of preconditions. At the end, there are examples of how to clean up the environment to be prepared for further testing. If the distance between clusternode1-2 and remotehost3 is large, you should use --replicationMode=async instead of --replicationMode=syncmem . Consult your SAP HANA administrator to choose the right option. 5.1. Prepare the tests Before running a test, the complete environment needs to be in a correct and healthy state. Check the cluster and the database using: pcs status --full python systemReplicationStatus.py df -h An example of the pcs status --full output can be found in Check cluster status with pcs status . If there are warnings or failures in the "Migration Summary", you should clean up the cluster before you start your test. [root@clusternode1]# pcs resource clear SAPHana_RH2_02-clone Cluster Cleanup describes further ways to do this. It is important that the cluster and all the resources are started. Besides the cluster, the database should also be up and running and in sync. The easiest way to verify the proper status of the database is to check the system replication status. See also Replication Status . This should be checked on the primary database. To discover the primary node, you can check Discover Primary Database or use: pcs status | grep -E "Promoted|Master" hdbnsutil -sr_stateConfiguration Check if there is enough space on the file systems by running: # df -h Please also follow the guidelines for a system check before you continue. If the environment is clean, it is ready to run the tests. During the test, monitoring is helpful to observe progress. 5.2. Monitor the environment This section focuses on monitoring the environment during the tests and only covers the monitors that are needed to see the changes. It is recommended to run the monitors from a dedicated terminal. To be able to detect changes during the test, it is recommended to start monitoring before starting the test. In the Useful Commands section, more examples are shown. 5.2.1.
Discover the primary node You need to discover the primary node to monitor a failover or run certain commands that only provide information about the replication status when executed on the primary node. To discover the primary node, you can run the following commands as the <sid>adm user: clusternode1:rh2adm> watch -n 5 'hdbnsutil -sr_stateConfiguration | egrep -e "primary masters|^mode"' Output example, when clusternode2 is the primary database: mode: syncmem primary masters: clusternode2 A second way to identify the primary node is to run the following command as root on a cluster node: # watch -n 5 'pcs status --full' Output on the node that runs the primary database is: mode: primary 5.2.2. Check the Replication status The replication status shows the relationship between primary and secondary database nodes and the current status of the replication. To discover the replication status, you can run as the <sid>adm user: clusternode1:rh2adm> hdbnsutil -sr_stateConfiguration If you want to permanently monitor changes in the system replication status, please run the following command: clusternode1:rh2adm> watch -n 5 'python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?' This example repeatedly captures the replication status and also determines the current return code. As long as the return code (status) is 15, the replication status is fine. The other return codes are: 10: NoHSR 11: Error 12: Unknown 13: Initializing 14: Syncing 15: Active If you register a new secondary, you can run it in a separate window on the primary node, and you will see the progress of the replication. If you want to monitor a failover, you can run it in parallel on the old primary as well as on the new primary database server. For more information, please read Check SAP HANA System Replication Status . 5.2.3. Check /var/log/messages entries Pacemaker is writing a lot of information into the /var/log/messages file. During a failover, a huge number of messages are written into this message file. To be able to follow only the important messages depending on the SAP HANA resource agent, it is useful to filter the detailed activities of the pacemaker SAP resources. It is enough to check the message file on a single cluster node. For example, you can use this alias: # alias tmsl='tail -1000f /var/log/messages | egrep -s "Setting master-rsc_SAPHana_USD{SAPSYSTEMNAME}_HDBUSD{TINSTANCE}|sr_register|WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT"' Run this alias in a separate window to monitor the progress of the test. Please also check the example Monitor failover and sync state . 5.2.4. Cluster status There are several ways to check the cluster status. Check if the cluster is running: pcs cluster status Check the cluster and all resources: pcs status Check the cluster, all resources and all node attributes: pcs status --full Check the resources only: pcs resource The pcs status --full command will give you all the necessary information. To monitor changes, you can run this command together with watch. # pcs status --full If you want to see changes, you can run, in a separate window, the command watch : # watch pcs status --full An output example and further options can be found in Check cluster status . 5.2.5. Discover leftovers To ensure that your environment is ready to run the test, leftovers from tests need to be fixed or removed. 
stonith is used to fence a node in the cluster: Detect: [root@clusternode1]# pcs stonith history Fix: [root@clusternode1]# pcs stonith cleanup Multiple primary databases: Detect: clusternode1:rh2adm> hdbnsutil -sr_stateConfiguration | grep -i primary All nodes with the same primary need to be identified. Fix: clusternode1:rh2adm> re-register the wrong primary with option --force_full_replica Location Constraints caused by move: Detect: [root@clusternode1]# pcs constraint location Check the warning section. Fix: [root@clusternode1]# pcs resource clear <clone-resource-which was moved> Secondary replication relationship: Detect: on the primary database run clusternode1:rh2adm> python USD{DIR_EXECUTABLES}/python_support/systemReplicationStatus.py Fix: unregister and re-register the secondary databases. Check siteReplicationMode (same output on all SAP HANA nodes clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode Pcs property: Detect: [root@clusternode1]# pcs property config Fix: [root@clusternode1]# pcs property set <key=value> Clear maintenance_mode [root@clusternode1]# pcs property set maintenance-mode=false log_mode : Detect: clusternode1:rh2adm> python systemReplicationStatus.py Will respond in the replication status that log_mode normally is required. log_mode can be detected as described in Using hdbsql to check Inifile contents . Fix: change the log_mode to normal and restart the primary database. CIB entries: Detect: SFAIL entries in the cluster information base. Please refer to Check cluster consistency , to find and remove CIB entries. Cleanup/clear: Detect: [root@clusternode1]# pcs status --full Sometimes it shows errors or warnings. You can cleanup/clear resources and if everything is fine, nothing happens. Before running the test, you can cleanup your environment. Examples to fix: [root@clusternode1]# pcs resource clear <name-of-the-clone-resource> [root@clusternode1]# pcs resource cleanup <name-of-the-clone-resource> This is also useful if you want to check if there is an issue in an existing environment. For more information, please refer to Useful commands . 5.3. Test 1:Failover of the primary node with an active third site Subject of the test Automatic re-registration of the third site. Sync state changes to SOK after clearing. Test preconditions SAP HANA on DC1, DC2, DC3 are running. Cluster is up and running without errors or warnings. Test steps Move the SAPHana resource using the [root@clusternode1]# pcs resource move <sap-clone-resource> <target-node> command. Monitoring the test On the third site run as sidadm the command provided at the end of table.(*) On the secondary node run as root: [root@clusternode1]# watch pcs status --full Starting the test Execute the cluster command: [root@clusternode1] pcs move resource SAPHana_RH2_02-clone [root@clusternode1]# pcs resource clear SAPHana_RH2_02-clone Expected result In the monitor command on site 3 the primary master changes from clusternode1 to clusternode2. After clearing the resource the sync state will change from SFAIL to SOK . Ways to return to an initial state Run the test twice. 
(*) remotehost3:rh2adm> watch hdbnsutil -sr_state [root@clusternode1]# tail -1000f /var/log/messages |egrep -e 'SOK|SWAIT|SFAIL' Detailed description Check the initial state of your cluster as root on clusternode1 or clusternode2: [root@clusternode1]# pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Mon Sep 4 06:34:46 2023 * Last change: Mon Sep 4 06:33:04 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693809184 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1: Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled This output shows you that HANA is promoted on clusternode1 which is the primary SAP HANA server, and that the name of the clone resource is SAPHana_RH2_02-clone, which is promotable. You can run this in a separate window during the test to see the changes. [root@clusternode1]# watch pcs status --full Another way to identify the name of the SAP HANA clone resource is: [root@clusternode2]# pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * Started: [ clusternode1 clusternode2 ] * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * Promoted: [ clusternode2 ] * Unpromoted: [ clusternode1 ] To see the change of the primary server start monitoring on remotehost3 on a separate terminal window before you start the test. remotehost3:rh2adm> watch 'hdbnsutil -sr_state | grep "primary masters" The output will look like: Every 2.0s: hdbnsutil -sr_state | grep "primary masters" remotehost3: Mon Sep 4 08:47:21 2023 primary masters: clusternode1 During the test the expected output will change to clusternode2. 
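Before you actually move the resource, it is also worth running a condensed version of the checks from Prepare the tests and Discover leftovers, so that the failover is not disturbed by an old constraint, a pending fencing action, or an unhealthy replication state. The following is a minimal sketch using the SID RH2 and instance number 02 from this chapter; run the cluster commands as root on one cluster node and the replication check as rh2adm on the primary node: [root@clusternode1]# pcs constraint location [root@clusternode1]# pcs stonith history [root@clusternode1]# pcs property config | grep maintenance-mode clusternode1:rh2adm> python /usr/sap/RH2/HDB02/exe/python_support/systemReplicationStatus.py; echo Status $? There should be no location constraints for the SAPHana clone, no pending fencing actions, maintenance-mode should be unset or false, and the replication status should return 15 (Active).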
Start the test by moving the clone resource discovered above to clusternode2: [root@clusternode1]# pcs resource move SAPhana_RH2_02-clone clusternode2 The output of the monitor on remotehost3 will change to: Every 2.0s: hdbnsutil -sr_state | grep "primary masters" remotehost3: Mon Sep 4 08:50:31 2023 primary masters: clusternode2 Pacemaker creates a location constraint for moving the clone resource. This needs to be manually removed. You can see the constraint using: [root@clusternode1]# pcs constraint location This constraint needs to be removed by executing the following steps. Clear the clone resource to remove the location constraint: [root@clusternode1]# pcs resource clear SAPhana_RH2_02-clone Removing constraint: cli-prefer-SAPHana_RH2_02-clone Cleanup the resource: [root@clusternode1]# pcs resource cleanup SAPHana_RH2_02-clone Cleaned up SAPHana_RH2_02:0 on clusternode2 Cleaned up SAPHana_RH2_02:1 on clusternode1 Waiting for 1 reply from the controller ... got reply (done) Result of the test The "primary masters" monitor on remotehost3 should show an immediate switch to the new primary node. If you check the cluster status, the former secondary will be promoted, the former primary gets re-registered, and the Clone_State changes from Promoted to Undefined to WAITINGFORLPA to DEMOTED . The secondary will change the sync_state to SFAIL when the SAPHana monitor is started for the first time after the failover. Because of existing location constraints, the resource needs to be cleared, and after a short time, the sync_state of the secondary will change to SOK again. Secondary gets promoted. To restore the initial state you can simply run the test. After finishing the tests please run a Cluster Cleanup . 5.4. Test 2:Failover of the primary node with passive third site Subject of the test No registration of the third site. Failover works even if the third site is down. Test preconditions SAP HANA on DC1, DC2 is running and is stopped on DC3. Cluster is up and running without errors or warnings. Test steps Move the SAPHana resource using the pcs move command. Starting the test Execute the cluster command: [root@clusternode1]# pcs move resource SAPHana_RH2_02-clone Monitoring the test On the third site run as sidadm : % watch hdbnsutil -sr_stateConfiguration On the cluster nodes run as root: [root@clusternode1]# watch pcs status Expected result No change on DC3. Replication stays on old relationship. Ways to return to an initial state Re-register DC3 on new primary and start SAP HANA. 
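The re-registration mentioned under "Ways to return to an initial state" follows the pattern that is shown in full in the detailed description below. As a hedged sketch, where <new_primary_host> and <new_primary_site> are placeholders for the node and site that hold the primary role after the failover: remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=<new_primary_host> --remoteInstance=02 --replicationMode=async --name=DC3 --remoteName=<new_primary_site> --operationMode=logreplay --online remotehost3:rh2adm> HDB start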
Detailed description Check the initial state of your cluster as root on clusternode1 or clusternode2: [root@clusternode1]# pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Mon Sep 4 06:34:46 2023 * Last change: Mon Sep 4 06:33:04 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693809184 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1: Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled This output of this example shows you that HANA is promoted on clusternode1, which is the primary SAP HANA server, and that the name of the clone resource is SAPHana_RH2_02-clone , which is promotable. If you run test 3 before HANA, it might be promoted on clusternode2. Stop the database on remotehost3: remotehost3:rh2adm> HDB stop hdbdaemon will wait maximal 300 seconds for NewDB services finishing. Stopping instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function Stop 400 12.07.2023 11:33:14 Stop OK Waiting for stopped instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function WaitforStopped 600 2 12.07.2023 11:33:30 WaitforStopped OK hdbdaemon is stopped. 
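Before continuing, you can verify that the instance on remotehost3 is really stopped. This is a minimal sketch using the sapcontrol path shown in the stop output above; a fully stopped instance reports its processes with dispstatus GRAY: remotehost3:rh2adm> /usr/sap/RH2/SYS/exe/hdb/sapcontrol -nr 02 -function GetProcessList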
Check the primary database on remotehost3: remotehost3:rh2adm> hdbnsutil -sr_stateConfiguration| grep -i "primary masters" primary masters: clusternode2 Check the current primary in the cluster on a cluster node: [root@clusternode1]# pcs resource | grep Masters * Masters: [ clusternode2 ] Check the sr_state to see the SAP HANA System Replication relationships: clusternode2remotehost3:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 2 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC3] remotehost3 clusternode1 -> [DC1] clusternode1 clusternode1 -> [DC2] clusternode2 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC3 (syncmem/logreplay) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC3: 2 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC3: syncmem Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC3: logreplay Operation mode of DC2: logreplay Mapping: DC1 -> DC3 Mapping: DC1 -> DC2 done. The SAP HANA System Replication relations still have one primary (DC1), which is replicated to DC2 and DC3. The replication relationship on remotehost3, which is down, can be displayed using: remothost3:rh2adm> hdbnsutil -sr_stateConfiguration System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: syncmem site id: 3 site name: DC3 active primary site: 1 primary masters: clusternode1 done. The database on remotehost3 which is offline checks the entries in the global.ini file. Starting the test: Initiate a failover in the cluster, moving the SAPHana-clone-resource example: [root@clusternode1]# pcs resource move SAPHana_RH2_02-clone clusternode2 Note If SAPHana is promoted on clusternode2, you have to move the clone resource to clusternode1. The example expects that SAPHana is promoted on clusternode1. There will be no output. Similar to the former test, a location constraint will be created, which can be displayed with: [root@clusternode1]# pcs constraint location Location Constraints: Resource: SAPHana_RH2_02-clone Enabled on: Node: clusternode1 (score:INFINITY) (role:Started) Even if the cluster looks fine again, this constraint avoids another failover unless the constraint is removed. One way is to clear the resource. Clear the resource: [root@clusternode1]# pcs constraint location Location Constraints: Resource: SAPHana_RH2_02-clone Enabled on: Node: clusternode1 (score:INFINITY) (role:Started) [root@clusternode1]# pcs resource clear SAPHana_RH2_02-clone Removing constraint: cli-prefer-SAPHana_RH2_02-clone Cleanup the resource: [root@clusternode1]# pcs resource cleanup SAPHana_RH2_02-clone Cleaned up SAPHana_RH2_02:0 on clusternode2 Cleaned up SAPHana_RH2_02:1 on clusternode1 Waiting for 1 reply from the controller ... got reply (done) Check the current status. There are three ways to display the replication status, which needs to be in sync. Starting with the primary on remotehost3: remotehost3clusternode2:rh2adm> hdbnsutil -sr_stateConfiguration| grep -i primary active primary site: 1 primary masters: clusternode1 The output shows site 1 or clusternode1, which was the primary before starting the test to move the primary to clusternode2. check the system replication status on the new primary. 
First detect the new primary: [root@clusternode1]# pcs resource | grep Master * Masters: [ clusternode2 ] Here we have an inconsistency, which requires us to re-register remotehost3. You might think that if we run the test again, we might switch the primary back to the original clusternode1. In this case, we have a third way to identify if system replication is working. On the primary node, run python systemReplicationStatus.py (see Check the Replication status). If you don't see remotehost3 in this output, you have to re-register remotehost3. Before registering, please run the following on the primary node to watch the progress of the registration: clusternode2:rh2adm> watch python ${DIR_EXECUTABLES}/python_support/systemReplicationStatus.py Now you can re-register remotehost3 using this command: remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode2 --remoteInstance=${TINSTANCE} --replicationMode=async --name=DC3 --remoteName=DC2 --operationMode=logreplay --online adding site ... collecting information ... updating local ini files ... done. Even if the database on remotehost3 is not started yet, you are able to see the third site in the system replication status output. The registration can be finished by starting the database on remotehost3: remotehost3:rh2adm> HDB start StartService Impromptu CCC initialization by 'rscpCInit'. See SAP note 1266393. OK OK Starting instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function StartWait 2700 2 04.09.2023 11:36:47 Start OK The monitor started above will immediately show the synchronization of remotehost3. To switch back, run the test again. One optional test is to switch the primary to the node that is configured in the global.ini on remotehost3 and then start the database. The database might come up, but it will never be shown in the output of the system replication status unless it is re-registered. The missing entry will be immediately created, and the system replication will start as soon as the SAP HANA database is started. You can check this by executing: sidadm@clusternode1% hdbnsutil -sr_state sidadm@clusternode1% python systemReplicationStatus.py ; echo $? You can find more information in Check SAP HANA System Replication status . 5.5. Test 3: Failover of the primary node to the third site Subject of the test Failover of the primary to the third site. Third site becomes primary. Secondary will be re-registered to the third site. Test preconditions SAP HANA on DC1, DC2, DC3 is running. Cluster is up and running without errors or warnings. System Replication is in place and in sync (check % python systemReplicationStatus.py ). Test steps Put the cluster into maintenance-mode to be able to recover. Take over the HANA database from the third node using: % hdbnsutil -sr_takeover Starting the test Execute the SAP HANA command on remotehost3: remotehost3:rh2adm> hdbnsutil -sr_takeover Monitoring the test On the third site, run as sidadm: % watch hdbnsutil -sr_state Expected result Third node will become primary. Secondary node will change the primary master to remotehost3. Former primary node needs to be re-registered to the new primary. Ways to return to an initial state Run Test 4: Failback of the primary node to the first site . Detailed description Check if the databases are running using Check database and check the replication status: clusternode2:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters" The output is, for example: mode: syncmem primary masters: clusternode1 In this case, the primary database is clusternode1.
If you run the same hdbnsutil command on clusternode1, you will get: mode: primary On this primary node, you can also display the system replication status. It should look like this: Now we have a proper environment, and we can start monitoring the system replication status on all 3 nodes in separate windows. The 3 monitors should be started before the test is started. The output will change when the test is executed. So keep them running as long as the test is not completed. On the old primary node, clusternode1, run this in a separate window during the test: clusternode1:rh2adm> watch -n 5 'python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status $?' The output on clusternode1 will be: On remotehost3, run the same command: remotehost3:rh2adm> watch -n 5 'python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status $?' The response will be: this system is either not running or is not primary system replication site This will change after the test initiates the failover. The output then looks similar to the example of the primary node before the test was started. On the second node, start: clusternode2:rh2adm> watch -n 10 'hdbnsutil -sr_state | grep masters' This will show the current master clusternode1 and will switch immediately after the failover is initiated. To ensure that everything is configured correctly, please also check the global.ini . Check global.ini on DC1, DC2, and DC3: On all three nodes, the global.ini should contain: [persistent] log_mode=normal [system_replication] register_secondaries_on_takeover=true You can edit the global.ini with: clusternode1:rh2adm> vim /usr/sap/${SAPSYSTEMNAME}/SYS/global/hdb/custom/config/global.ini [Optional] Put the cluster into maintenance-mode : [root@clusternode1]# pcs property set maintenance-mode=true During the tests, you will find out that the failover works with and without setting the maintenance-mode , so you can run the first test without it. During recovery it should be set; this test simply demonstrates that the takeover works either way. Setting maintenance-mode is also an option if the primary is not accessible. Start the test: Failover to DC3. On remotehost3, please run: remotehost3:rh2adm> hdbnsutil -sr_takeover done. The test has started, and now please check the output of the previously started monitors. On clusternode1, the system replication status will lose its relationship to remotehost3 and clusternode2 (DC2): The cluster still doesn't notice this behavior. If you check the return code of the system replication status, return code 11 means error, which tells you something is wrong. If you have access, it is a good idea to enter maintenance-mode now. remotehost3 becomes the new primary, and clusternode2 (DC2) is automatically registered as a secondary of the new primary remotehost3. Example output of the system replication state of remotehost3: The return code 15 says that everything is okay there, but clusternode1 is missing. This must be re-registered manually. The former primary clusternode1 is not listed, so the replication relationship is lost. Set maintenance-mode.
If not already done before, set maintenance-mode on the cluster on one node of the cluster with the command: [root@clusternode1]# pcs property set maintenance-mode=true You can check if the maintenance-mode is active by running this command: [root@clusternode1]# pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02] (unmanaged): * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 (unmanaged) * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 (unmanaged) * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable, unmanaged): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 (unmanaged) * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 (unmanaged) * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 (unmanaged) The resources are displayed as unmanaged, which indicates that the cluster is in maintenance-mode=true . The virtual IP address is still started on clusternode1. If you want to use this IP on another node, please disable vip_RH2_02_MASTER before you set maintenance-mode=true: [root@clusternode1]# pcs resource disable vip_RH2_02_MASTER Re-register clusternode1. When you check the sr_state on clusternode1, you will see a relationship only to DC2: clusternode1:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 1 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC2] clusternode2 clusternode1 -> [DC1] clusternode1 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC2: logreplay Mapping: DC1 -> DC2 done. But when you check DC2, the primary database server is DC3, so the information from DC1 is not correct: clusternode2:rh2adm> hdbnsutil -sr_state If you check the system replication status on DC1, the return code is 12, which means unknown. So DC1 needs to be re-registered. You can use this command to register the former primary clusternode1 as a new secondary of remotehost3: clusternode1:rh2adm> hdbnsutil -sr_register --remoteHost=remotehost3 --remoteInstance=${TINSTANCE} --replicationMode=syncmem --name=DC1 --remoteName=DC3 --operationMode=logreplay --online After the registration is done, you will see on remotehost3 all three sites replicated, and the status (return code) will change to 15. If this fails, you have to manually remove the replication relationships on DC1 and DC3. Please follow the instructions described in Register Secondary . For example, list the existing relationships with: clusternode1:rh2adm> hdbnsutil -sr_state To remove an existing relationship you can use: clusternode1:rh2adm> hdbnsutil -sr_unregister --name=DC2 This is usually not necessary. We assume that test 4 will be performed after test 3, so the recovery step is to run test 4. 5.6. Test 4: Failback of the primary node to the first site Subject of the test Primary switches back to a cluster node. Failback and enable the cluster again. Re-register the third site as secondary. Test preconditions SAP HANA primary node is running on the third site. Cluster is partly running. Cluster is put into maintenance-mode .
Former cluster primary is detectable. Test steps Check the expected primary of the cluster. Failover from the DC3 node to the DC1 node. Check if the former secondary has switched to the new primary. Re-register remotehost3 as a new secondary. Set cluster maintenance-mode=false and the cluster continues to work. Monitoring the test On the new primary start: remotehost3:rh2adm> watch python ${DIR_EXECUTABLES}/python_support/systemReplicationStatus.py [root@clusternode1]# watch pcs status --full On the secondary start: clusternode:rh2adm> watch hdbnsutil -sr_state Starting the test Check the expected primary of the cluster: [root@clusternode1]# pcs resource VIP and promoted SAP HANA resources should run on the same node, which is the potential new primary. On this potential primary, run as sidadm: clusternode1:rh2adm> hdbnsutil -sr_takeover Re-register the former primary as new secondary: remotehost3:rh2adm> hdbnsutil -sr_register \ --remoteHost=clusternode1 \ --remoteInstance=${TINSTANCE} \ --replicationMode=syncmem \ --name=DC3 \ --remoteName=DC1 \ --operationMode=logreplay \ --force_full_replica \ --online The cluster continues to work after setting maintenance-mode=false . Expected result New primary is starting SAP HANA. The replication status will show all 3 sites replicated. Second cluster site gets automatically re-registered to the new primary. DR site becomes an additional replica of the database. Ways to return to an initial state Run test 3. Detailed description Check if the cluster is put into maintenance-mode : [root@clusternode1]# pcs property config maintenance-mode Cluster Properties: maintenance-mode: true If the maintenance-mode is not true, you can set it with: [root@clusternode1]# pcs property set maintenance-mode=true Check the system replication status and discover the primary database on all nodes. First of all, discover the primary database using: clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters" The output should be as follows: On clusternode1: clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters" mode: syncmem primary masters: remotehost3 On clusternode2: clusternode2:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters" mode: syncmem primary masters: remotehost3 On remotehost3: remotehost3:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters" mode: primary On all three nodes, the primary database is remotehost3. On this primary database, you have to ensure that the system replication status is active for all three nodes and the return code is 15. Check if all three sr_states are consistent. Please run on all three nodes hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode : clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode clusternode2:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode remotehost3:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode The output should be the same on all nodes: siteReplicationMode/DC3=primary siteReplicationMode/DC1=syncmem siteReplicationMode/DC2=syncmem siteOperationMode/DC3=primary siteOperationMode/DC1=logreplay siteOperationMode/DC2=logreplay Start monitoring in separate windows. On clusternode1, start: clusternode1:rh2adm> watch "python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \$?" On remotehost3, start: remotehost3:rh2adm> watch "python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \$?"
On clusternode2, start: clusternode2:rh2adm> watch "hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode" Start the test. To fail over to clusternode1, start on clusternode1: clusternode1:rh2adm> hdbnsutil -sr_takeover done. Check the output of the monitors. The monitor on clusternode1 will change to: Also important is the return code of 15. The monitor on clusternode2 will change to: Every 2.0s: hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode clusternode2: Mon Sep 4 23:35:18 2023 siteReplicationMode/DC1=primary siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC2=logreplay DC3 is gone and needs to be re-registered. On remotehost3, the systemReplicationStatus reports an error, and the return code changes to 11. Check if the cluster nodes get re-registered: clusternode1:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 1 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC2] clusternode2 clusternode1 -> [DC1] clusternode1 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC2: logreplay Mapping: DC1 -> DC2 done. The Site Mapping shows that clusternode2 (DC2) was re-registered. Check or enable the vip resource: [root@clusternode1]# pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02] (unmanaged): * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 (unmanaged) * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 (unmanaged) * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable, unmanaged): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode2 (unmanaged) * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode1 (unmanaged) * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Stopped (disabled, unmanaged) The vip resource vip_RH2_02_MASTER is stopped. To start it again run: [root@clusternode1]# pcs resource enable vip_RH2_02_MASTER Warning: 'vip_RH2_02_MASTER' is unmanaged The warning is correct because the cluster will not start any resources unless maintenance-mode=false . Stop cluster maintenance-mode . Before we stop the maintenance-mode , we should start two monitors in separate windows to see the changes. On clusternode2, run: [root@clusternode2]# watch pcs status --full On clusternode1, run: clusternode1:rh2adm> watch "python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo $?" Now you can unset the maintenance-mode on clusternode1 by running: [root@clusternode1]# pcs property set maintenance-mode=false The monitor on clusternode1 should show you that everything is now running as expected: After manual interaction, it is always good advice to clean up the cluster, as described in Cluster Cleanup . Re-register remotehost3 to the new primary on clusternode1. remotehost3 needs to be re-registered. To monitor the progress, please start on clusternode1: clusternode1:rh2adm> watch -n 5 'python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status $?'
On remotehost3, please start: remotehost3:rh2adm> watch 'hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode' Now you can re-register remotehost3 with this command: remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode1 --remoteInstance=${TINSTANCE} --replicationMode=async --name=DC3 --remoteName=DC1 --operationMode=logreplay --online The monitor on clusternode1 will change to: And the monitor of remotehost3 will change to: Now we have again 3 entries, and remotehost3 (DC3) is again a secondary site replicated from clusternode1 (DC1). Check if all nodes are part of the system replication status on clusternode1. Please run on all three nodes, hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode : clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode clusternode2:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode remotehost3:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode On all nodes, we should get the same output: siteReplicationMode/DC1=primary siteReplicationMode/DC3=async siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC3=logreplay siteOperationMode/DC2=logreplay Check pcs status --full and SOK. Run: [root@clusternode1]# pcs status --full| grep sync_state The output should be either PRIM or SOK: * hana_rh2_sync_state : PRIM * hana_rh2_sync_state : SOK Finally, the cluster status should look like this, including the sync_state PRIM and SOK: [root@clusternode1]# pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Tue Sep 5 00:18:52 2023 * Last change: Tue Sep 5 00:16:54 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693873014 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1: Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled Refer to Check cluster status and Check database to
verify that all works fine again.
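The verification steps used throughout this chapter can also be combined into one short helper. This is a sketch rather than part of the official procedure; it assumes it is run as root on the cluster node that currently runs the primary (clusternode1 after this test), that the SAP environment variables are set for the rh2adm user, and that an exit code of 15 means all secondaries are active, as in the examples above:

# Sketch: post-test verification on the cluster node running the primary.
echo "== cluster sync_state (expect PRIM and SOK) =="
pcs status --full | grep sync_state
echo "== site replication and operation modes =="
su - rh2adm -c 'hdbnsutil -sr_state --sapcontrol=1 | grep "site.*Mode"'
echo "== systemReplicationStatus exit code (expect 15) =="
su - rh2adm -c 'python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py > /dev/null; echo $?'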
[ "pcs resource clear SAPHana_RH2_02-clone", "df -h", "clusternode1:rh2adm> watch -n 5 'hdbnsutil -sr_stateConfiguration | egrep -e \"primary masters|^mode\"'", "mode: syncmem primary masters: clusternode2", "watch -n 5 'pcs status --full'", "mode: primary", "clusternode1:rh2adm> hdbnsutil -sr_stateConfiguration", "clusternode1:rh2adm> watch -n 5 'python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?'", "alias tmsl='tail -1000f /var/log/messages | egrep -s \"Setting master-rsc_SAPHana_USD{SAPSYSTEMNAME}_HDBUSD{TINSTANCE}|sr_register|WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT\"'", "pcs status --full", "watch pcs status --full", "remotehost3:rh2adm> watch hdbnsutil -sr_state tail -1000f /var/log/messages |egrep -e 'SOK|SWAIT|SFAIL'", "pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Mon Sep 4 06:34:46 2023 * Last change: Mon Sep 4 06:33:04 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693809184 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1: Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "watch pcs status --full", "pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * Started: [ clusternode1 clusternode2 ] * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * Promoted: [ clusternode2 ] * Unpromoted: [ clusternode1 ]", "remotehost3:rh2adm> watch 'hdbnsutil -sr_state | grep \"primary masters\"", "Every 2.0s: hdbnsutil -sr_state | grep \"primary masters\" remotehost3: Mon Sep 4 08:47:21 2023 primary masters: clusternode1", "pcs resource move SAPhana_RH2_02-clone clusternode2", "Every 2.0s: hdbnsutil -sr_state | grep \"primary masters\" remotehost3: Mon Sep 4 08:50:31 2023 primary masters: clusternode2", "pcs constraint location", "pcs resource 
clear SAPhana_RH2_02-clone Removing constraint: cli-prefer-SAPHana_RH2_02-clone", "pcs resource cleanup SAPHana_RH2_02-clone Cleaned up SAPHana_RH2_02:0 on clusternode2 Cleaned up SAPHana_RH2_02:1 on clusternode1 Waiting for 1 reply from the controller ... got reply (done)", "pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Mon Sep 4 06:34:46 2023 * Last change: Mon Sep 4 06:33:04 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693809184 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1: Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "remotehost3:rh2adm> HDB stop hdbdaemon will wait maximal 300 seconds for NewDB services finishing. 
Stopping instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function Stop 400 12.07.2023 11:33:14 Stop OK Waiting for stopped instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function WaitforStopped 600 2 12.07.2023 11:33:30 WaitforStopped OK hdbdaemon is stopped.", "remotehost3:rh2adm> hdbnsutil -sr_stateConfiguration| grep -i \"primary masters\" primary masters: clusternode2", "pcs resource | grep Masters * Masters: [ clusternode2 ]", "clusternode2remotehost3:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 2 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC3] remotehost3 clusternode1 -> [DC1] clusternode1 clusternode1 -> [DC2] clusternode2 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC3 (syncmem/logreplay) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC3: 2 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC3: syncmem Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC3: logreplay Operation mode of DC2: logreplay Mapping: DC1 -> DC3 Mapping: DC1 -> DC2 done.", "remothost3:rh2adm> hdbnsutil -sr_stateConfiguration System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: syncmem site id: 3 site name: DC3 active primary site: 1 primary masters: clusternode1 done.", "pcs resource move SAPHana_RH2_02-clone clusternode2", "pcs constraint location Location Constraints: Resource: SAPHana_RH2_02-clone Enabled on: Node: clusternode1 (score:INFINITY) (role:Started)", "pcs constraint location Location Constraints: Resource: SAPHana_RH2_02-clone Enabled on: Node: clusternode1 (score:INFINITY) (role:Started) pcs resource clear SAPHana_RH2_02-clone Removing constraint: cli-prefer-SAPHana_RH2_02-clone", "pcs resource cleanup SAPHana_RH2_02-clone Cleaned up SAPHana_RH2_02:0 on clusternode2 Cleaned up SAPHana_RH2_02:1 on clusternode1 Waiting for 1 reply from the controller ... 
got reply (done)", "remotehost3clusternode2:rh2adm> hdbnsutil -sr_stateConfiguration| grep -i primary active primary site: 1 primary masters: clusternode1", "pcs resource | grep Master * Masters: [ clusternode2 ]", "clusternode2:rh2adm> cdpy clusternode2:rh2adm> python USD{DIR_EXECUTABLES}/python_support/systemReplicationStatus.py |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ | |SYSTEMDB |clusternode2 |30201 |nameserver | 1 | 2 |DC2 |clusternode1 | 30201 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode2 |30207 |xsengine | 2 | 2 |DC2 |clusternode1 | 30207 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode2 |30203 |indexserver | 3 | 2 |DC2 |clusternode1 | 30203 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"1\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 2 site name: DC2", "clusternode2:rh2adm> watch python USD{DIR_EXECUTABLES}/python_support/systemReplicationStatus.py", "remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode2 --remoteInstance=USD{TINSTANCE} --replicationMode=async --name=DC3 --remoteName=DC2 --operation Mode=logreplay --online adding site collecting information updating local ini files done.", "remotehost3:rh2adm> HDB start StartService Impromptu CCC initialization by 'rscpCInit'. See SAP note 1266393. 
OK OK Starting instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function StartWait 2700 2 04.09.2023 11:36:47 Start OK", "sidadm@clusternode1% hdbnsutil -sr_state sidadm@clusternode1% python systemReplicationStatus.py ; echo USD?", "clusternode2:rh2adm> hdbnsutil -sr_state | egrep -e \"^mode:|primary masters\"", "mode: syncmem primary masters: clusternode1", "mode: primary", "clusternode1:rh2adm> cdpy clusternode1:rh2adm> python systemReplicationStatus.py |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |remotehost3 | 30201 | 3 |DC3 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |remotehost3 | 30207 | 3 |DC3 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |remotehost3 | 30203 | 3 |DC3 |YES |SYNCMEM |ACTIVE | | True | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"3\": ACTIVE status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1", "clusternode1:rh2adm> watch -n 5 'python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?'", "Every 5.0s: python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicati... 
clusternode1: Tue XXX XX HH:MM:SS 2023 |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary | Replication |Replication |Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status | Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- | ----------- |----------- |-------------- |------------ | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |remotehost3 | 30201 | 3 |DC3 |YES | ASYNC |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |remotehost3 | 30207 | 3 |DC3 |YES | ASYNC |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |remotehost3 | 30203 | 3 |DC3 |YES | ASYNC |ACTIVE | | True | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES | SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES | SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES | SYNCMEM |ACTIVE | | True | status system replication site \"3\": ACTIVE status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 Status 15", "remotehost3:rh2adm> watch -n 5 'python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?'", "this system is either not running or is not primary system replication site", "clusternode2:rh2adm> watch -n 10 'hdbnsutil -sr_state | grep masters'", "[persistent] log_mode=normal [system_replication] register_secondaries_on_takeover=true", "clusternode1:rh2adm>vim /usr/sap/USD{SAPSYSTEMNAME}/SYS/global/hdb/custom/config/global.ini", "pcs property set maintenance-mode=true", "remotehost3:rh2adm> hdbnsutil -sr_takeover done.", "Every 5.0s: python /usr/sap/RH2/HDB02/exe/python_support/systemReplicationStatus.py ; echo Status USD? clusternode1: Mon Sep 4 11:52:16 2023 |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replic ation |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |------ ---------------------- |------------ | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ERROR |Commun ication channel closed | False | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ERROR |Commun ication channel closed | False | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ERROR |Commun ication channel closed | False | status system replication site \"2\": ERROR overall system replication status: ERROR Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 Status 11", "Every 5.0s: python /usr/sap/RH2/HDB02/exe/python_support/systemReplicationStatus.py ; echo Status USD? 
remotehost3: Mon Sep 4 13:55:29 2023 |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replic ation |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |------ -------- |------------ | |SYSTEMDB |remotehost3 |30201 |nameserver | 1 | 3 |DC3 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30207 |xsengine | 2 | 3 |DC3 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30203 |indexserver | 3 | 3 |DC3 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 3 site name: DC3 Status 15", "pcs property set maintenance-mode=true", "pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02] (unmanaged): * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2node2 (unmanaged) * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1node1 (unmanaged) * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable, unmanaged): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2node2 (unmanaged) * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1node1 (unmanaged) * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1node1 (unmanaged)", "pcs resource disable vip_RH2_02_MASTER", "clusternode1:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 1 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC2] clusternode2 clusternode1 -> [DC1] clusternode1 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC2: logreplay Mapping: DC1 -> DC2 done.", "clusternode2:rh2adm> hdbnsutil -sr_state", "clusternode1:rh2adm> hdbnsutil -sr_register --remoteHost=remotehost3 --remoteInstance=USD{TINSTANCE} --replicationMode=asyncsyncmem --name=DC1 --remoteName=DC3 --operationMode=logreplay --online", "clusternode1:rh2adm> hdbnsutil -sr_state", "clusternode1:rh2adm> hdbnsutil -sr_unregister --name=DC2`", "pcs property config maintenance-mode Cluster Properties: maintenance-mode: true", "pcs property set maintenance-mode=true", "clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e \"^mode:|primary masters\"", "clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e \"^mode:|primary masters\" mode: syncmem primary masters: remotehost3", "clusternode2:rh2adm> hdbnsutil -sr_state | egrep -e \"^mode:|primary masters\" mode: syncmem primary masters: remotehost3", "remotehost3:rh2adm> hdbnsutil -sr_state | egrep -e \"^mode:|primary masters\" mode: primary", "remotehost3:rh2adm> python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary 
|Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ | |SYSTEMDB |remotehost3 |30201 |nameserver | 1 | 3 |DC3 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30207 |xsengine | 2 | 3 |DC3 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30203 |indexserver | 3 | 3 |DC3 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |SYSTEMDB |remotehost3 |30201 |nameserver | 1 | 3 |DC3 |clusternode1 | 30201 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30207 |xsengine | 2 | 3 |DC3 |clusternode1 | 30207 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30203 |indexserver | 3 | 3 |DC3 |clusternode1 | 30203 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"2\": ACTIVE status system replication site \"1\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 3 site name: DC3 echo USD? 15", "clusternode1:rh2adm>hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode clusternode2:rh2adm> hsbnsutil -sr_state --sapcontrol=1 | grep site.*Mode remotehost3:rh2adm>hsbnsutil -sr_state --sapcontrol=1 | grep site.*Mode", "siteReplicationMode/DC1=primary siteReplicationMode/DC3=async siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC3=logreplay siteOperationMode/DC2=logreplay", "clusternode1:rh2adm> watch \"python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \\USD?\"", "remotehost3:rh2adm>watch \"python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \\USD?\"", "clusternode2:rh2adm> watch \"hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode\"", "clusternode1:rh2adm> hdbnsutil -sr_takeover done.", "Every 2.0s: python systemReplicationStatus.py; echo USD? 
clusternode1: Mon Sep 4 23:34:30 2023 |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 15", "Every 2.0s: hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode clusternode2: Mon Sep 4 23:35:18 2023 siteReplicationMode/DC1=primary siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC2=logreplay", "clusternode1:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 1 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC2] clusternode2 clusternode1 -> [DC1] clusternode1 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC2: logreplay Mapping: DC1 -> DC2 done.", "pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02] (unmanaged): * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 (unmanaged) * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 (unmanaged) * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable, unmanaged): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode2 (unmanaged) * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode1 (unmanaged) * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Stopped (disabled, unmanaged)", "pcs resource enable vip_RH2_02_MASTER Warning: 'vip_RH2_02_MASTER' is unmanaged", "watch pcs status --full", "clusternode1:rh2adm> watch \"python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo USD?\"", "pcs property set maintenance-mode=false", "Every 2.0s: pcs status --full clusternode1: Tue Sep 5 00:01:17 2023 Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Tue Sep 5 00:01:17 2023 * Last change: Tue Sep 5 00:00:30 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * 
SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693872030 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1: Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "con_cluster_cleanupclusternode1:rh2adm> watch -n 5 'python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?'", "remotehost3:rh2adm> watch 'hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode'", "remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode1 --remoteInstance=USD{TINSTANCE} --replicationMode=async --name=DC3 --remoteName=DC1 --operationMode=logreplay --online", "Every 5.0s: python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD? 
clusternode1: Tue Sep 5 00:14:40 2023 |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |remotehost3 | 30201 | 3 |DC3 |YES |ASYNC |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |remotehost3 | 30207 | 3 |DC3 |YES |ASYNC |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |remotehost3 | 30203 | 3 |DC3 |YES |ASYNC |ACTIVE | | True | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"3\": ACTIVE status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 Status 15", "Every 2.0s: hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode remotehost3: Tue Sep 5 02:15:28 2023 siteReplicationMode/DC1=primary siteReplicationMode/DC3=syncmem siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC3=logreplay siteOperationMode/DC2=logreplay", "clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*ModesiteReplicationMode clusternode2:rh2adm> hsbnsutil -sr_state --sapcontrol=1 | grep site.*Mode remotehost3:rh2adm> hsbnsutil -sr_state --sapcontrol=1 | grep site.*Mode", "siteReplicationMode/DC1=primary siteReplicationMode/DC3=syncmem siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC3=logreplay siteOperationMode/DC2=logreplay", "pcs status --full| grep sync_state", "* hana_rh2_sync_state : PRIM * hana_rh2_sync_state : SOK", "pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Tue Sep 5 00:18:52 2023 * Last change: Tue Sep 5 00:16:54 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state 
: PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693873014 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1: Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_sap_hana_scale-up_multitarget_system_replication_for_disaster_recovery/asmb_test_cases_configuring-hana-scale-up-multitarget-system-replication-disaster-recovery
Chapter 4. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1]
Chapter 4. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description MutatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and may change the object. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . webhooks array Webhooks is a list of webhooks and the affected resources and operations. webhooks[] object MutatingWebhook describes an admission webhook and the resources and operations it applies to. 4.1.1. .webhooks Description Webhooks is a list of webhooks and the affected resources and operations. Type array 4.1.2. .webhooks[] Description MutatingWebhook describes an admission webhook and the resources and operations it applies to. Type object Required name clientConfig sideEffects admissionReviewVersions Property Type Description admissionReviewVersions array (string) AdmissionReviewVersions is an ordered list of preferred AdmissionReview versions the Webhook expects. API server will try to use first version in the list which it supports. If none of the versions specified in this list supported by API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy. clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook failurePolicy string FailurePolicy defines how unrecognized errors from the admission endpoint are handled - allowed values are Ignore or Fail. Defaults to Fail. matchPolicy string matchPolicy defines how the "rules" list is used to match incoming requests. Allowed values are "Exact" or "Equivalent". - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the webhook. - Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the webhook. Defaults to "Equivalent" name string The name of the admission webhook. Name should be fully qualified, e.g., imagepolicy.kubernetes.io, where "imagepolicy" is the name of the webhook, and kubernetes.io is the name of the organization. Required. 
namespaceSelector LabelSelector NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the webhook. For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "runlevel", "operator": "NotIn", "values": [ "0", "1" ] } ] } If instead you want to only run the webhook on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "environment", "operator": "In", "values": [ "prod", "staging" ] } ] } See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors. Default to the empty LabelSelector, which matches everything. objectSelector LabelSelector ObjectSelector decides whether to run the webhook based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything. reinvocationPolicy string reinvocationPolicy indicates whether this webhook should be called multiple times as part of a single admission evaluation. Allowed values are "Never" and "IfNeeded". Never: the webhook will not be called more than once in a single admission evaluation. IfNeeded: the webhook will be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. Webhooks that specify this option must be idempotent, able to process objects they previously admitted. Note: * the number of additional invocations is not guaranteed to be exactly one. * if additional invocations result in further modifications to the object, webhooks are not guaranteed to be invoked again. * webhooks that use this option may be reordered to minimize the number of additional invocations. * to validate an object after all mutations are guaranteed complete, use a validating admission webhook instead. Defaults to "Never". rules array Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. rules[] object RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. sideEffects string SideEffects states whether this webhook has side effects. 
Acceptable values are: None, NoneOnDryRun (webhooks created via v1beta1 may also specify Some or Unknown). Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. timeoutSeconds integer TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Default to 10 seconds. 4.1.3. .webhooks[].clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form ( scheme://host:port/path ). Exactly one of url or service must be specified. The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#... ") and query parameters ("?... ") are not allowed, either. 4.1.4. .webhooks[].clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path which will be sent in any request to this service. port integer If specified, the port on the service that hosting webhook. Default to 443 for backward compatibility. port should be a valid port number (1-65535, inclusive). 4.1.5. .webhooks[].rules Description Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. Type array 4.1.6. .webhooks[].rules[] Description RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. 
Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 4.2. API endpoints The following API endpoints are available: /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations DELETE : delete collection of MutatingWebhookConfiguration GET : list or watch objects of kind MutatingWebhookConfiguration POST : create a MutatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations GET : watch individual changes to a list of MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} DELETE : delete a MutatingWebhookConfiguration GET : read the specified MutatingWebhookConfiguration PATCH : partially update the specified MutatingWebhookConfiguration PUT : replace the specified MutatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations/{name} GET : watch changes to an object of kind MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations Table 4.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of MutatingWebhookConfiguration Table 4.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize.
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 4.3. Body parameters Parameter Type Description body DeleteOptions schema Table 4.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind MutatingWebhookConfiguration Table 4.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.6. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a MutatingWebhookConfiguration Table 4.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.8. Body parameters Parameter Type Description body MutatingWebhookConfiguration schema Table 4.9. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 202 - Accepted MutatingWebhookConfiguration schema 401 - Unauthorized Empty 4.2.2. /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations Table 4.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} Table 4.12. Global path parameters Parameter Type Description name string name of the MutatingWebhookConfiguration Table 4.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a MutatingWebhookConfiguration Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.15. Body parameters Parameter Type Description body DeleteOptions schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MutatingWebhookConfiguration Table 4.17. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MutatingWebhookConfiguration Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.19. Body parameters Parameter Type Description body Patch schema Table 4.20. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MutatingWebhookConfiguration Table 4.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.22. Body parameters Parameter Type Description body MutatingWebhookConfiguration schema Table 4.23. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 401 - Unauthorized Empty 4.2.4. /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations/{name} Table 4.24. Global path parameters Parameter Type Description name string name of the MutatingWebhookConfiguration Table 4.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
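The field descriptions and endpoints above can be exercised together from the command line. The following is a minimal sketch, not a definitive configuration: the webhook name, namespace, service, path, and label key are placeholder values, the manifest assumes a webhook server is already serving TLS behind the referenced service, and failurePolicy is set to Ignore so that a missing endpoint does not block admission while experimenting.

# Create a MutatingWebhookConfiguration that uses the fields described in this chapter.
# All names below (example-mutator, webhook-ns, webhook-svc) are placeholders.
oc apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutator
webhooks:
- name: pods.example-mutator.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore
  timeoutSeconds: 10
  reinvocationPolicy: IfNeeded
  clientConfig:
    service:
      namespace: webhook-ns
      name: webhook-svc
      path: /mutate
      port: 443
    # caBundle: <PEM bundle, base64-encoded> is needed unless the API server
    # trust roots already validate the webhook server certificate.
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods"]
    scope: "Namespaced"
  namespaceSelector:
    matchExpressions:
    - key: runlevel
      operator: NotIn
      values: ["0", "1"]
EOF

# Read, modify, and remove the object through the endpoints listed in section 4.2.
oc get mutatingwebhookconfiguration example-mutator -o yaml
oc patch mutatingwebhookconfiguration example-mutator --type=json \
  -p '[{"op": "replace", "path": "/webhooks/0/timeoutSeconds", "value": 5}]'
oc delete mutatingwebhookconfiguration example-mutator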
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/extension_apis/mutatingwebhookconfiguration-admissionregistration-k8s-io-v1
Chapter 9. Providing public access to an instance
Chapter 9. Providing public access to an instance New instances automatically receive a port with a fixed IP address on the network that the instance is assigned to. This IP address is private and is permanently associated with the instance until the instance is deleted. The fixed IP address is used for communication between instances. You can connect a public instance directly to a shared external network where a public IP address is directly assigned to the instance. This is useful if you are working in a private cloud. You can also provide public access to an instance through a project network that has a routed connection to an external provider network. This is the preferred method if you are working in a public cloud, or when public IP addresses are limited. To provide public access through the project network, the project network must be connected to a router with the gateway set to the external network. For external traffic to reach the instance, the cloud user must associate a floating IP address with the instance. To provide access to and from an instance, whether it is connected to a shared external network or a routed provider network, you must use a security group with the required protocols, such as SSH, ICMP, or HTTP. You must also pass a key pair to the instance during creation, so that you can access the instance remotely. 9.1. Prerequisites The external network must have a subnet to provide the floating IP addresses. The project network must be connected to a router that has the external network configured as the gateway. A security group with the required protocols must be available for your project. For more information see Configuring security groups in Configuring Red Hat OpenStack Platform networking . 9.2. Securing instance access with security groups and key pairs Security groups are sets of IP filter rules that control network and protocol access to and from instances, such as ICMP to allow you to ping an instance, and SSH to allow you to connect to an instance. All projects have a default security group called default , which is used when you do not specify a security group for your instances. By default, the default security group allows all outgoing traffic and denies all incoming traffic from any source other than instances in the same security group. You can apply one or more security groups to an instance during instance creation. To apply a security group to a running instance, apply the security group to a port attached to the instance. For more information on security groups, see Configuring security groups in Configuring Red Hat OpenStack Platform networking . Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Key pairs are SSH or x509 credentials that are injected into an instance when it is launched to enable remote access to the instance. You can create new key pairs in RHOSP, or import existing key pairs. Each user should have at least one key pair. The key pair can be used for multiple instances. Note You cannot share key pairs between users in a project because each key pair belongs to the individual user that created or imported the key pair, rather than to the project. 9.2.1. 
Adding a security group to a port The default security group is applied to instances that do not specify an alternative security group. You can apply an alternative security group to a port on a running instance. Procedure Determine the port on the instance that you want to apply the security group to: Apply the security group to the port: Replace <sec_group> with the name or ID of the security group you want to apply to the port on your running instance. You can use the --security-group option more than once to apply multiple security groups, as required. 9.2.2. Removing a security group from a port To remove a security group from a port you need to first remove all the security groups, then re-add the security groups that you want to remain assigned to the port. Procedure List all the security groups associated with the port and record the IDs of the security groups that you want to remain associated with the port: Remove all the security groups associated with the port: Re-apply the security groups to the port: Replace <sec_group> with the ID of the security group that you want to re-apply to the port on your running instance. You can use the --security-group option more than once to apply multiple security groups, as required. 9.2.3. Generating a new SSH key pair You can create a new SSH key pair for use within your project. Note Use a x509 certificate to create a key pair for a Windows instance. Procedure Create the key pair and save the private key in your local .ssh directory: Replace <keypair> with the name of your new key pair. Protect the private key: 9.2.4. Importing an existing SSH key pair You can import an SSH key to your project that you created outside of the Red Hat OpenStack Platform (RHOSP) by providing the public key file when you create a new key pair. Procedure Create the key pair from the existing key file and save the private key in your local .ssh directory: To import the key pair from an existing public key file, enter the following command: Replace <public_key> with the name of the public key file that you want to use to create the key pair. Replace <keypair> with the name of your new key pair. To import the key pair from an existing private key file, enter the following command: Replace <private_key> with the name of the public key file that you want to use to create the key pair. Replace <keypair> with the name of your new key pair. Protect the private key: 9.2.5. Additional resources Configuring security groups in Configuring Red Hat OpenStack Platform networking . Project security management in Managing OpenStack Identity resources . 9.3. Assigning a floating IP address to an instance You can assign a public floating IP address to an instance to enable communication with networks outside the cloud, including the Internet. The cloud administrator configures the available pool of floating IP addresses for an external network. You can allocate a floating IP address from this pool to your project, then associate the floating IP address with your instance. Projects have a limited quota of floating IP addresses that can be used by instances in the project, 50 by default. Therefore, release IP addresses for reuse when you no longer need them. Prerequisites The instance must be on an external network, or on a project network that is connected to a router that has the external network configured as the gateway. The external network that the instance will connect to must have a subnet to provide the floating IP addresses. 
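The port and key pair procedures above can be sketched end to end as follows. The instance name myInstancewithSSH matches the example used in this chapter's command listing, while <port>, <sec_group>, and the key pair file names are placeholders to replace with values from your own environment.

# 9.2.1: find the port attached to the instance and apply an additional security group.
openstack port list --server myInstancewithSSH
openstack port set --security-group <sec_group> <port>
openstack port show <port>

# 9.2.2: removing a group means clearing all groups and re-applying the ones to keep.
openstack port set --no-security-group <port>
openstack port set --security-group <sec_group> <port>

# 9.2.3 and 9.2.4: generate a new key pair or import an existing public key,
# then restrict the permissions on the saved private key.
openstack keypair create mykeypair > ~/.ssh/mykeypair.pem
openstack keypair create --public-key ~/.ssh/id_rsa.pub imported-keypair
chmod 600 ~/.ssh/mykeypair.pem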
Procedure Check the floating IP addresses that are allocated to the current project: If there are no floating IP addresses available that you want to use, allocate a floating IP address to the current project from the external network allocation pool: Replace <provider-network> with the name or ID of the external network that you want to use to provide external access. Tip By default, a floating IP address is randomly allocated from the pool of the external network. A cloud administrator can use the --floating-ip-address option to allocate a specific floating IP address from an external network. Assign the floating IP address to an instance: Replace <instance> with the name or ID of the instance that you want to provide public access to. Replace <floating_ip> with the floating IP address that you want to assign to the instance. Optional: Replace <ip_address> with the IP address of the interface that you want to attach the floating IP to. By default, this attaches the floating IP address to the first port. Verify that the floating IP address has been assigned to the instance: Additional resources Creating floating IP pools in the Configuring Red Hat OpenStack Platform networking guide. 9.4. Disassociating a floating IP address from an instance When the instance no longer needs public access, disassociate it from the instance and return it to the allocation pool. Procedure Disassociate the floating IP address from the instance: Replace <instance> with the name or ID of the instance that you want to remove public access from. Replace <floating_ip> with the floating IP address that is assigned to the instance. Release the floating IP address back into the allocation pool: Confirm the floating IP address is deleted and is no longer available for assignment: 9.5. Creating an instance with SSH access You can provide SSH access to an instance by specifying a key pair when you create the instance. Key pairs are SSH or x509 credentials that are injected into an instance when it is launched. Each project should have at least one key pair. A key pair belongs to an individual user, not to a project. Note You cannot associate a key pair with an instance after the instance has been created. You can apply a security group directly to an instance during instance creation, or to a port on the running instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Prerequisites A key pair is available that you can use to SSH into your instances. For more information, see Generating a new SSH key pair . The network that you plan to create your instance on must be an external network, or a project network connected to a router that has the external network configured as the gateway. For more information, see Adding a router in the Configuring Red Hat OpenStack Platform networking guide. The external network that the instance connects to must have a subnet to provide the floating IP addresses. The security group allows SSH access to instances. For more information, see Securing instance access with security groups and key pairs . The image that the instance is based on contains the cloud-init package to inject the SSH public key into the instance. A floating IP address is available to assign to your instance. 
For more information, see Assigning a floating IP address to an instance . Procedure Retrieve the name or ID of the flavor that has the hardware profile that your instance requires: Note Choose a flavor with sufficient size for the image to successfully boot, otherwise the instance will fail to launch. Retrieve the name or ID of the image that has the software profile that your instance requires: If the image you require is not available, you can download or create a new image. For information about creating or downloading cloud images, see Creating images . Retrieve the name or ID of the network that you want to connect your instance to: Retrieve the name of the key pair that you want to use to access your instance remotely: Create your instance with SSH access: Replace <flavor> with the name or ID of the flavor that you retrieved in step 1. Replace <image> with the name or ID of the image that you retrieved in step 2. Replace <network> with the name or ID of the network that you retrieved in step 3. You can use the --network option more than once to connect your instance to several networks, as required. Optional: The default security group is applied to instances that do not specify an alternative security group. You can apply an alternative security group directly to the instance during instance creation, or to a port on the running instance. Use the --security-group option to specify an alternative security group when creating the instance. For information on adding a security group to a port on a running instance, see Adding a security group to a port . Replace <keypair> with the name or ID of the key pair that you retrieved in step 4. Assign a floating IP address to the instance: Replace <floating_ip> with the floating IP address that you want to assign to the instance. Use the automatically created cloud-user account to verify that you can log in to your instance by using SSH: 9.6. Additional resources Creating a network in Configuring Red Hat OpenStack Platform networking . Adding a router in Configuring Red Hat OpenStack Platform networking . Configuring security groups in Configuring Red Hat OpenStack Platform networking .
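Sections 9.3 to 9.5 combine into the sequence below. This is a sketch rather than a full procedure: <flavor>, <image>, <network>, <keypair>, <provider-network>, and <floating_ip> are placeholders for values returned by the corresponding list and create commands, and the instance name myInstancewithSSH matches the chapter's command examples.

# Look up the resources the instance needs.
openstack flavor list
openstack image list
openstack network list
openstack keypair list

# Create the instance with a key pair; --security-group is optional and defaults
# to the project's default security group when omitted.
openstack server create --flavor <flavor> --image <image> --network <network> \
  --key-name <keypair> --wait myInstancewithSSH

# Allocate a floating IP address from the external network and attach it.
openstack floating ip create <provider-network>
openstack server add floating ip myInstancewithSSH <floating_ip>
openstack server show myInstancewithSSH

# Log in with the automatically created cloud-user account.
ssh -i ~/.ssh/<keypair>.pem cloud-user@<floating_ip>

# When public access is no longer needed, detach and release the address.
openstack server remove floating ip myInstancewithSSH <floating_ip>
openstack floating ip delete <floating_ip>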
[ "openstack port list --server myInstancewithSSH", "openstack port set --security-group <sec_group> <port>", "openstack port show <port>", "openstack port set --no-security-group <port>", "openstack port set --security-group <sec_group> <port>", "openstack keypair create <keypair> > ~/.ssh/<keypair>.pem", "chmod 600 ~/.ssh/<keypair>.pem", "openstack keypair create --public-key ~/.ssh/<public_key>.pub <keypair> > ~/.ssh/<keypair>.pem", "openstack keypair create --private-key ~/.ssh/<private_key> <keypair> > ~/.ssh/<keypair>.pem", "chmod 600 ~/.ssh/<keypair>.pem", "openstack floating ip list", "openstack floating ip create <provider-network>", "openstack server add floating ip [--fixed-ip-address <ip_address>] <instance> <floating_ip>", "openstack server show <instance>", "openstack server remove floating ip <instance> <ip_address>", "openstack floating ip delete <ip_address>", "openstack floating ip list", "openstack flavor list", "openstack image list", "openstack network list", "openstack keypair list", "openstack server create --flavor <flavor> --image <image> --network <network> [--security-group <secgroup>] --key-name <keypair> --wait myInstancewithSSH", "openstack server add floating ip myInstancewithSSH <floating_ip>", "ssh -i ~/.ssh/<keypair>.pem cloud-user@<floatingIP> [cloud-user@demo-server1 ~]USD" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/assembly_providing-public-access-to-an-instance_instances
Chapter 48. mapping
Chapter 48. mapping This chapter describes the commands under the mapping command. 48.1. mapping create Create new mapping Usage: Table 48.1. Positional arguments Value Summary <name> New mapping name (must be unique) Table 48.2. Command arguments Value Summary -h, --help Show this help message and exit --rules <filename> Filename that contains a set of mapping rules (required) Table 48.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 48.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 48.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 48.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 48.2. mapping delete Delete mapping(s) Usage: Table 48.7. Positional arguments Value Summary <mapping> Mapping(s) to delete Table 48.8. Command arguments Value Summary -h, --help Show this help message and exit 48.3. mapping list List mappings Usage: Table 48.9. Command arguments Value Summary -h, --help Show this help message and exit Table 48.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 48.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 48.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 48.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 48.4. mapping set Set mapping properties Usage: Table 48.14. Positional arguments Value Summary <name> Mapping to modify Table 48.15. Command arguments Value Summary -h, --help Show this help message and exit --rules <filename> Filename that contains a new set of mapping rules 48.5. mapping show Display mapping details Usage: Table 48.16. Positional arguments Value Summary <mapping> Mapping to display Table 48.17. Command arguments Value Summary -h, --help Show this help message and exit Table 48.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 48.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 48.20. 
Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 48.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
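As a brief illustration of the subcommands above, the sketch below creates a mapping from a rules file, displays it, updates it, and deletes it. The file name and mapping name are placeholders, and the rule content is an assumed example of the keystone federation mapping rule format; it is not defined by this chapter.

# Write a minimal rules file. The rule body is an illustrative assumption about
# the keystone federation mapping format, not content taken from this reference.
cat > rules.json <<'EOF'
[
    {
        "local": [
            {
                "user": {"name": "{0}"},
                "group": {"id": "FEDERATED_GROUP_ID"}
            }
        ],
        "remote": [
            {"type": "REMOTE_USER"}
        ]
    }
]
EOF

# Exercise each subcommand described above.
openstack mapping create --rules rules.json example_mapping
openstack mapping show example_mapping
openstack mapping set --rules rules.json example_mapping
openstack mapping list -f yaml
openstack mapping delete example_mapping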
[ "openstack mapping create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --rules <filename> <name>", "openstack mapping delete [-h] <mapping> [<mapping> ...]", "openstack mapping list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]", "openstack mapping set [-h] [--rules <filename>] <name>", "openstack mapping show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <mapping>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/mapping
Chapter 8. Known Issues
Chapter 8. Known Issues This chapter documents known problems in Red Hat Enterprise Linux 7.9. 8.1. Authentication and Interoperability Trusts with Active Directory do not work properly after upgrading ipa-server using the latest container image After upgrading an IdM server with the latest version of the container image, existing trusts with Active Directory domains no longer work. To work around this problem, delete the existing trust and re-establish it after the upgrade. ( BZ#1819745 ) Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector. Particularly a man-in-the-middle (MITM) attack which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) 8.2. Compiler and Tools GCC thread sanitizer included in RHEL no longer works Due to incompatible changes in kernel memory mapping, the thread sanitizer included with the GNU C Compiler (GCC) compiler version in RHEL no longer works. Additionally, the thread sanitizer cannot be adapted to the incompatible memory layout. As a result, it is no longer possible to use the GCC thread sanitizer included with RHEL. As a workaround, use the version of GCC included in Red Hat Developer Toolset to build code which uses the thread sanitizer. (BZ#1569484) 8.3. Installation and Booting Systems installed as Server with GUI with the DISA STIG profile or with the CIS profile do not start properly The DISA STIG profile and the CIS profile require the removal of the xorg-x11-server-common (X Windows) package but does not require the change of the default target. As a consequence, the system is configured to run the GUI but the X Windows package is missing. As a result, the system does not start properly. To work around this problem, do not use the DISA STIG profile and the CIS profile with the Server with GUI software selection or customize the profile by removing the package_xorg-x11-server-common_removed rule. ( BZ#1648162 ) 8.4. Kernel The radeon driver fails to reset hardware correctly when performing kdump When booting the kernel from the currently running kernel, such as when performing the kdump process, the radeon kernel driver currently does not properly reset hardware. Instead, the kdump kernel terminates unexpectedly, which causes the rest of the kdump service to fail. To work around this problem, disable radeon in kdump by adding the following line to the /etc/kdump.conf file: Afterwards, restart the machine and kdump . Note that in this scenario, no graphics will be available during kdump , but kdump will complete successfully. 
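The radeon workaround above refers to a kdump configuration line. A typical way to omit the radeon driver from the kdump initramfs, shown here as an assumption about the intended /etc/kdump.conf entry rather than a quotation of it, is:

# Assumed form of the kdump.conf entry that excludes the radeon module from the
# kdump initramfs; confirm the directive against kdump.conf(5) on your system.
echo 'dracut_args --omit-drivers "radeon"' >> /etc/kdump.conf
# Reboot, then make sure the kdump service picked up the change.
systemctl restart kdump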
(BZ#1168430) Slow connection to RHEL 7 guest console on a Windows Server 2019 host When using RHEL 7 as a guest operating system in multi-user mode on a Windows Server 2019 host, connecting to a console output of the guest currently takes significantly longer than expected. To work around this problem, connect to the guest using SSH or use Windows Server 2016 as the host. (BZ#1706522) Kernel deadlocks can occur when dm_crypt is used with intel_qat The intel_qat kernel module uses the GFP_ATOMIC memory allocations, which can fail under memory stress. Consequently, kernel deadlocks and possible data corruption can occur when the dm_crypt kernel module uses intel_qat for encryption offload. To work around this problem, you can choose either of the following: Update to RHEL 8 Avoid using intel_qat for encryption offload (potential performance impact) Ensure the system does not get under excessive memory pressure (BZ#1813394) The vmcore file generation fails on Amazon c5a machines on RHEL 7 On Amazon c5a machines, the Advanced Programmable Interrupt Controller (APIC) fails to route the interrupts of the Local APIC (LAPIC), when configured in the flat mode inside the kdump kernel. As a consequence, the kdump kernel fails to boot and prevents the kdump kernel from saving the vmcore file for further analysis. To work around the problem: Increase the crash kernel size by setting the crashkernel argument to 256M : Set the nr_cpus=9 option by editing the /etc/sysconfig/kdump file: As a result, the kdump kernel boots with 9 CPUs and the vmcore file is captured upon kernel crash. Note that the kdump service can use a significant amount of crash kernel memory to dump the vmcore file since it enables 9 CPUs in the kdump kernel. Therefore, ensure that the crash kernel has a size reserve of 256MB available for booting the kdump kernel. (BZ#1844522) Enabling some kretprobes can trigger kernel panic Using kretprobes of the following functions can cause CPU hard-lock: _raw_spin_lock _raw_spin_lock_irqsave _raw_spin_unlock_irqrestore queued_spin_lock_slowpath As a consequence, enabling these kprobe events, you can experience a system response failure. This situation triggers a kernel panic. To workaround this problem, avoid configuring kretprobes for mentioned functions and prevent system response failure. (BZ#1838903) The kdump service fails on UEFI Secure Boot enabled systems If a UEFI Secure Boot enabled system boots with a not up-to-date RHEL kernel version, the kdump service fails to start. In the described scenario, kdump reports the following error message: This behavior displays due to either of these: Booting the crash kernel with a not up-to-date kernel version. Configuring the KDUMP_KERNELVER variable in /etc/sysconfig/kdump file to a not up-to-date kernel version. As a consequence, kdump fails to start and hence no dump core is saved during the crash event. To workaround this problem, use either of these: Boot the crash kernel with the latest RHEL 7 fixes. Configure KDUMP_KERNELVER in etc/sysconfig/kdump to use the latest kernel version. As a result, kdump starts successfully in the described scenario. (BZ#1862840) The RHEL installer might not detect iSCSI storage The RHEL installer might not automatically set kernel command-line options related to iSCSI for some offloading iSCSI host bus adapters (HBAs). As a consequence, the RHEL installer might not detect iSCSI storage. 
To work around the problem, add the following options to the kernel command line when booting to the installer: These options enable network configuration and iSCSI target discovery from the pre-OS firmware configuration. The firmware configures the iSCSI storage, and as a result, the installer can discover and use the iSCSI storage. (BZ#1871027) Race condition in the mlx5e_rep_neigh_update work queue sometimes triggers the kernel panic When offloading encapsulation actions over the mlx5 device using the switchdev in-kernel driver model in the Single Root I/O Virtualization (SR-IOV) capability, a race condition can happen in the mlx5e_rep_neigh_update work queue. Consequently, the system terminates unexpectedly with the kernel panic and the following message appears: Currently, a workaround or partial mitigation to this problem is not known. (BZ#1874101) The ice driver does not load for Intel(R) network adapters The ice kernel driver does not load for all Intel(R) Ethernet network adapters E810-XXV except the following: v00008086d00001593sv*sd*bc*sc*i* v00008086d00001592sv*sd*bc*sc*i* v00008086d00001591sv*sd*bc*sc*i* Consequently, the network adapter remains undetected by the operating system. To work around this problem, you can use external drivers for RHEL 7 provided by Intel(R) or Dell. (BZ#1933998) kdump does not support setting nr_cpus to 2 or higher in Hyper-V virtual machines When using RHEL 7.9 as a guest operating system on a Microsoft Hyper-V hypervisor, the kdump kernel in some cases becomes unresponsive when the nr_cpus parameter is set to 2 or higher. To avoid this problem from occurring, do not change the default nr_cpus=1 parameter in the /etc/sysconfig/kdump file of the guest. ( BZ#1773478 ) 8.5. Networking Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7 It is impossible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5 signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the Service section of the file: Then run the systemctl daemon-reload command as root to reload the service file. Important Note that MD5 certificates are highly insecure and Red Hat does not recommend using them. (BZ#1062656) bind-utils DNS lookup utilities support fewer search domains than glibc The dig , host , and nslookup DNS lookup utilities from the bind-utils package support only up to 8 search domains, while the glibc resolver in the system supports any number of search domains. As a consequence, the DNS lookup utilities may get different results than applications when a search in the /etc/resolv.conf file contains more than 8 domains. To work around this problem, use one of the following: Full names ending with a dot, or Fewer than nine domains in the resolv.conf search clause. Note that it is not recommended to use more than three domains. ( BZ#1758317 ) BIND 9.11 changes log severity of query errors when query logging is enabled With the BIND 9.11 update, the log severity for the query-errors changes from debug 1 to info when query logging is enabled. Consequently, additional log entries describing errors now appear in the query log. To work around this problem, add the following statement into the logging section of the /etc/named.conf file: This will move query errors back into the debug log. 
Alternatively, use the following statement to discard all query error messages: As a result, only name queries are logged in a similar way to the BIND 9.9.4 release. (BZ#1853191) named-chroot service fails to start when check-names option is not allowed in forward zone Previously, the usage of the check-names option was allowed in the forward zone definitions. With the rebase to bind 9.11, only the following zone types: master slave stub hint use the check-names statement. Consequently, the check-names option, previously allowed in the forward zone definitions, is no longer accepted and causes a failure on start of the named-chroot service. To work around this problem, remove the check-names option from all the zone types except for master , slave , stub or hint . As a result, the named-chroot service starts again without errors. Note that the ignored statements will not change the provided service. (BZ#1851836) The NFQUEUE target overrides queue-cpu-fanout flag iptables NFQUEUE target using --queue-bypass and --queue-cpu-fanout options accidentally overrides the --queue-cpu-fanout option if ordered after the --queue-bypass option. Consequently, the --queue-cpu-fanout option is ignored. To work around this problem, rearrange the --queue-bypass option before --queue-cpu-fanout option. ( BZ#1851944 ) 8.6. Security Audit executable watches on symlinks do not work File monitoring provided by the -w option cannot directly track a path. It has to resolve the path to a device and an inode to make a comparison with the executed program. A watch monitoring an executable symlink monitors the device and an inode of the symlink itself instead of the program executed in memory, which is found from the resolution of the symlink. Even if the watch resolves the symlink to get the resulting executable program, the rule triggers on any multi-call binary called from a different symlink. This results in flooding logs with false positives. Consequently, Audit executable watches on symlinks do not work. To work around the problem, set up a watch for the resolved path of the program executable, and filter the resulting log messages using the last component listed in the comm= or proctitle= fields. (BZ#1421794) Executing a file while transitioning to another SELinux context requires additional permissions Due to the backport of the fix for CVE-2019-11190 in RHEL 7.8, executing a file while transitioning to another SELinux context requires more permissions than in releases. In most cases, the domain_entry_file() interface grants the newly required permission to the SELinux domain. However, in case the executed file is a script, then the target domain may lack the permission to execute the interpreter's binary. This lack of the newly required permission leads to AVC denials. If SELinux is running in enforcing mode, the kernel might kill the process with the SIGSEGV or SIGKILL signal in such a case. If the problem occurs on the file from the domain which is a part of the selinux-policy package, file a bug against this component. 
In case it is part of a custom policy module, Red Hat recommends granting the missing permissions using standard SELinux interfaces: corecmd_exec_shell() for shell scripts corecmd_exec_all_executables() for interpreters labeled as bin_t such as Perl or Python For more details, see the /usr/share/selinux/devel/include/kernel/corecommands.if file provided by the selinux-policy-doc package and the An exception that breaks the stability of the RHEL SELinux policy API article on the Customer Portal. (BZ#1832194) Scanning large numbers of files with OpenSCAP causes systems to run out of memory The OpenSCAP scanner stores all collected results in the memory until the scan finishes. As a consequence, the system might run out of memory on systems with low RAM when scanning large numbers of files, for example, from the large package groups Server with GUI and Workstation . To work around this problem, use smaller package groups, for example, Server and Minimal Install on systems with limited RAM. If your scenario requires large package groups, you can test whether your system has sufficient memory in a virtual or staging environment. Alternatively, you can tailor the scanning profile to deselect rules that involve recursion over the entire / filesystem: rpm_verify_hashes rpm_verify_permissions rpm_verify_ownership file_permissions_unauthorized_world_writable no_files_unowned_by_user dir_perms_world_writable_system_owned file_permissions_unauthorized_suid file_permissions_unauthorized_sgid file_permissions_ungroupowned dir_perms_world_writable_sticky_bits This prevents the OpenSCAP scanner from causing the system to run out of memory. ( BZ#1829782 ) RSA signatures with SHA-1 cannot be completely disabled in RHEL7 Because the ssh-rsa signature algorithm must be allowed in OpenSSH to use the new SHA2 ( rsa-sha2-512 , rsa-sha2-256 ) signatures, you cannot completely disable SHA1 algorithms in RHEL7. To work around this limitation, you can update to RHEL8 or use ECDSA/Ed25519 keys, which use only SHA2. ( BZ#1828598 ) rpm_verify_permissions fails in the CIS profile The rpm_verify_permissions rule compares file permissions to package default permissions. However, the Center for Internet Security (CIS) profile, which is provided by the scap-security-guide packages, changes some file permissions to be more strict than default. As a consequence, verification of certain files using rpm_verify_permissions fails. To work around this problem, manually verify that these files have the following permissions: /etc/cron.d (0700) /etc/cron.hourly (0700) /etc/cron.monthly (0700) /etc/crontab (0600) /etc/cron.weekly (0700) /etc/cron.daily (0700) For more information about the related feature, see SCAP Security Guide now provides a profile aligned with the CIS RHEL 7 Benchmark v2.2.0 . ( BZ#1838622 ) OpenSCAP file ownership-related rules do not work with remote user and group back ends The OVAL language used by the OpenSCAP suite to perform configuration checks has a limited set of capabilities. It lacks possibilities to obtain a complete list of system users, groups, and their IDs if some of them are remote. For example, if they are stored in an external database such as LDAP. As a consequence, rules that work with user IDs or group IDs do not have access to IDs of remote users. Therefore, such IDs are identified as foreign to the system. This might result in scans to fail on compliant systems. 
In the scap-security-guide packages, the following rules are affected: xccdf_org.ssgproject.content_rule_file_permissions_ungroupowned xccdf_org.ssgproject.content_rule_no_files_unowned_by_user To work around this problem, if a rule that deals with user or group IDs fails on a system that defines remote users, check the failed parts manually. The OpenSCAP scanner enables you to specify the --oval-results option together with the --report option. This option displays offending files and UIDs in the HTML report and makes the manual revision process straightforward. Additionally, in RHEL 8.3, the rules in the scap-security-guide packages contain a warning that only local-user back ends have been evaluated. ( BZ#1721439 ) rpm_verify_permissions and rpm_verify_ownership fail in the Essential Eight profile The rpm_verify_permissions rule compares file permissions to package default permissions and the rpm_verify_ownership rule compares file owner to package default owner. However, the Australian Cyber Security Centre (ACSC) Essential Eight profile, which is provided by the scap-security-guide packages, changes some file permissions and ownerships to be more strict than default. As a consequence, verification of certain files using rpm_verify_permissions and rpm_verify_ownership fails. To work around this problem, manually verify that the /usr/libexec/abrt-action-install-debuginfo-to-abrt-cache file is owned by root and that it has suid and sgid bits set. ( BZ#1778661 ) 8.7. Servers and Services The compat-unixODBC234 package for SAP requires a symlink to load the unixODBC library The unixODBC package version 2.3.1 is available in RHEL 7. In addition, the compat-unixODBC234 package version 2.3.4 is available in the RHEL 7 for SAP Solutions sap-hana repository; see New package: compat-unixODBC234 for SAP for details. Due to minor ABI differences between unixODBC version 2.3.1 and 2.3.4, an application built with version 2.3.1 might not work with version 2.3.4 in certain rare cases. To prevent problems caused by this incompatibility, the compat-unixODBC234 package uses a different SONAME for shared libraries available in this package, and the library file is available under /usr/lib64/libodbc.so.1002.0.0 instead of /usr/lib64/libodbc.so.2.0.0 . As a consequence, third party applications built with unixODBC version 2.3.4 that load the unixODBC library in runtime using the dlopen() function fail to load the library with the following error message: To work around this problem, create the following symbolic link: and similar symlinks for other libraries from the compat-unixODBC234 package if necessary. Note that the compat-unixODBC234 package conflicts with the base RHEL 7 unixODBC package. Therefore, uninstall unixODBC prior to installing compat-unixODBC234 . (BZ#1844443) Symbol conflicts between OpenLDAP libraries might cause crashes in httpd When both the libldap and libldap_r libraries provided by OpenLDAP are loaded and used within a single process, symbol conflicts between these libraries might occur. Consequently, Apache httpd child processes using the PHP ldap extension might terminate unexpectedly if the mod_security or mod_auth_openidc modules are also loaded by the httpd configuration. With this update to the Apache Portable Runtime (APR) library, you can work around the problem by setting the APR_DEEPBIND environment variable, which enables the use of the RTLD_DEEPBIND dynamic linker option when loading httpd modules. 
When the APR_DEEPBIND environment variable is enabled, crashes no longer occur in httpd configurations that load conflicting libraries. (BZ#1739287) 8.8. Storage RHEL 7 does not support VMD 2.0 storage The 10th generation Intel Core and 3rd generation Intel Xeon Scalable platforms (also known as Intel Ice Lake) include hardware that utilizes version 2.0 of the Volume Management Device (VMD) technology. RHEL 7 no longer receives updates to support new hardware. As a consequence, RHEL 7 cannot recognize Non-Volatile Memory Express (NVMe) devices that are managed by VMD 2.0. To work around the problem, Red Hat recommends that you upgrade to a recent major RHEL release. (BZ#1942865) SCSI devices cannot be deleted after removing the iSCSI target If a SCSI device is BLOCKED due to a transport issue, including an iSCSI session being disrupted due to a network or target side configuration change, the attached devices cannot be deleted while blocked on transport error recovery. If you attempt to remove the SCSI device using the delete sysfs command ( /sys/block/sd*/device/delete ) it can be blocked indefinitely. To work around this issue, terminate the transport session with the iscsiadm logout commands in either session mode (specifying a session ID) or in node mode (specifying a matching target name and portal for the blocked session). Issuing an iSCSI session logout on a recovering session terminates the session and removes the SCSI devices. (BZ#1439055) 8.9. System and Subscription Management The needs-restarting command from yum-utils might fail to display the container boot time In certain RHEL 7 container environments, the needs-restarting command from the yum-utils package might incorrectly display the host boot time instead of the container boot time. As a consequence, this command might still report a false reboot warning message after you restart the container environment. You can safely ignore this harmless warning message in such a case. ( BZ#2042313 ) 8.10. Virtualization RHEL 7.9 virtual machines on IBM POWER sometimes do not detect hot-plugged devices RHEL7.9 virtual machines (VMs) started on an IBM POWER system on a RHEL 8.3 or later hypervisor do not detect hot-plugged PCI devices if the hot plug is performed when the VM is not fully booted yet. To work around the problem, reboot the VM. (BZ#1854917) 8.11. RHEL in cloud environments Core dumping RHEL 7 virtual machines that use NICs with enabled accelerated networking to a remote machine on Azure fails Currently, using the kdump utility to save the core dump file of a RHEL 7 virtual machine (VM) on a Microsoft Azure hypervisor to a remote machine does not work correctly when the VM is using a NIC with enabled accelerated networking. As a consequence, the kdump operation fails. To prevent this problem from occurring, add the following line to the /etc/kdump.conf file and restart the kdump service. (BZ#1846667) SSH with password login now impossible by default on RHEL 8 virtual machines configured using cloud-init For security reasons, the ssh_pwauth option in the configuration of the cloud-init utility is now set to 0 by default. As a consequence, it is not possible to use a password login when connecting via SSH to RHEL 8 virtual machines (VMs) configured using cloud-init . If you require using a password login for SSH connections to your RHEL 8 VMs configured using cloud-init , set ssh_pwauth: 1 in the /etc/cloud/cloud.cfg file before deploying the VM. (BZ#1685580)
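The Azure kdump and cloud-init items above both come down to a one-line configuration change followed by a service action, so a short shell sketch may help. This is a minimal, hedged example only: the extra_modules pci_hyperv line and the ssh_pwauth: 1 key come directly from the issue descriptions, while the grep/sed guards and the assumption of root access are illustrative choices rather than commands mandated by Red Hat.

# Assumption: run as root on the affected guest image.

# Core dumps on Azure with accelerated networking (BZ#1846667):
# make sure the Hyper-V PCI module is included by kdump, then restart kdump.
grep -q '^extra_modules pci_hyperv' /etc/kdump.conf || echo 'extra_modules pci_hyperv' >> /etc/kdump.conf
systemctl restart kdump

# SSH password login for cloud-init images (BZ#1685580): set ssh_pwauth
# to 1 in /etc/cloud/cloud.cfg before deploying the VM, and only if
# password-based SSH access is genuinely required.
if grep -q '^ssh_pwauth' /etc/cloud/cloud.cfg; then
    sed -i 's/^ssh_pwauth.*/ssh_pwauth: 1/' /etc/cloud/cloud.cfg
else
    echo 'ssh_pwauth: 1' >> /etc/cloud/cloud.cfg
fi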
[ "dracut_args --omit-drivers \"radeon\"", "grubby-args=\"crashkernel=256M\" --update-kernel /boot/vmlinuz-`uname -r`", "KDUMP_COMMANDLINE_APPEND=\"irqpoll\" *nr_cpus=9* reset_devices cgroup_disable=memory mce=off numa=off udev.children- max=2 panic=10 acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable", "kexec_file_load failed: Required key not available", "rd.iscsi.ibft=1 rd.iscsi.firmware=1", "Workqueue: mlx5e mlx5e_rep_neigh_update [mlx5_core]", "Environment=OPENSSL_ENABLE_MD5_VERIFY=1", "category query-errors { default_debug; };", "category querry-errors { null; };", "/usr/lib64/libodbc.so.2.0.0: cannot open shared object file: No such file or directory", "ln -s /usr/lib64/libodbc.so.1002.0.0 /usr/lib64/libodbc.so.2.0.0", "extra_modules pci_hyperv" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.9_release_notes/known_issues
Chapter 46. Getting Started with the Framework
Chapter 46. Getting Started with the Framework Abstract This chapter explains the basic principles of implementing a Camel component using the API component framework, based on code generated using the camel-archetype-api-component Maven archetype. 46.1. Generate Code with the Maven Archetype Maven archetypes A Maven archetype is analogous to a code wizard: given a few simple parameters, it generates a complete, working Maven project, populated with sample code. You can then use this project as a template, customizing the implementation to create your own application. The API component Maven archetype The API component framework provides a Maven archetype, camel-archetype-api-component , that can generate starting point code for your own API component implementation. This is the recommended approach to start creating your own API component. Prerequisites The only prerequisites for running the camel-archetype-api-component archetype are that Apache Maven is installed and the Maven settings.xml file is configured to use the standard Fuse repositories. Invoke the Maven archetype To create an Example component, which uses the example URI scheme, invoke the camel-archetype-api-component archetype to generate a new Maven project, as follows: Note The backslash character, \ , at the end of each line represents line continuation, which works only on Linux and UNIX platforms. On Windows platforms, remove the backslash and put the arguments all on a single line. Options Options are provided to the archetype generation command using the syntax, -D Name = Value . Most of the options should be set as shown in the preceding mvn archetype:generate command, but a few of the options can be modified, to customize the generated project. The following table shows the options that you can use to customize the generated API component project: Name Description groupId (Generic Maven option) Specifies the group ID of the generated Maven project. By default, this value also defines the Java package name for the generated classes. Hence, it is a good idea to choose this value to match the Java package name that you want. artifactId (Generic Maven option) Specifies the artifact ID of the generated Maven project. name The name of the API component. This value is used for generating class names in the generated code (hence, it is recommended that the name should start with a capital letter). scheme The default scheme to use in URIs for this component. You should make sure that this scheme does not conflict with the scheme of any existing Camel components. archetypeVersion (Generic Maven option) Ideally, this should be the Apache Camel version used by the container where you plan to deploy the component. If necessary, however, you can also modify the versions of Maven dependencies after you have generated the project. Structure of the generated project Assuming that the code generation step completes successfully, you should see a new directory, camel-api-example , which contains the new Maven project. If you look inside the camel-api-example directory, you will see that it has the following general structure: At the top level of the project is an aggregate POM, pom.xml , which is configured to build two sub-projects, as follows: camel-api-example-api The API sub-project (named as ArtifactId -api ) holds the Java API which you are about to turn into a component. If you are basing the API component on a Java API that you wrote yourself, you can put the Java API code directly into this project. 
The API sub-project can be used for one or more of the following purposes: To package up the Java API code (if it is not already available as a Maven package). To generate Javadoc for the Java API (providing the needed metadata for the API component framework). To generate the Java API code from an API description (for example, from a WADL description of a REST API). In some cases, however, you might not need to perform any of these tasks. For example, if the API component is based on a third-party API, which already provides the Java API and Javadoc in a Maven package. In such cases, you can delete the API sub-project. camel-api-example-component The component sub-project (named as ArtifactId -component ) holds the implementation of the new API component. This includes the component implementation classes and the configuration of the camel-api-component-maven plug-in (which generates the API mapping classes from the Java API). 46.2. Generated API Sub-Project Overview Assuming that you generated a new Maven project as described in Section 46.1, "Generate Code with the Maven Archetype" , you can now find a Maven sub-project for packaging the Java API under the camel-api-example/camel-api-example-api project directory. In this section, we take a closer look at the generated example code and describe how it works. Sample Java API The generated example code includes a sample Java API, on which the example API component is based. The sample Java API is relatively simple, consisting of just two Hello World classes: ExampleJavadocHello and ExampleFileHello . ExampleJavadocHello class Example 46.1, "ExampleJavadocHello class" shows the ExampleJavadocHello class from the sample Java API. As the name of the class suggests, this particular class is used to show how you can supply mapping metadata from Javadoc. Example 46.1. ExampleJavadocHello class ExampleFileHello class Example 46.2, "ExampleFileHello class" shows the ExampleFileHello class from the sample Java API. As the name of the class suggests, this particular class is used to show how you can supply mapping metadata from a signature file. Example 46.2. ExampleFileHello class Generating the Javadoc metadata for ExampleJavadocHello Because the metadata for ExampleJavadocHello is provided as Javadoc, it is necessary to generate Javadoc for the sample Java API and install it into the camel-api-example-api Maven artifact. The API POM file, camel-api-example-api/pom.xml , configures the maven-javadoc-plugin to perform this step automatically during the Maven build. 46.3. Generated Component Sub-Project Overview The Maven sub-project for building the new component is located under the camel-api-example/camel-api-example-component project directory. In this section, we take a closer look at the generated example code and describe how it works. Providing the Java API in the component POM The Java API must be provided as a dependency in the component POM. For example, the sample Java API is defined as a dependency in the component POM file, camel-api-example-component/pom.xml , as follows: Providing the Javadoc metadata in the component POM If you are using Javadoc metadata for all or part of the Java API, you must provide the Javadoc as a dependency in the component POM. 
There are two things to note about this dependency: The Maven coordinates for the Javadoc are almost the same as for the Java API, except that you must also specify a classifier element, as follows: You must declare the Javadoc to have provided scope, as follows: For example, in the component POM, the Javadoc dependency is defined as follows: Defining the file metadata for Example File Hello The metadata for ExampleFileHello is provided in a signature file. In general, this file must be created manually, but it has quite a simple format, which consists of a list of method signatures (one on each line). The example code provides the signature file, file-sig-api.txt , in the directory, camel-api-example-component/signatures , which has the following contents: For more details about the signature file format, see the section called "Signature file metadata" . Configuring the API mapping One of the key features of the API component framework is that it automatically generates the code to perform API mapping . That is, generating stub code that maps endpoint URIs to method invocations on the Java API. The basic inputs to the API mapping are: the Java API, the Javadoc metadata, and/or the signature file metadata. The component that performs the API mapping is the camel-api-component-maven-plugin Maven plug-in, which is configured in the component POM. The following extract from the component POM shows how the camel-api-component-maven-plugin plug-in is configured: The plug-in is configured by the configuration element, which contains a single apis child element to configure the classes of the Java API. Each API class is configured by an api element, as follows: apiName The API name is a short name for the API class and is used as the endpoint-prefix part of an endpoint URI. Note If the API consists of just a single Java class, you can leave the apiName element empty, so that the endpoint-prefix becomes redundant, and you can then specify the endpoint URI using the format shown in the section called "URI format for a single API class" . proxyClass The proxy class element specifies the fully-qualified name of the API class. fromJavadoc If the API class is accompanied by Javadoc metadata, you must indicate this by including the fromJavadoc element and the Javadoc itself must also be specified in the Maven file, as a provided dependency (see the section called "Providing the Javadoc metadata in the component POM" ). fromSignatureFile If the API class is accompanied by signature file metadata, you must indicate this by including the fromSignatureFile element, where the content of this element specifies the location of the signature file. Note The signature files do not get included in the final package built by Maven, because these files are needed only at build time, not at run time. Generated component implementation The API component consists of the following core classes (which must be implemented for every Camel component), under the camel-api-example-component/src/main/java directory: ExampleComponent Represents the component itself. This class acts as a factory for endpoint instances (for example, instances of ExampleEndpoint ). ExampleEndpoint Represents an endpoint URI. This class acts as a factory for consumer endpoints (for example, ExampleConsumer ) and as a factory for producer endpoints (for example, ExampleProducer ). ExampleConsumer Represents a concrete instance of a consumer endpoint, which is capable of consuming messages from the location specified in the endpoint URI. 
ExampleProducer Represents a concrete instance of a producer endpoint, which is capable of sending messages to the location specified in the endpoint URI. ExampleConfiguration Can be used to define endpoint URI options. The URI options defined by this configuration class are not tied to any specific API class. That is, you can combine these URI options with any of the API classes or methods. This can be useful, for example, if you need to declare username and password credentials in order to connect to the remote service. The primary purpose of the ExampleConfiguration class is to provide values for parameters required to instantiate API classes, or classes that implement API interfaces. For example, these could be constructor parameters, or parameter values for a factory method or class. To implement a URI option, option , in this class, all that you need to do is implement the pair of accessor methods, get Option and set Option . The component framework automatically parses the endpoint URI and injects the option values at run time. ExampleComponent class The generated ExampleComponent class is defined as follows: The important method in this class is createEndpoint , which creates new endpoint instances. Typically, you do not need to change any of the default code in the component class. If there are any other objects with the same life cycle as this component, however, you might want to make those objects available from the component class (for example, by adding a methods to create those objects or by injecting those objects into the component). ExampleEndpoint class The generated ExampleEndpoint class is defined as follows: In the context of the API component framework, one of the key steps performed by the endpoint class is to create an API proxy . The API proxy is an instance from the target Java API, whose methods are invoked by the endpoint. Because a Java API typically consists of many classes, it is necessary to pick the appropriate API class, based on the endpoint-prefix appearing in the URI (recall that a URI has the general form, scheme :// endpoint-prefix / endpoint ). ExampleConsumer class The generated ExampleConsumer class is defined as follows: ExampleProducer class The generated ExampleProducer class is defined as follows: ExampleConfiguration class The generated ExampleConfiguration class is defined as follows: To add a URI option, option , to this class, define a field of the appropriate type, and implement a corresponding pair of accessor methods, get Option and set Option . The component framework automatically parses the endpoint URI and injects the option values at run time. Note This class is used to define general URI options, which can be combined with any API method. To define URI options tied to a specific API method, configure extra options in the API component Maven plug-in. See Section 47.7, "Extra Options" for details. URI format Recall the general format of an API component URI: In general, a URI maps to a specific method invocation on the Java API. For example, suppose you want to invoke the API method, ExampleJavadocHello.greetMe("Jane Doe") , the URI would be constructed, as follows: scheme The API component scheme, as specified when you generated the code with the Maven archetype. In this case, the scheme is example . endpoint-prefix The API name, which maps to the API class defined by the camel-api-component-maven-plugin Maven plug-in configuration. 
For the ExampleJavadocHello class, the relevant configuration is: Which shows that the required endpoint-prefix is hello-javadoc . endpoint The endpoint maps to the method name, which is greetMe . Option1=Value1 The URI options specify method parameters. The greetMe(String name) method takes the single parameter, name , which can be specified as name=Jane%20Doe . If you want to define default values for options, you can do this by overriding the interceptProperties method (see Section 46.4, "Programming Model" ). Putting together the pieces of the URI, we see that we can invoke ExampleJavadocHello.greetMe("Jane Doe") with the following URI: Default component instance In order to map the example URI scheme to the default component instance, the Maven archetype creates the following file under the camel-api-example-component sub-project: This resource file is what enables the Camel core to identify the component associated with the example URI scheme. Whenever you use an example:// URI in a route, Camel searches the classpath to look for the corresponding example resource file. The example file has the following contents: This enables the Camel core to create a default instance of the ExampleComponent component. The only time you would need to edit this file is if you refactor the name of the component class. 46.4. Programming Model Overview In the context of the API component framework, the main component implementation classes are derived from base classes in the org.apache.camel.util.component package. These base classes define some methods which you can (optionally) override when you are implementing your component. In this section, we provide a brief description of those methods and how you might use them in your own component implementation. Component methods to implement In addition to the generated method implementations (which you usually do not need to modify), you can optionally override some of the following methods in the Component class: doStart() (Optional) A callback to create resources for the component during a cold start. An alternative approach is to adopt the strategy of lazy initialization (creating resources only when they are needed). In fact, lazy initialization is often the best strategy, so the doStart method is often not needed. doStop() (Optional) A callback to invoke code while the component is stopping. Stopping a component means that all of its resources are shut down, internal state is deleted, caches are cleared, and so on. Note Camel guarantees that doStop is always called when the current CamelContext shuts down, even if the corresponding doStart was never called. doShutdown (Optional) A callback to invoke code while the CamelContext is shutting down. Whereas a stopped component can be restarted (with the semantics of a cold start), a component that gets shut down is completely finished. Hence, this callback represents the last chance to free up any resources belonging to the component. What else to implement in the Component class? The Component class is the natural place to hold references to objects that have the same (or similar) life cycle to the component object itself. For example, if a component uses OAuth security, it would be natural to hold references to the required OAuth objects in the Component class and to define methods in the Component class for creating the OAuth objects. 
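To make the doStart and doStop discussion above concrete, the following abridged sketch shows the kind of life-cycle code you might add to the generated ExampleComponent class. It is an assumption-laden illustration, not generated code: the shared ExecutorService stands in for any object (an OAuth client, a connection pool, and so on) whose life cycle matches the component's, and the generated CamelContext constructor, getApiName, and createEndpoint methods are omitted for brevity, so the snippet will not compile on its own without them.

// Java
package org.jboss.fuse.example;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.camel.util.component.AbstractApiComponent;
import org.jboss.fuse.example.internal.ExampleApiCollection;
import org.jboss.fuse.example.internal.ExampleApiName;

public class ExampleComponent extends AbstractApiComponent<ExampleApiName, ExampleConfiguration, ExampleApiCollection> {

    // Hypothetical resource with the same life cycle as the component.
    private ExecutorService sharedExecutor;

    public ExampleComponent() {
        super(ExampleEndpoint.class, ExampleApiName.class, ExampleApiCollection.getCollection());
    }

    // ... the generated CamelContext constructor, getApiName, and
    // createEndpoint methods are unchanged and omitted here ...

    @Override
    protected void doStart() throws Exception {
        super.doStart();
        // Eager creation is shown only to illustrate the callback; lazy
        // initialization is often the better strategy.
        if (sharedExecutor == null) {
            sharedExecutor = Executors.newFixedThreadPool(2);
        }
    }

    @Override
    protected void doStop() throws Exception {
        // A stopped component may be cold-started again, so release the
        // resource and let doStart recreate it.
        if (sharedExecutor != null) {
            sharedExecutor.shutdown();
            sharedExecutor = null;
        }
        super.doStop();
    }
}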
Endpoint methods to implement You can modify some of the generated methods and, optionally, override some inherited methods in the Endpoint class, as follows: afterConfigureProperties() The main thing you need to do in this method is to create the appropriate type of proxy class (API class), to match the API name. The API name (which has already been extracted from the endpoint URI) is available either through the inherited apiName field or through the getApiName accessor. Typically, you would do a switch on the apiName field to create the corresponding proxy class. For example: getApiProxy(ApiMethod method, Map<String, Object> args) Override this method to return the proxy instance that you created in afterConfigureProperties . For example: In special cases, you might want to make the choice of proxy dependent on the API method and arguments. The getApiProxy gives you the flexibility to take this approach, if required. doStart() (Optional) A callback to create resources during a cold start. Has the same semantics as Component.doStart() . doStop() (Optional) A callback to invoke code while the component is stopping. Has the same semantics as Component.doStop() . doShutdown (Optional) A callback to invoke code while the component is shutting down. Has the same semantics as Component.doShutdown() . interceptPropertyNames(Set<String> propertyNames) (Optional) The API component framework uses the endpoint URI and supplied option values to determine which method to invoke (ambiguity could be due to overloading and aliases). If the component internally adds options or method parameters, however, the framework might need help in order to determine the right method to invoke. In this case, you must override the interceptPropertyNames method and add the extra (hidden or implicit) options to the propertyNames set. When the complete list of method parameters are provided in the propertyNames set, the framework will be able to identify the right method to invoke. Note You can override this method at the level of the Endpoint , Producer or Consumer class. The basic rule is, if an option affects both producer endpoints and consumer endpoints, override the method in the Endpoint class. interceptProperties(Map<String,Object> properties) (Optional) By overriding this method, you can modify or set the actual values of the options, before the API method is invoked. For example, you could use this method to set default values for some options, if necessary. In practice, it is often necessary to override both the interceptPropertyNames method and the interceptProperty method. Note You can override this method at the level of the Endpoint , Producer or Consumer class. The basic rule is, if an option affects both producer endpoints and consumer endpoints, override the method in the Endpoint class. Consumer methods to implement You can optionally override some inherited methods in the Consumer class, as follows: interceptPropertyNames(Set<String> propertyNames) (Optional) The semantics of this method are similar to Endpoint.interceptPropertyNames interceptProperties(Map<String,Object> properties) (Optional) The semantics of this method are similar to Endpoint.interceptProperties doInvokeMethod(Map<String, Object> args) (Optional) Overriding this method enables you to intercept the invocation of the Java API method. The most common reason for overriding this method is to customize the error handling around the method invocation. 
For example, a typical approach to overriding doInvokeMethod is shown in the following code fragment: You should invoke doInvokeMethod on the super-class, at some point in this implementation, to ensure that the Java API method gets invoked. interceptResult(Object methodResult, Exchange resultExchange) (Optional) Do some additional processing on the result of the API method invocation. For example, you could add custom headers to the Camel exchange object, resultExchange , at this point. Object splitResult(Object result) (Optional) By default, if the result of the method API invocation is a java.util.Collection object or a Java array, the API component framework splits the result into multiple exchange objects (so that a single invocation result is converted into multiple messages). If you want to change the default behaviour, you can override the splitResult method in the consumer endpoint. The result argument contains the result of the API message invocation. If you want to split the result, you should return an array type. Note You can also switch off the default splitting behaviour by setting consumer.splitResult=false on the endpoint URI. Producer methods to implement You can optionally override some inherited methods in the Producer class, as follows: interceptPropertyNames(Set<String> propertyNames) (Optional) The semantics of this method are similar to Endpoint.interceptPropertyNames interceptProperties(Map<String,Object> properties) (Optional) The semantics of this method are similar to Endpoint.interceptProperties doInvokeMethod(Map<String, Object> args) (Optional) The semantics of this method are similar to Consumer.doInvokeMethod . interceptResult(Object methodResult, Exchange resultExchange) (Optional) The semantics of this method are similar to Consumer.interceptResult . Note The Producer.splitResult() method is never called, so it is not possible to split an API method result in the same way as you can for a consumer endpoint. To get a similar effect for a producer endpoint, you can use Camel's split() DSL command (one of the standard enterprise integration patterns) to split Collection or array results. Consumer polling and threading model The default threading model for consumer endpoints in the API component framework is scheduled poll consumer . This implies that the API method in a consumer endpoint is invoked at regular, scheduled time intervals. For more details, see the section called "Scheduled poll consumer implementation" . 46.5. Sample Component Implementations Overview Several of the components distributed with Apache Camel have been implemented with the aid of the API component framework. If you want to learn more about the techniques for implementing Camel components using the framework, it is a good idea to study the source code of these component implementations. Box.com The Camel Box component shows how to model and invoke the third party Box.com Java SDK using the API component framework. It also demonstrates how the framework can be adapted to customize consumer polling, in order to support Box.com's long polling API. GoogleDrive The Camel GoogleDrive component demonstrates how the API component framework can handle even Method Object style Google APIs. In this case, URI options are mapped to a method object, which is then invoked by overriding the doInvoke method in the consumer and the producer. Olingo2 The Camel Olingo2 component demonstrates how a callback-based Asynchronous API can be wrapped using the API component framework. 
This example shows how asynchronous processing can be pushed into underlying resources, such as HTTP NIO connections, to make Camel endpoints more resource-efficient.
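Finally, to tie the URI format from Section 46.3 back to a running route, here is a minimal sketch of a Java DSL route that invokes ExampleJavadocHello.greetMe("Jane Doe") through the generated component. The timer trigger, the route class name, and the log message are assumptions made purely for illustration; only the example://hello-javadoc/greetMe URI itself comes from the earlier sections.

// Java
import org.apache.camel.builder.RouteBuilder;

public class ExampleHelloRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Fire once at startup; any consumer endpoint could trigger the route.
        from("timer://greet?repeatCount=1")
            // Producer endpoint that maps to ExampleJavadocHello.greetMe("Jane Doe").
            .to("example://hello-javadoc/greetMe?name=Jane%20Doe")
            // The method's return value becomes the message body.
            .log("Greeting returned by the API component: ${body}");
    }
}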
[ "mvn archetype:generate -DarchetypeGroupId=org.apache.camel.archetypes -DarchetypeArtifactId=camel-archetype-api-component -DarchetypeVersion=2.23.2.fuse-7_13_0-00013-redhat-00001 -DgroupId=org.jboss.fuse.example -DartifactId=camel-api-example -Dname=Example -Dscheme=example -Dversion=1.0-SNAPSHOT -DinteractiveMode=false", "camel-api-example/ pom.xml camel-api-example-api/ camel-api-example-component/", "// Java package org.jboss.fuse.example.api; /** * Sample API used by Example Component whose method signatures are read from Javadoc. */ public class ExampleJavadocHello { public String sayHi() { return \"Hello!\"; } public String greetMe(String name) { return \"Hello \" + name; } public String greetUs(String name1, String name2) { return \"Hello \" + name1 + \", \" + name2; } }", "// Java package org.jboss.fuse.example.api; /** * Sample API used by Example Component whose method signatures are read from File. */ public class ExampleFileHello { public String sayHi() { return \"Hello!\"; } public String greetMe(String name) { return \"Hello \" + name; } public String greetUs(String name1, String name2) { return \"Hello \" + name1 + \", \" + name2; } }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\"> <dependencies> <dependency> <groupId>org.jboss.fuse.example</groupId> <artifactId>camel-api-example-api</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> </project>", "<classifier>javadoc</classifier>", "<scope>provided</scope>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\"> <dependencies> <!-- Component API javadoc in provided scope to read API signatures --> <dependency> <groupId>org.jboss.fuse.example</groupId> <artifactId>camel-api-example-api</artifactId> <version>1.0-SNAPSHOT</version> <classifier>javadoc</classifier> <scope>provided</scope> </dependency> </dependencies> </project>", "public String sayHi(); public String greetMe(String name); public String greetUs(String name1, String name2);", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\"> <build> <defaultGoal>install</defaultGoal> <plugins> <!-- generate Component source and test source --> <plugin> <groupId>org.apache.camel</groupId> <artifactId>camel-api-component-maven-plugin</artifactId> <executions> <execution> <id>generate-test-component-classes</id> <goals> <goal>fromApis</goal> </goals> <configuration> <apis> <api> <apiName>hello-file</apiName> <proxyClass>org.jboss.fuse.example.api.ExampleFileHello</proxyClass> <fromSignatureFile>signatures/file-sig-api.txt</fromSignatureFile> </api> <api> <apiName>hello-javadoc</apiName> <proxyClass>org.jboss.fuse.example.api.ExampleJavadocHello</proxyClass> <fromJavadoc/> </api> </apis> </configuration> </execution> </executions> </plugin> </plugins> </build> </project>", "// Java package org.jboss.fuse.example; import org.apache.camel.CamelContext; import org.apache.camel.Endpoint; import org.apache.camel.spi.UriEndpoint; import 
org.apache.camel.util.component.AbstractApiComponent; import org.jboss.fuse.example.internal.ExampleApiCollection; import org.jboss.fuse.example.internal.ExampleApiName; /** * Represents the component that manages {@link ExampleEndpoint}. */ @UriEndpoint(scheme = \"example\", consumerClass = ExampleConsumer.class, consumerPrefix = \"consumer\") public class ExampleComponent extends AbstractApiComponent<ExampleApiName, ExampleConfiguration, ExampleApiCollection> { public ExampleComponent() { super(ExampleEndpoint.class, ExampleApiName.class, ExampleApiCollection.getCollection()); } public ExampleComponent(CamelContext context) { super(context, ExampleEndpoint.class, ExampleApiName.class, ExampleApiCollection.getCollection()); } @Override protected ExampleApiName getApiName(String apiNameStr) throws IllegalArgumentException { return ExampleApiName.fromValue(apiNameStr); } @Override protected Endpoint createEndpoint(String uri, String methodName, ExampleApiName apiName, ExampleConfiguration endpointConfiguration) { return new ExampleEndpoint(uri, this, apiName, methodName, endpointConfiguration); } }", "// Java package org.jboss.fuse.example; import java.util.Map; import org.apache.camel.Consumer; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.spi.UriEndpoint; import org.apache.camel.util.component.AbstractApiEndpoint; import org.apache.camel.util.component.ApiMethod; import org.apache.camel.util.component.ApiMethodPropertiesHelper; import org.jboss.fuse.example.api.ExampleFileHello; import org.jboss.fuse.example.api.ExampleJavadocHello; import org.jboss.fuse.example.internal.ExampleApiCollection; import org.jboss.fuse.example.internal.ExampleApiName; import org.jboss.fuse.example.internal.ExampleConstants; import org.jboss.fuse.example.internal.ExamplePropertiesHelper; /** * Represents a Example endpoint. */ @UriEndpoint(scheme = \"example\", consumerClass = ExampleConsumer.class, consumerPrefix = \"consumer\") public class ExampleEndpoint extends AbstractApiEndpoint<ExampleApiName, ExampleConfiguration> { // TODO create and manage API proxy private Object apiProxy; public ExampleEndpoint(String uri, ExampleComponent component, ExampleApiName apiName, String methodName, ExampleConfiguration endpointConfiguration) { super(uri, component, apiName, methodName, ExampleApiCollection.getCollection().getHelper(apiName), endpointConfiguration); } public Producer createProducer() throws Exception { return new ExampleProducer(this); } public Consumer createConsumer(Processor processor) throws Exception { // make sure inBody is not set for consumers if (inBody != null) { throw new IllegalArgumentException(\"Option inBody is not supported for consumer endpoint\"); } final ExampleConsumer consumer = new ExampleConsumer(this, processor); // also set consumer.* properties configureConsumer(consumer); return consumer; } @Override protected ApiMethodPropertiesHelper<ExampleConfiguration> getPropertiesHelper() { return ExamplePropertiesHelper.getHelper(); } protected String getThreadProfileName() { return ExampleConstants.THREAD_PROFILE_NAME; } @Override protected void afterConfigureProperties() { // TODO create API proxy, set connection properties, etc. 
switch (apiName) { case HELLO_FILE: apiProxy = new ExampleFileHello(); break; case HELLO_JAVADOC: apiProxy = new ExampleJavadocHello(); break; default: throw new IllegalArgumentException(\"Invalid API name \" + apiName); } } @Override public Object getApiProxy(ApiMethod method, Map<String, Object> args) { return apiProxy; } }", "// Java package org.jboss.fuse.example; import org.apache.camel.Processor; import org.apache.camel.util.component.AbstractApiConsumer; import org.jboss.fuse.example.internal.ExampleApiName; /** * The Example consumer. */ public class ExampleConsumer extends AbstractApiConsumer<ExampleApiName, ExampleConfiguration> { public ExampleConsumer(ExampleEndpoint endpoint, Processor processor) { super(endpoint, processor); } }", "// Java package org.jboss.fuse.example; import org.apache.camel.util.component.AbstractApiProducer; import org.jboss.fuse.example.internal.ExampleApiName; import org.jboss.fuse.example.internal.ExamplePropertiesHelper; /** * The Example producer. */ public class ExampleProducer extends AbstractApiProducer<ExampleApiName, ExampleConfiguration> { public ExampleProducer(ExampleEndpoint endpoint) { super(endpoint, ExamplePropertiesHelper.getHelper()); } }", "// Java package org.jboss.fuse.example; import org.apache.camel.spi.UriParams; /** * Component configuration for Example component. */ @UriParams public class ExampleConfiguration { // TODO add component configuration properties }", "scheme :// endpoint-prefix / endpoint ? Option1 = Value1 &...& OptionN = ValueN", "<configuration> <apis> <api> <apiName> hello-javadoc </apiName> <proxyClass>org.jboss.fuse.example.api.ExampleJavadocHello</proxyClass> <fromJavadoc/> </api> </apis> </configuration>", "example://hello-javadoc/greetMe?name=Jane%20Doe", "src/main/resources/META-INF/services/org/apache/camel/component/example", "class=org.jboss.fuse.example.ExampleComponent", "// Java private Object apiProxy; @Override protected void afterConfigureProperties() { // TODO create API proxy, set connection properties, etc. switch (apiName) { case HELLO_FILE: apiProxy = new ExampleFileHello(); break; case HELLO_JAVADOC: apiProxy = new ExampleJavadocHello(); break; default: throw new IllegalArgumentException(\"Invalid API name \" + apiName); } }", "@Override public Object getApiProxy(ApiMethod method, Map<String, Object> args) { return apiProxy; }", "// Java @Override protected Object doInvokeMethod(Map<String, Object> args) { try { return super.doInvokeMethod(args); } catch (RuntimeCamelException e) { // TODO - Insert custom error handling here! } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/getstart
Monitoring Ceph with Nagios Guide
Monitoring Ceph with Nagios Guide Red Hat Ceph Storage 4 Monitoring Ceph with Nagios Core. Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/monitoring_ceph_with_nagios_guide/index
Appendix B. Command options for insights-client
Appendix B. Command options for insights-client You can use the settings in the /etc/insights-client/insights-client.conf configuration file to change how the Insights client operates on your system. B.1. Options for the Insights client configuration file When the configuration file and the CLI have similar options, the CLI option is executed when you enter the insights-client command. When the scheduler runs the client, the configuration file options are executed. Note You must enter the choices exactly as shown. True and False use initial capital letters. The changes initiated by the options take effect either at the scheduled run, or when you enter the insights-client command. The options should be formatted as key=value pairs. Table B.1. insights-client.conf configuration options Option Description ansible_host Use this option if you want a different hostname when running Ansible playbooks. authmethod=CERT Set the authentication method. Valid option is CERT. The default value is CERT. auto_config=True Use this to auto configure with Satellite server. Values can be True (default) or False. NOTE: When auto_config=True (default), the authentication method used is CERT . auto_update=True Automatically update the dynamic configuration. The default is True. Change to False if you do not want to automatically update. base_url=cert-api.access.redhat.com:443/r/insights This is the base URL for the API. cmd_timeout=120 This is for commands run during collection and is measured in seconds. The command processes are terminated when the timeout value is reached. content_redaction_file Use this to omit lines or keywords from files and commands in the core collection. The core collection is a more comprehensive result set. You do not need to change the default configuration. The content_redaction_file option uses the /etc/insights-client/file-content-redaction.yaml file by default. display_name Use this as the display name for registration. The default is to use /etc/hostname . NOTE: This value interacts with the insights-client --display-name command. If you use the CLI to change the display name but a different display name is enabled in the configuration file, the display name reverts to the configuration file value when the scheduler runs the Insights client. http_timeout=120 This is for HTTP calls and is measured in seconds. The command processes terminate when the timeout value is reached. [insights-client] This is a required first line of the configuration file, even if you specify a different location or name for the client configuration file. loglevel=DEBUG Use this to change the log level. Options are: DEBUG, INFO, WARNING, ERROR, and CRITICAL. The default is DEBUG. The default log file location is /var/log/insights-client/insights-client.log . obfuscate=False Use this to obfuscate IPv4 addresses. The default is False . Change to True to enable address obfuscation. obfuscate_hostname=False Use this to obfuscate hostname. You must set obfuscate=True to obfuscate the hostname, which enables IPv4 address obfuscation. You cannot obfuscate only the hostname. proxy Use this for the URL for your proxy. Example: http://user:[email protected]:8080 redaction_file Use this to omit files or commands from the core collection. The core collection is a more comprehensive result set. You do not need to change the default configuration. The redaction_file option uses /etc/insights-client/file-redaction.yaml by default.
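Putting a few of the options above together, the following is a small illustrative /etc/insights-client/insights-client.conf snippet. It is not a recommended configuration: the keys and allowed values are taken from the table, but the particular choices and the display_name value are placeholders you would replace with your own.

[insights-client]
# Keep the default Satellite auto-configuration and certificate authentication.
auto_config=True
authmethod=CERT
# Reduce log verbosity from the default DEBUG level.
loglevel=INFO
# Obfuscate IPv4 addresses and the hostname in uploaded data
# (obfuscate=True is required before obfuscate_hostname can take effect).
obfuscate=True
obfuscate_hostname=True
# Report this system under a friendlier name (placeholder value).
display_name=webserver-01.example.com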
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/client_configuration_guide_for_red_hat_insights_with_fedramp/assembly-insights-client-cg-config-file
Chapter 10. Using Red Hat subscriptions in builds
Chapter 10. Using Red Hat subscriptions in builds Use the following sections to install Red Hat subscription content within OpenShift Container Platform builds. 10.1. Creating an image stream tag for the Red Hat Universal Base Image To install Red Hat Enterprise Linux (RHEL) packages within a build, you can create an image stream tag to reference the Red Hat Universal Base Image (UBI). To make the UBI available in every project in the cluster, add the image stream tag to the openshift namespace. Otherwise, to make it available in a specific project , add the image stream tag to that project. Image stream tags grant access to the UBI by using the registry.redhat.io credentials that are present in the install pull secret, without exposing the pull secret to other users. This method is more convenient than requiring each developer to install pull secrets with registry.redhat.io credentials in each project. Procedure To create an ImageStreamTag resource in the openshift namespace, so it is available to developers in all projects, enter the following command: USD oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi9:latest -n openshift Tip You can alternatively apply the following YAML to create an ImageStreamTag resource in the openshift namespace: apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source To create an ImageStreamTag resource in a single project, enter the following command: USD oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest Tip You can alternatively apply the following YAML to create an ImageStreamTag resource in a single project: apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source 10.2. Adding subscription entitlements as a build secret Builds that use Red Hat subscriptions to install content must include the entitlement keys as a build secret. Prerequisites You must have access to Red Hat Enterprise Linux (RHEL) package repositories through your subscription. The entitlement secret to access these repositories is automatically created by the Insights Operator when your cluster is subscribed. You must have access to the cluster as a user with the cluster-admin role or you have permission to access secrets in the openshift-config-managed project. Procedure Copy the entitlement secret from the openshift-config-managed namespace to the namespace of the build by entering the following commands: USD cat << EOF > secret-template.txt kind: Secret apiVersion: v1 metadata: name: etc-pki-entitlement type: Opaque data: {{ range \USDkey, \USDvalue := .data }} {{ \USDkey }}: {{ \USDvalue }} {{ end }} EOF USD oc get secret etc-pki-entitlement -n openshift-config-managed -o=go-template-file --template=secret-template.txt | oc apply -f - Add the etc-pki-entitlement secret as a build volume in the build configuration's Docker strategy: strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement 10.3. Running builds with Subscription Manager 10.3.1. Docker builds using Subscription Manager Docker strategy builds can use yum or dnf to install additional Red Hat Enterprise Linux (RHEL) packages. 
Prerequisites The entitlement keys must be added as build strategy volumes. Procedure Use the following as an example Dockerfile to install content with the Subscription Manager: FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \ 2 nss_wrapper \ uid_wrapper -y && \ yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 1 You must include the command to remove the /etc/rhsm-host directory and all its contents in your Dockerfile before executing any yum or dnf commands. 2 Use the Red Hat Package Browser to find the correct repositories for your installed packages. 3 You must restore the /etc/rhsm-host symbolic link to keep your image compatible with other Red Hat container images. 10.4. Running builds with Red Hat Satellite subscriptions 10.4.1. Adding Red Hat Satellite configurations to builds Builds that use Red Hat Satellite to install content must provide appropriate configurations to obtain content from Satellite repositories. Prerequisites You must provide or create a yum -compatible repository configuration file that downloads content from your Satellite instance. Sample repository configuration [test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem Procedure Create a ConfigMap object containing the Satellite repository configuration file by entering the following command: USD oc create configmap yum-repos-d --from-file /path/to/satellite.repo Add the Satellite repository configuration and entitlement key as a build volumes: strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement 10.4.2. Docker builds using Red Hat Satellite subscriptions Docker strategy builds can use Red Hat Satellite repositories to install subscription content. Prerequisites You have added the entitlement keys and Satellite repository configurations as build volumes. Procedure Use the following example to create a Dockerfile for installing content with Satellite: FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \ 2 nss_wrapper \ uid_wrapper -y && \ yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 1 You must include the command to remove the /etc/rhsm-host directory and all its contents in your Dockerfile before executing any yum or dnf commands. 2 Contact your Satellite system administrator to find the correct repositories for the build's installed packages. 3 You must restore the /etc/rhsm-host symbolic link to keep your image compatible with other Red Hat container images. Additional resources How to use builds with Red Hat Satellite subscriptions and which certificate to use 10.5. Running builds using SharedSecret objects You can use a SharedSecret object to securely access the entitlement keys of a cluster in builds. The SharedSecret object allows you to share and synchronize secrets across namespaces. Important Shared Resource CSI Driver is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have enabled the TechPreviewNoUpgrade feature set by using the feature gates. For more information, see Enabling features using feature gates . You must have permission to perform the following actions: Create build configs and start builds. Discover which SharedSecret CR instances are available by entering the oc get sharedsecrets command and getting a non-empty list back. Determine if the builder service account available to you in your namespace is allowed to use the given SharedSecret CR instance. In other words, you can run oc adm policy who-can use <identifier of specific SharedSecret> to see if the builder service account in your namespace is listed. Note If neither of the last two prerequisites in this list are met, establish, or ask someone to establish, the necessary role-based access control (RBAC) so that you can discover SharedSecret CR instances and enable service accounts to use SharedSecret CR instances. Procedure Use oc apply to create a SharedSecret object instance with the cluster's entitlement secret. Important You must have cluster administrator permissions to create SharedSecret objects. Example oc apply -f command with YAML Role object definition USD oc apply -f - <<EOF kind: SharedSecret apiVersion: sharedresource.openshift.io/v1alpha1 metadata: name: etc-pki-entitlement spec: secretRef: name: etc-pki-entitlement namespace: openshift-config-managed EOF Create a role to grant the builder service account permission to access the SharedSecret object: Example oc apply -f command USD oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: builder-etc-pki-entitlement namespace: build-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - etc-pki-entitlement verbs: - use EOF Create a RoleBinding object that grants the builder service account permission to access the SharedSecret object by running the following command: Example oc create rolebinding command USD oc create rolebinding builder-etc-pki-entitlement --role=builder-etc-pki-entitlement --serviceaccount=build-namespace:builder Add the entitlement secret to your BuildConfig object by using a CSI volume mount: Example YAML BuildConfig object definition apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: uid-wrapper-rhel9 namespace: build-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \ 2 nss_wrapper \ uid_wrapper -y && \ yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: "/etc/pki/entitlement" name: etc-pki-entitlement source: csi: driver: csi.sharedresource.openshift.io readOnly: true 4 volumeAttributes: sharedSecret: etc-pki-entitlement 5 type: CSI 1 You must include the command to remove the /etc/rhsm-host directory and all its contents in the Dockerfile before executing any yum or dnf 
commands. 2 Use the Red Hat Package Browser to find the correct repositories for your installed packages. 3 You must restore the /etc/rhsm-host symbolic link to keep your image compatible with other Red Hat container images. 4 You must set readOnly to true to mount the shared resource in the build. 5 Reference the name of the SharedSecret object to include it in the build. Start a build from the BuildConfig object and follow the logs using the oc command. USD oc start-build uid-wrapper-rhel9 -n build-namespace -F 10.6. Additional resources Importing simple content access certificates with Insights Operator Enabling features using feature gates Managing image streams Build strategies
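To tie the preceding sections together, the following is a minimal sketch of a Docker-strategy BuildConfig that builds from the ubi9 image stream tag created in Section 10.1 and mounts the etc-pki-entitlement secret from Section 10.2 as a build volume. The BuildConfig name and the inline Dockerfile content are hypothetical examples rather than part of the procedures above; adjust them for your project.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: entitled-build                       # hypothetical name
spec:
  source:
    dockerfile: |
      FROM registry.redhat.io/ubi9/ubi:latest
      # Remove the rhsm-host link so yum uses the mounted entitlement certificates
      RUN rm -rf /etc/rhsm-host
      RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install -y nss_wrapper && \
          yum clean all -y
      # Restore the symbolic link for compatibility with other Red Hat container images
      RUN ln -s /run/secrets/rhsm /etc/rhsm-host
  strategy:
    type: Docker
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: ubi9:latest                    # tag from Section 10.1; add namespace: openshift if you created it there
      volumes:
      - name: etc-pki-entitlement
        mounts:
        - destinationPath: /etc/pki/entitlement
        source:
          type: Secret
          secret:
            secretName: etc-pki-entitlement  # secret copied in Section 10.2

You can then start the build with oc start-build entitled-build -F and watch the yum transaction in the build log.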
[ "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi9:latest -n openshift", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "cat << EOF > secret-template.txt kind: Secret apiVersion: v1 metadata: name: etc-pki-entitlement type: Opaque data: {{ range \\USDkey, \\USDvalue := .data }} {{ \\USDkey }}: {{ \\USDvalue }} {{ end }} EOF oc get secret etc-pki-entitlement -n openshift-config-managed -o=go-template-file --template=secret-template.txt | oc apply -f -", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3", "[test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem", "oc create configmap yum-repos-d --from-file /path/to/satellite.repo", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3", "oc apply -f - <<EOF kind: SharedSecret apiVersion: sharedresource.openshift.io/v1alpha1 metadata: name: etc-pki-entitlement spec: secretRef: name: etc-pki-entitlement namespace: openshift-config-managed EOF", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: builder-etc-pki-entitlement namespace: build-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - etc-pki-entitlement verbs: - use EOF", "oc create rolebinding builder-etc-pki-entitlement --role=builder-etc-pki-entitlement --serviceaccount=build-namespace:builder", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: uid-wrapper-rhel9 namespace: build-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: \"/etc/pki/entitlement\" name: etc-pki-entitlement source: csi: driver: csi.sharedresource.openshift.io readOnly: true 4 volumeAttributes: sharedSecret: etc-pki-entitlement 5 type: CSI", "oc start-build 
uid-wrapper-rhel9 -n build-namespace -F" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/builds_using_buildconfig/running-entitled-builds
Chapter 7. Creating secure HTTP load balancers In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can create various types of load balancers to manage secure HTTP (HTTPS) network traffic: Section 7.1, "About non-terminated HTTPS load balancers" Section 7.2, "Creating a non-terminated HTTPS load balancer" Section 7.3, "About TLS-terminated HTTPS load balancers" Section 7.4, "Creating a TLS-terminated HTTPS load balancer" Section 7.5, "Creating a TLS-terminated HTTPS load balancer with SNI" Section 7.6, "Creating a TLS-terminated load balancer with an HTTP/2 listener" Section 7.7, "Creating HTTP and TLS-terminated HTTPS load balancing on the same IP and back-end" 7.1. About non-terminated HTTPS load balancers A non-terminated HTTPS load balancer acts effectively like a generic TCP load balancer: the load balancer forwards the raw TCP traffic from the web client to the back-end servers where the HTTPS connection is terminated with the web clients. While non-terminated HTTPS load balancers do not support advanced load balancer features like Layer 7 functionality, they do lower load balancer resource utilization by managing the certificates and keys themselves. 7.2. Creating a non-terminated HTTPS load balancer If your application requires HTTPS traffic to terminate on the back-end member servers, typically called HTTPS pass through , you can use the HTTPS protocol for your Red Hat OpenStack Services on OpenShift (RHOSO) load balancer listeners in a RHOSO environment. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. A shared external (public) subnet that you can reach from the internet. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Create a load balancer ( lb1 ) on a public subnet ( public_subnet ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Create a listener ( listener1 ) on a port ( 443 ). Example Create the listener default pool ( pool1 ). Example The command in this example creates an HTTPS pool that uses a private subnet containing back-end servers that host HTTPS applications configured with a TLS-encrypted web application on TCP port 443: Create a health monitor ( healthmon1 ) on the pool ( pool1 ) of type ( TLS-HELLO ) that connects to the back-end servers and tests the path ( / ). Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE . Example Add load balancer members ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the default pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Verification View and verify the load balancer ( lb1 ) settings. Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. Example A working member ( member1 ) has an ONLINE value for its operating_status . Sample output 7.3. 
About TLS-terminated HTTPS load balancers When a TLS-terminated HTTPS load balancer is implemented in a Red Hat OpenStack Services on OpenShift (RHOSO) environment, web clients communicate with the load balancer over Transport Layer Security (TLS) protocols. The load balancer terminates the TLS session and forwards the decrypted requests to the back-end servers. When you terminate the TLS session on the load balancer, you offload the CPU-intensive encryption operations to the load balancer, and allow the load balancer to use advanced features such as Layer 7 inspection. 7.4. Creating a TLS-terminated HTTPS load balancer When you use TLS-terminated HTTPS load balancers, you offload the CPU-intensive encryption operations to the load balancer, and allow the load balancer to use advanced features such as Layer 7 inspection. In Red Hat OpenStack Services on OpenShift (RHOSO) environments, it is a best practice to also create a health monitor to ensure that your back-end members remain available. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. A shared external (public) subnet that you can reach from the internet. TLS public-key cryptography is configured with the following characteristics: A TLS certificate, key, and intermediate certificate chain is obtained from an external certificate authority (CA) for the DNS name that is assigned to the load balancer VIP address, for example, www.example.com . The certificate, key, and intermediate certificate chain reside in separate files in the current directory. The key and certificate are PEM-encoded. The intermediate certificate chain contains multiple certificates that are PEM-encoded and concatenated together. You must configure the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Managing secrets with the Key Manager service guide. Procedure Combine the key ( server.key ), certificate ( server.crt ), and intermediate certificate chain ( ca-chain.crt ) into a single PKCS12 file ( server.p12 ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Note The following procedure does not work if you password protect the PKCS12 file. Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Use the Key Manager service to create a secret resource ( tls_secret1 ) for the PKCS12 file. Example Create a load balancer ( lb1 ) on the public subnet ( public_subnet ). Example Create a TERMINATED_HTTPS listener ( listener1 ), and reference the secret resource as the default TLS container for the listener. Example Create a pool ( pool1 ) and make it the default pool for the listener. Example The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host non-secure HTTP applications on TCP port 80: Create a health monitor ( healthmon1 ) of type ( HTTP ) on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). Health checks are recommended but not required. 
If no health monitor is defined, the member server is assumed to be ONLINE . Example Add the non-secure HTTP back-end servers ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Verification View and verify the load balancer ( lb1 ) settings. Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. Example A working member ( member1 ) has an ONLINE value for its operating_status : Sample output 7.5. Creating a TLS-terminated HTTPS load balancer with SNI For TLS-terminated HTTPS load balancers that employ Server Name Indication (SNI) technology, a single listener can contain multiple TLS certificates and enable the load balancer to know which certificate to present when it uses a shared IP. In Red Hat OpenStack Services on OpenShift (RHOSO) environments, it is a best practice to also create a health monitor to ensure that your back-end members remain available. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. A shared external (public) subnet that you can reach from the internet. TLS public-key cryptography is configured with the following characteristics: Multiple TLS certificates, keys, and intermediate certificate chains have been obtained from an external certificate authority (CA) for the DNS names assigned to the load balancer VIP address, for example, www.example.com and www2.example.com . The keys and certificates are PEM-encoded. You must configure the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Managing secrets with the Key Manager service guide. Procedure For each of the TLS certificates in the SNI list, combine the key ( server.key ), certificate ( server.crt ), and intermediate certificate chain ( ca-chain.crt ) into a single PKCS12 file ( server.p12 ). In this example, you create two PKCS12 files ( server.p12 and server2.p12 ) one for each certificate ( www.example.com and www2.example.com ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Use the Key Manager service to create secret resources ( tls_secret1 and tls_secret2 ) for the PKCS12 file. Example Create a load balancer ( lb1 ) on the public subnet ( public_subnet ). Example Create a TERMINATED_HTTPS listener ( listener1 ), and use SNI to reference both the secret resources. (Reference tls_secret1 as the default TLS container for the listener.) Example Create a pool ( pool1 ) and make it the default pool for the listener. Example The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host non-secure HTTP applications on TCP port 80: Create a health monitor ( healthmon1 ) of type ( HTTP ) on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). 
Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE . Example Add the non-secure HTTP back-end servers ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Verification View and verify the load balancer ( lb1 ) settings. Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. Example Sample output A working member ( member1 ) has an ONLINE value for its operating_status : 7.6. Creating a TLS-terminated load balancer with an HTTP/2 listener When you use TLS-terminated HTTPS load balancers, you offload the CPU-intensive encryption operations to the load balancer, and allow the load balancer to use advanced features such as Layer 7 inspection. With the addition of an HTTP/2 listener, you can leverage the HTTP/2 protocol to improve performance by loading pages faster. Load balancers negotiate HTTP/2 with clients by using the Application-Layer Protocol Negotiation (ALPN) TLS extension. The Load-balancing service (octavia) supports end-to-end HTTP/2 traffic, which means that the HTTP2 traffic is not translated by HAProxy from the point where the request reaches the listener until the response returns from the load balancer. To achieve end-to-end HTTP/2 traffic, you must have an HTTP pool with back-end re-encryption: pool members that are listening on a secure port and web applications that are configured for HTTPS traffic. You can send HTTP/2 traffic to an HTTP pool without back-end re-encryption. In this situation, HAProxy translates the traffic before it reaches the pool, and the response is translated back to HTTP/2 before it returns from the load balancer. Red Hat recommends that you create a health monitor to ensure that your back-end members remain available in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. Note Currently, the Load-balancing service does not support health monitoring for TLS-terminated load balancers that use HTTP/2 listeners. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. TLS public-key cryptography is configured with the following characteristics: A TLS certificate, key, and intermediate certificate chain is obtained from an external certificate authority (CA) for the DNS name that is assigned to the load balancer VIP address, for example, www.example.com . The certificate, key, and intermediate certificate chain reside in separate files in the current directory. The key and certificate are PEM-encoded. The intermediate certificate chain contains multiple certificates that are PEM-encoded and concatenated together. You must configure the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Managing secrets with the Key Manager service guide. Procedure Combine the key ( server.key ), certificate ( server.crt ), and intermediate certificate chain ( ca-chain.crt ) into a single PKCS12 file ( server.p12 ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Important When you create the PKCS12 file, do not password protect the file. 
Example In this example, the PKCS12 file is created without a password: Use the Key Manager service to create a secret resource ( tls_secret1 ) for the PKCS12 file. Example Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Create a load balancer ( lb1 ) on the public subnet ( public_subnet ). Example Create a TERMINATED_HTTPS listener ( listener1 ) and do the following: reference the secret resource ( tls_secret1 ) as the default TLS container for the listener. set the ALPN protocol ( h2 ). set the fallback protocol if the client does not support HTTP/2 ( http/1.1 ). Example Create a pool ( pool1 ) and make it the default pool for the listener. Example The command in this example creates an HTTP pool containing back-end servers that host HTTP applications configured with a web application on TCP port 80: Create a health monitor ( healthmon1 ) of type ( TCP ) on the pool ( pool1 ) that connects to the back-end servers. Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE . Example Add the HTTP back-end servers ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Verification View and verify the load balancer ( lb1 ) settings. Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. Example Sample output A working member ( member1 ) has an ONLINE value for its operating_status : 7.7. Creating HTTP and TLS-terminated HTTPS load balancing on the same IP and back-end You can configure a non-secure listener and a TLS-terminated HTTPS listener on the same load balancer and the same IP address when you want to respond to web clients with the exact same content, regardless if the client is connected with a secure or non-secure HTTP protocol. In Red Hat OpenStack Services on OpenShift (RHOSO) environments, it is a best practice to also create a health monitor to ensure that your back-end members remain available. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. A shared external (public) subnet that you can reach from the internet. TLS public-key cryptography is configured with the following characteristics: A TLS certificate, key, and optional intermediate certificate chain have been obtained from an external certificate authority (CA) for the DNS name assigned to the load balancer VIP address (for example, www.example.com). The certificate, key, and intermediate certificate chain reside in separate files in the current directory. The key and certificate are PEM-encoded. The intermediate certificate chain contains multiple certificates that are PEM-encoded and concatenated together. You have configured the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Managing secrets with the Key Manager service guide. The non-secure HTTP listener is configured with the same pool as the HTTPS TLS-terminated load balancer. 
Procedure Combine the key ( server.key ), certificate ( server.crt ), and intermediate certificate chain ( ca-chain.crt ) into a single PKCS12 file ( server.p12 ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Use the Key Manager service to create a secret resource ( tls_secret1 ) for the PKCS12 file. Example Create a load balancer ( lb1 ) on the public subnet ( public_subnet ). Example Create a TERMINATED_HTTPS listener ( listener1 ), and reference the secret resource as the default TLS container for the listener. Example Create a pool ( pool1 ) and make it the default pool for the listener. Example The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host non-secure HTTP applications on TCP port 80: Create a health monitor ( healthmon1 ) of type ( HTTP ) on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE . Example Add the non-secure HTTP back-end servers ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Create a non-secure, HTTP listener ( listener2 ), and make its default pool, the same as the secure listener. Example Verification View and verify the load balancer ( lb1 ) settings. Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. Example Sample output A working member ( member1 ) has an ONLINE value for its operating_status :
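As an optional check after completing any of the preceding procedures, you can confirm from a client machine that the load balancer answers on its listeners. The following is a minimal sketch: the VIP address 198.51.100.11 is taken from the sample output above, the -k option skips certificate verification and is only appropriate for a self-signed or test certificate, and the HTTP request on port 80 applies only after the procedure in Section 7.7.

# Request the TLS-terminated listener on port 443 (certificate verification skipped for a test certificate)
curl -kv https://198.51.100.11/
# Request the plain HTTP listener on port 80 (present only after the procedure in Section 7.7)
curl -v http://198.51.100.11/

Repeating the requests a few times should alternate responses between member1 and member2 when the pool uses the ROUND_ROBIN algorithm.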
[ "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --wait", "openstack loadbalancer listener create --name listener1 --protocol HTTPS --protocol-port 443 lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTPS", "openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type TLS-HELLO --url-path / pool1", "openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 443 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 443 pool1", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+", "openstack loadbalancer member show pool1 member1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 443 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+", "dnf list installed python-openstackclient", "openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload=\"USD(base64 < server.p12)\"", "openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --wait", "openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --default-tls-container= USD(openstack secret list | awk '/ tls_secret1 / {print USD2}') lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP", "openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1", "openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 443 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 443 pool1", 
"openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+", "openstack loadbalancer member show pool1 member1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+", "dnf list installed python-openstackclient", "openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12 openssl pkcs12 -export -inkey server2.key -in server2.crt -certfile ca-chain2.crt -passout pass: -out server2.p12", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload=\"USD(base64 < server.p12)\" openstack secret store --name='tls_secret2' -t 'application/octet-stream' -e 'base64' --payload=\"USD(base64 < server2.p12)\"", "openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --wait", "openstack loadbalancer listener create --name listener1 --protocol-port 443 --protocol TERMINATED_HTTPS --default-tls-container= USD(openstack secret list | awk '/ tls_secret1 / {print USD2}') --sni-container-refs USD(openstack secret list | awk '/ tls_secret1 / {print USD2}') USD(openstack secret list | awk '/ tls_secret2 / {print USD2}') -- lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP", "openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1", "openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 443 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 443 pool1", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | 
operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+", "openstack loadbalancer member show pool1 member1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+", "dnf list installed python-openstackclient", "openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12", "openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload=\"USD(base64 < server.p12)\"", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --wait", "openstack loadbalancer listener create --name listener1 --protocol-port 443 --protocol TERMINATED_HTTPS --alpn-protocol h2 --alpn-protocol http/1.1 --default-tls-container= USD(openstack secret list | awk '/ tls_secret1 / {print USD2}') lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP", "openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type TCP pool1", "openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1", "openstack loadbalancer status show lb1", "{ \"loadbalancer\": { \"id\": \"936dad29-4c3f-4f24-84a8-c0e6f10ed810\", \"name\": \"lb1\", \"operating_status\": \"ONLINE\", \"provisioning_status\": \"ACTIVE\", \"listeners\": [ { \"id\": \"708b82c6-8a6b-4ec1-ae53-e619769821d4\", \"name\": \"listener1\", \"operating_status\": \"ONLINE\", \"provisioning_status\": \"ACTIVE\", \"pools\": [ { \"id\": \"5ad7c678-23af-4422-8edb-ac3880bd888b\", \"name\": \"pool1\", \"provisioning_status\": \"ACTIVE\", \"operating_status\": \"ONLINE\", \"health_monitor\": { \"id\": \"4ad786ef-6661-4e31-a325-eca07b2b3dd1\", \"name\": \"healthmon1\", \"type\": \"TCP\", \"provisioning_status\": \"ACTIVE\", \"operating_status\": \"ONLINE\" }, \"members\": [ { \"id\": \"facca0d3-61a7-4b46-85e8-da6994883647\", \"name\": \"member1\", \"operating_status\": \"ONLINE\", \"provisioning_status\": \"ACTIVE\", \"address\": \"192.0.2.10\", \"protocol_port\": 80 }, { \"id\": \"2b0d9e0b-8e0c-48b8-aa57-90b2fde2eae2\", \"name\": \"member2\", \"operating_status\": \"ONLINE\", \"provisioning_status\": \"ACTIVE\", \"address\": \"192.0.2.11\", \"protocol_port\": 80 }", "openstack 
loadbalancer member show pool1 member1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-08-16T20:08:01 | | id | facca0d3-61a7-4b46-85e8-da6994883647 | | name | member1 | | operating_status | ONLINE | | project_id | 9b29c91f67314bd09eda9018616851cf | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 3b459c95-64d2-4cfa-b348-01aacc4b3fa9 | | updated_at | 2024-08-16T20:25:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | | tags | | +---------------------+--------------------------------------+", "dnf list installed python-openstackclient", "openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload=\"USD(base64 < server.p12)\"", "openstack loadbalancer create --name lb1 --vip-subnet-id external_subnet --wait", "openstack loadbalancer listener create --name listener1 --protocol-port 443 --protocol TERMINATED_HTTPS --default-tls-container= USD(openstack secret list | awk '/ tls_secret1 / {print USD2}') lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP", "openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1", "openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 443 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 443 pool1", "openstack loadbalancer listener create --name listener2 --protocol-port 80 --protocol HTTP --default-pool pool1 lb1", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+", "openstack loadbalancer member show pool1 member1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | 
+---------------------+--------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_load_balancing_as_a_service/create-secure-lbs_rhoso-lbaas
Chapter 3. Testing requirements for Red Hat OpenStack Platform Application The RHOSP application testing requirements are mandatory and are provided by Red Hat in a test plan for each certification. The following tests are explained in the Certification tests section of this guide: System Report Test Supportability Test Director Test VNF Configuration Testing Report Test (for VNF only) For a regular RHOSP application, you are expected to perform the System Report, Supportability, and Director tests. For VNF certification, you must also perform the VNF Configuration Testing Report test in addition to these three tests.
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_application_and_vnf_policy_guide/assembly-rhosp-application-testing-requirements_rhosp-vnf-certification-prerequisites
9.15.3. Create Software RAID Redundant arrays of independent disks (RAIDs) are constructed from multiple storage devices that are arranged to provide increased performance and - in some configurations - greater fault tolerance. Refer to the Red Hat Enterprise Linux Storage Administration Guide for a description of different kinds of RAIDs. To make a RAID device, you must first create software RAID partitions. Once you have created two or more software RAID partitions, select RAID to join the software RAID partitions into a RAID device. RAID Partition Choose this option to configure a partition for software RAID. This option is the only choice available if your disk contains no software RAID partitions. This is the same dialog that appears when you add a standard partition - refer to Section 9.15.2, "Adding Partitions" for a description of the available options. Note, however, that File System Type must be set to software RAID Figure 9.42. Create a software RAID partition RAID Device Choose this option to construct a RAID device from two or more existing software RAID partitions. This option is available if two or more software RAID partitions have been configured. Figure 9.43. Create a RAID device Select the file system type as for a standard partition. Anaconda automatically suggests a name for the RAID device, but you can manually select names from md0 to md15 . Click the checkboxes beside individual storage devices to include or remove them from this RAID. The RAID Level corresponds to a particular type of RAID. Choose from the following options: RAID 0 - distributes data across multiple storage devices. Level 0 RAIDs offer increased performance over standard partitions, and can be used to pool the storage of multiple devices into one large virtual device. Note that Level 0 RAIDS offer no redundancy and that the failure of one device in the array destroys the entire array. RAID 0 requires at least two RAID partitions. RAID 1 - mirrors the data on one storage device onto one or more other storage devices. Additional devices in the array provide increasing levels of redundancy. RAID 1 requires at least two RAID partitions. RAID 4 - distributes data across multiple storage devices, but uses one device in the array to store parity information that safeguards the array in case any device within the array fails. Because all parity information is stored on the one device, access to this device creates a bottleneck in the performance of the array. RAID 4 requires at least three RAID partitions. RAID 5 - distributes data and parity information across multiple storage devices. Level 5 RAIDs therefore offer the performance advantages of distributing data across multiple devices, but do not share the performance bottleneck of level 4 RAIDs because the parity information is also distributed through the array. RAID 5 requires at least three RAID partitions. RAID 6 - level 6 RAIDs are similar to level 5 RAIDs, but instead of storing only one set of parity data, they store two sets. RAID 6 requires at least four RAID partitions. RAID 10 - level 10 RAIDs are nested RAIDs or hybrid RAIDs . Level 10 RAIDs are constructed by distributing data over mirrored sets of storage devices. For example, a level 10 RAID constructed from four RAID partitions consists of two pairs of partitions in which one partition mirrors the other. Data is then distributed across both pairs of storage devices, as in a level 0 RAID. RAID 10 requires at least four RAID partitions.
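The dialogs described above also have a command-line equivalent in a kickstart file. The following snippet is a minimal sketch rather than part of this procedure: the disk names sda and sdb, the partition sizes, and the mount point are hypothetical, and it shows the same idea of creating two software RAID partitions and joining them into a RAID1 device.

# Create one software RAID partition on each disk
part raid.01 --size=10240 --ondisk=sda
part raid.02 --size=10240 --ondisk=sdb
# Join the two RAID partitions into a level 1 (mirrored) device, formatted as ext4 and mounted at /home
raid /home --level=1 --device=md0 --fstype=ext4 raid.01 raid.02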
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/create_software_raid-x86
Chapter 12. 4.9 Release Notes 12.1. New Features The following major enhancements have been introduced in Red Hat Update Infrastructure 4.9. Auto update packages on CDS With this update, when a CDS is reinstalled, a dnf update is executed on the CDS host by default, resulting in the RHEL RPMs on the CDS server being updated. Nginx Update With this update, the version of nginx has been updated on the RHUA and CDS hosts. The number of versions preserved can now be customized Previously, all repository versions were retained as newer versions were added, which consumed disk space and, in the worst case, made it impossible to delete a repository. Starting in RHUI version 4.7, only the latest five versions were preserved. With this update, the new sub-command rhui-manager repo set_retain_versions has been added, which allows users to limit the number of versions retained for repositories that were originally created with no limit. Pulp Database migration to PostgreSQL v.15 With this version, you can optionally migrate the database used by Pulp to PostgreSQL v.15 by using the new rhui-installer flag --postgresql-version . For example: rhui-installer --rerun --postgresql-version 15 Orphaned data that is no longer used can now be cleaned up Previously, RPM packages, repodata files, and other related files were kept on the disk even if they were no longer part of a repository. With this update, the new sub-command rhui-manager repo orphan_cleanup has been added, which can be used to clean up some of this orphaned data. 12.2. Bug Fixes The following bugs, which have a significant impact on users, have been fixed in Red Hat Update Infrastructure 4.9. Avoid synchronizations hanging if a network outage happens In the past, if a network outage happened during a Pulp synchronization of a repository, the synchronization could hang indefinitely. With this update, a network timeout is used in the underlying code to prevent this condition.
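The release notes above name the new commands without showing complete invocations. The following is a minimal sketch of how the new maintenance tasks might be run on a RHUA node; the exact options accepted by set_retain_versions, such as the repository selection and the number of versions to keep, are not documented in this chapter, so treat the bare invocations as assumptions and check rhui-manager repo --help on your installation.

# Migrate the Pulp database to PostgreSQL 15 (flag shown in the release note above)
rhui-installer --rerun --postgresql-version 15
# Limit how many versions are retained for repositories created with no limit
# (arguments are omitted here; the sub-command may prompt for the repository and the count)
rhui-manager repo set_retain_versions
# Remove orphaned RPMs, repodata files, and other content no longer referenced by any repository
rhui-manager repo orphan_cleanup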
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/release_notes/assembly_4-9-release-notes_release-notes
Chapter 4. About Kafka Connect Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems. The other system is typically an external data source or target, such as a database. Kafka Connect uses a plugin architecture. Plugins allow connections to other systems and provide additional configuration to manipulate data. Plugins include connectors and other components, such as data converters and transforms. A connector operates with a specific type of external system. Each connector defines a schema for its configuration. You supply the configuration to Kafka Connect to create a connector instance within Kafka Connect. Connector instances then define a set of tasks for moving data between systems. AMQ Streams operates Kafka Connect in distributed mode , distributing data streaming tasks across one or more worker pods. A Kafka Connect cluster comprises a group of worker pods. Each connector is instantiated on a single worker. Each connector comprises one or more tasks that are distributed across the group of workers. Distribution across workers permits highly scalable pipelines. Workers convert data from one format into another format that's suitable for the source or target system. Depending on the configuration of the connector instance, workers might also apply transforms (also known as Single Message Transforms, or SMTs). Transforms adjust messages, such as filtering certain data, before they are converted. Kafka Connect has some built-in transforms, but other transformations can be provided by plugins if necessary. 4.1. How Kafka Connect streams data Kafka Connect uses connector instances to integrate with other systems to stream data. Kafka Connect loads existing connector instances on start up and distributes data streaming tasks and connector configuration across worker pods. Workers run the tasks for the connector instances. Each worker runs as a separate pod to make the Kafka Connect cluster more fault tolerant. If there are more tasks than workers, workers are assigned multiple tasks. If a worker fails, its tasks are automatically assigned to active workers in the Kafka Connect cluster. The main Kafka Connect components used in streaming data are as follows: Connectors to create tasks Tasks to move data Workers to run tasks Transforms to manipulate data Converters to convert data 4.1.1. Connectors Connectors can be one of the following type: Source connectors that push data into Kafka Sink connectors that extract data out of Kafka Plugins provide the implementation for Kafka Connect to run connector instances. Connector instances create the tasks required to transfer data in and out of Kafka. The Kafka Connect runtime orchestrates the tasks to split the work required between the worker pods. MirrorMaker 2.0 also uses the Kafka Connect framework. In this case, the external data system is another Kafka cluster. Specialized connectors for MirrorMaker 2.0 manage data replication between source and target Kafka clusters. Note In addition to the MirrorMaker 2.0 connectors, Kafka provides two built-in connectors as examples: FileStreamSourceConnector streams data from a file on the worker's filesystem to Kafka, reading the input file and sending each line to a given Kafka topic. FileStreamSinkConnector streams data from Kafka to the worker's filesystem, reading messages from a Kafka topic and writing a line for each in an output file. 
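To make the connector concepts concrete, the following is a minimal sketch of a KafkaConnector custom resource that runs the built-in FileStreamSourceConnector mentioned in the note above. The connector name, the Kafka Connect cluster name my-connect-cluster, the file path, and the topic are hypothetical, and the FileStream connectors are intended as examples rather than for production use.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-file-source                         # hypothetical connector instance name
  labels:
    strimzi.io/cluster: my-connect-cluster     # Kafka Connect cluster that runs the connector
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 1                                  # upper bound on parallel tasks for this connector
  config:
    file: /opt/kafka/LICENSE                   # file read on the worker's filesystem
    topic: my-topic                            # Kafka topic that receives one record per line

For the resource to be reconciled, the Kafka Connect cluster must be created with the strimzi.io/use-connector-resources annotation enabled, as shown in the sketch at the end of this chapter.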
The following source connector diagram shows the process flow for a source connector that streams records from an external data system. A Kafka Connect cluster might operate source and sink connectors at the same time. Workers are running in distributed mode in the cluster. Workers can run one or more tasks for more than one connector instance. Source connector streaming data to Kafka A plugin provides the implementation artifacts for the source connector A single worker initiates the source connector instance The source connector creates the tasks to stream data Tasks run in parallel to poll the external data system and return records Transforms adjust the records, such as filtering or relabelling them Converters put the records into a format suitable for Kafka The source connector is managed using KafkaConnectors or the Kafka Connect API The following sink connector diagram shows the process flow when streaming data from Kafka to an external data system. Sink connector streaming data from Kafka A plugin provides the implementation artifacts for the sink connector A single worker initiates the sink connector instance The sink connector creates the tasks to stream data Tasks run in parallel to poll Kafka and return records Converters put the records into a format suitable for the external data system Transforms adjust the records, such as filtering or relabelling them The sink connector is managed using KafkaConnectors or the Kafka Connect API 4.1.2. Tasks Data transfer orchestrated by the Kafka Connect runtime is split into tasks that run in parallel. A task is started using the configuration supplied by a connector instance. Kafka Connect distributes the task configurations to workers, which instantiate and execute tasks. A source connector task polls the external data system and returns a list of records that a worker sends to the Kafka brokers. A sink connector task receives Kafka records from a worker for writing to the external data system. For sink connectors, the number of tasks created relates to the number of partitions being consumed. For source connectors, how the source data is partitioned is defined by the connector. You can control the maximum number of tasks that can run in parallel by setting tasksMax in the connector configuration. The connector might create fewer tasks than the maximum setting. For example, the connector might create fewer tasks if it's not possible to split the source data into that many partitions. Note In the context of Kafka Connect, a partition can mean a topic partition or a shard of data in an external system. 4.1.3. Workers Workers employ the connector configuration deployed to the Kafka Connect cluster. The configuration is stored in an internal Kafka topic used by Kafka Connect. Workers also run connectors and their tasks. A Kafka Connect cluster contains a group of workers with the same group.id . The ID identifies the cluster within Kafka. The ID is assigned in the worker configuration through the KafkaConnect resource. Worker configuration also specifies the names of internal Kafka Connect topics. The topics store connector configuration, offset, and status information. The group ID and names of these topics must also be unique to the Kafka Connect cluster. Workers are assigned one or more connector instances and tasks. The distributed approach to deploying Kafka Connect is fault tolerant and scalable. If a worker pod fails, the tasks it was running are reassigned to active workers. 
You can add to a group of worker pods through configuration of the replicas property in the KafkaConnect resource. 4.1.4. Transforms Kafka Connect translates and transforms external data. Single-message transforms change messages into a format suitable for the target destination. For example, a transform might insert or rename a field. Transforms can also filter and route data. Plugins contain the implementation required for workers to perform one or more transformations. Source connectors apply transforms before converting data into a format supported by Kafka. Sink connectors apply transforms after converting data into a format suitable for an external data system. A transform comprises a set of Java class files packaged in a JAR file for inclusion in a connector plugin. Kafka Connect provides a set of standard transforms, but you can also create your own. 4.1.5. Converters When a worker receives data, it converts the data into an appropriate format using a converter. You specify converters for workers in the worker config in the KafkaConnect resource. Kafka Connect can convert data to and from formats supported by Kafka, such as JSON or Avro. It also supports schemas for structuring data. If you are not converting data into a structured format, you don't need to enable schemas. Note You can also specify converters for specific connectors to override the general Kafka Connect worker configuration that applies to all workers. Additional resources Apache Kafka documentation Kafka Connect configuration of workers Synchronizing data between Kafka clusters using MirrorMaker 2.0
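To tie together the worker, converter, and replica settings described in sections 4.1.3 to 4.1.5, the following is a minimal sketch of a KafkaConnect resource. The cluster name, bootstrap address, and internal topic names are hypothetical; what matters is that the group.id and the three internal topic names are unique to this Kafka Connect cluster, and that the converters set here apply to all workers unless a connector overrides them.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster                     # hypothetical cluster name
  annotations:
    strimzi.io/use-connector-resources: "true" # manage connectors with KafkaConnector resources
spec:
  replicas: 3                                  # number of worker pods
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  config:
    group.id: my-connect-cluster               # identifies this Connect cluster within Kafka
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: false        # disable schemas if the data is not structured
    value.converter.schemas.enable: false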
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/amq_streams_on_openshift_overview/kafka-connect-components_str
13.2. Transferring Data Using RoCE
13.2. Transferring Data Using RoCE RDMA over Converged Ethernet (RoCE) is a network protocol that enables remote direct memory access (RDMA) over an Ethernet network. There are two RoCE versions, RoCE v1 and RoCE v2, depending on the network adapter used. RoCE v1 The RoCE v1 protocol is an Ethernet link layer protocol with ethertype 0x8915 that enables communication between any two hosts in the same Ethernet broadcast domain. RoCE v1 is the default version for RDMA Connection Manager (RDMA_CM) when using the ConnectX-3 network adapter. RoCE v2 The RoCE v2 protocol exists on top of either the UDP over IPv4 or the UDP over IPv6 protocol. The UDP destination port number 4791 has been reserved for RoCE v2. Since Red Hat Enterprise Linux 7.5, RoCE v2 is the default version for RDMA_CM when using the ConnectX-3 Pro, ConnectX-4, ConnectX-4 Lx and ConnectX-5 network adapters. This hardware supports both RoCE v1 and RoCE v2. RDMA Connection Manager (RDMA_CM) is used to set up a reliable connection between a client and a server for transferring data. RDMA_CM provides an RDMA transport-neutral interface for establishing connections. The communication is over a specific RDMA device, and data transfers are message-based. Prerequisites An RDMA_CM session requires one of the following: Both client and server support the same RoCE mode. A client supports RoCE v1 and a server RoCE v2. Since a client determines the mode of the connection, the following cases are possible: A successful connection: If a client is in RoCE v1 or in RoCE v2 mode, depending on the network card and the driver used, the corresponding server must have the same version to create a connection. The connection is also successful if a client is in RoCE v1 and a server in RoCE v2 mode. A failed connection: If a client is in RoCE v2 and the corresponding server is in RoCE v1, no connection can be established. In this case, update the driver or the network adapter of the corresponding server; see Section 13.2, "Transferring Data Using RoCE" Table 13.1. RoCE Version Defaults Using RDMA_CM Client Server Default setting RoCE v1 RoCE v1 Connection RoCE v1 RoCE v2 Connection RoCE v2 RoCE v2 Connection RoCE v2 RoCE v1 No connection Note that RoCE v2 on the client and RoCE v1 on the server are not compatible. To resolve this issue, force both the server-side and client-side environment to communicate over RoCE v1. This means forcing hardware that supports RoCE v2 to use RoCE v1: Procedure 13.1. Changing the Default RoCE Mode When the Hardware Is Already Running in RoCE v2 Change into the /sys/kernel/config/rdma_cm directory to set the RoCE mode: Enter the ibstat command with an Ethernet network device to display the status. For example, for mlx5_0 : Create a directory for the mlx5_0 device and change into it: Display the RoCE mode in the default_roce_mode file in the tree format: Change the default RoCE mode: View the changes:
[ "~]# cd /sys/kernel/config/rdma_cm", "~]USD ibstat mlx5_0 CA 'mlx5_0' CA type: MT4115 Number of ports: 1 Firmware version: 12.17.1010 Hardware version: 0 Node GUID: 0x248a0703004bf0a4 System image GUID: 0x248a0703004bf0a4 Port 1: State: Active Physical state: LinkUp Rate: 40 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x04010000 Port GUID: 0x268a07fffe4bf0a4 Link layer: Ethernet", "~]# mkdir mlx5_0", "~]# cd mlx5_0", "~]USD tree └── ports └── 1 ├── default_roce_mode └── default_roce_tos", "~]USD cat /sys/kernel/config/rdma_cm/mlx5_0/ports/1/default_roce_mode RoCE v2", "~]# echo \"RoCE v1\" > /sys/kernel/config/rdma_cm/mlx5_0/ports/1/default_roce_mode", "~]USD cat /sys/kernel/config/rdma_cm/mlx5_0/ports/1/default_roce_mode RoCE v1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-tranferring_data_using_roce
7.5. Defining Audit Rules
7.5. Defining Audit Rules The Audit system operates on a set of rules that define what is to be captured in the log files. The following types of Audit rules can be specified: Control rules Allow the Audit system's behavior and some of its configuration to be modified. File system rules Also known as file watches, allow the auditing of access to a particular file or a directory. System call rules Allow logging of system calls that any specified program makes. Audit rules can be set: on the command line using the auditctl utility. Note that these rules are not persistent across reboots. For details, see Section 7.5.1, "Defining Audit Rules with auditctl " in the /etc/audit/audit.rules file. For details, see Section 7.5.3, "Defining Persistent Audit Rules and Controls in the /etc/audit/audit.rules File" 7.5.1. Defining Audit Rules with auditctl The auditctl command allows you to control the basic functionality of the Audit system and to define rules that decide which Audit events are logged. Note All commands which interact with the Audit service and the Audit log files require root privileges. Ensure you execute these commands as the root user. Additionally, the CAP_AUDIT_CONTROL capability is required to set up audit services and the CAP_AUDIT_WRITE capability is required to log user messages. Defining Control Rules The following are some of the control rules that allow you to modify the behavior of the Audit system: -b sets the maximum amount of existing Audit buffers in the kernel, for example: -f sets the action that is performed when a critical error is detected, for example: The above configuration triggers a kernel panic in case of a critical error. -e enables and disables the Audit system or locks its configuration, for example: The above command locks the Audit configuration. -r sets the rate of generated messages per second, for example: The above configuration sets no rate limit on generated messages. -s reports the status of the Audit system, for example: -l lists all currently loaded Audit rules, for example: -D deletes all currently loaded Audit rules, for example: Defining File System Rules To define a file system rule, use the following syntax: where: path_to_file is the file or directory that is audited. permissions are the permissions that are logged: r - read access to a file or a directory. w - write access to a file or a directory. x - execute access to a file or a directory. a - change in the file's or directory's attribute. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.1. File System Rules To define a rule that logs all write access to, and every attribute change of, the /etc/passwd file, execute the following command: Note that the string following the -k option is arbitrary. To define a rule that logs all write access to, and every attribute change of, all the files in the /etc/selinux/ directory, execute the following command: To define a rule that logs the execution of the /sbin/insmod command, which inserts a module into the Linux kernel, execute the following command: Defining System Call Rules To define a system call rule, use the following syntax: where: action and filter specify when a certain event is logged. action can be either always or never . filter specifies which kernel rule-matching filter is applied to the event. The rule-matching filter can be one of the following: task , exit , user , and exclude . 
For more information about these filters, see the beginning of Section 7.1, "Audit System Architecture" . system_call specifies the system call by its name. A list of all system calls can be found in the /usr/include/asm/unistd_64.h file. Several system calls can be grouped into one rule, each specified after its own -S option. field = value specifies additional options that further modify the rule to match events based on a specified architecture, group ID, process ID, and others. For a full listing of all available field types and their values, see the auditctl (8) man page. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.2. System Call Rules To define a rule that creates a log entry every time the adjtimex or settimeofday system calls are used by a program, and the system uses the 64-bit architecture, use the following command: To define a rule that creates a log entry every time a file is deleted or renamed by a system user whose ID is 1000 or larger, use the following command: Note that the -F auid!=4294967295 option is used to exclude users whose login UID is not set. It is also possible to define a file system rule using the system call rule syntax. The following command creates a rule for system calls that is analogous to the -w /etc/shadow -p wa file system rule: 7.5.2. Defining Executable File Rules To define an executable file rule, use the following syntax: where: action and filter specify when a certain event is logged. action can be either always or never . filter specifies which kernel rule-matching filter is applied to the event. The rule-matching filter can be one of the following: task , exit , user , and exclude . For more information about these filters, see the beginning of Section 7.1, "Audit System Architecture" . system_call specifies the system call by its name. A list of all system calls can be found in the /usr/include/asm/unistd_64.h file. Several system calls can be grouped into one rule, each specified after its own -S option. path_to_executable_file is the absolute path to the executable file that is audited. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.3. Executable File Rules To define a rule that logs all execution of the /bin/id program, execute the following command: 7.5.3. Defining Persistent Audit Rules and Controls in the /etc/audit/audit.rules File To define Audit rules that are persistent across reboots, you must either directly include them in the /etc/audit/audit.rules file or use the augenrules program that reads rules located in the /etc/audit/rules.d/ directory. The /etc/audit/audit.rules file uses the same auditctl command line syntax to specify the rules. Empty lines and text following a hash sign ( # ) are ignored. The auditctl command can also be used to read rules from a specified file using the -R option, for example: Defining Control Rules A file can contain only the following control rules that modify the behavior of the Audit system: -b , -D , -e , -f , -r , --loginuid-immutable , and --backlog_wait_time . For more information on these options, see the section called "Defining Control Rules" . Example 7.4. Control Rules in audit.rules Defining File System and System Call Rules File system and system call rules are defined using the auditctl syntax. 
The examples in Section 7.5.1, "Defining Audit Rules with auditctl " can be represented with the following rules file: Example 7.5. File System and System Call Rules in audit.rules Preconfigured Rules Files In the /usr/share/doc/audit/rules/ directory, the audit package provides a set of pre-configured rules files according to various certification standards: 30-nispom.rules - Audit rule configuration that meets the requirements specified in the Information System Security chapter of the National Industrial Security Program Operating Manual. 30-pci-dss-v31.rules - Audit rule configuration that meets the requirements set by Payment Card Industry Data Security Standard (PCI DSS) v3.1. 30-stig.rules - Audit rule configuration that meets the requirements set by Security Technical Implementation Guides (STIG). To use these configuration files, create a backup of your original /etc/audit/audit.rules file and copy the configuration file of your choice over the /etc/audit/audit.rules file: Note The Audit rules have a numbering scheme that allows them to be ordered. To learn more about the naming scheme, see the /usr/share/doc/audit/rules/README-rules file. Using augenrules to Define Persistent Rules The augenrules script reads rules located in the /etc/audit/rules.d/ directory and compiles them into an audit.rules file. This script processes all files that end in .rules in a specific order based on their natural sort order. The files in this directory are organized into groups with the following meanings: 10 - Kernel and auditctl configuration 20 - Rules that could match general rules but you want a different match 30 - Main rules 40 - Optional rules 50 - Server-specific rules 70 - System local rules 90 - Finalize (immutable) The rules are not meant to be used all at once. They are pieces of a policy that should be thought out and individual files copied to /etc/audit/rules.d/ . For example, to set a system up in the STIG configuration, copy rules 10-base-config, 30-stig, 31-privileged, and 99-finalize. Once you have the rules in the /etc/audit/rules.d/ directory, load them by running the augenrules script with the --load directive: For more information on the Audit rules and the augenrules script, see the audit.rules(8) and augenrules(8) man pages.
[ "~]# auditctl -b 8192", "~]# auditctl -f 2", "~]# auditctl -e 2", "~]# auditctl -r 0", "~]# auditctl -s AUDIT_STATUS: enabled=1 flag=2 pid=0 rate_limit=0 backlog_limit=8192 lost=259 backlog=0", "~]# auditctl -l -w /etc/passwd -p wa -k passwd_changes -w /etc/selinux -p wa -k selinux_changes -w /sbin/insmod -p x -k module_insertion ...", "~]# auditctl -D No rules", "auditctl -w path_to_file -p permissions -k key_name", "~]# auditctl -w /etc/passwd -p wa -k passwd_changes", "~]# auditctl -w /etc/selinux/ -p wa -k selinux_changes", "~]# auditctl -w /sbin/insmod -p x -k module_insertion", "auditctl -a action , filter -S system_call -F field = value -k key_name", "~]# auditctl -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change", "~]# auditctl -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete", "~]# auditctl -a always,exit -F path=/etc/shadow -F perm=wa", "auditctl -a action , filter [ -F arch=cpu -S system_call ] -F exe= path_to_executable_file -k key_name", "~]# auditctl -a always,exit -F exe=/bin/id -F arch=b64 -S execve -k execution_bin_id", "~]# auditctl -R /usr/share/doc/audit/rules/30-stig.rules", "Delete all previous rules -D Set buffer size -b 8192 Make the configuration immutable -- reboot is required to change audit rules -e 2 Panic when a failure occurs -f 2 Generate at most 100 audit messages per second -r 100 Make login UID immutable once it is set (may break containers) --loginuid-immutable 1", "-w /etc/passwd -p wa -k passwd_changes -w /etc/selinux/ -p wa -k selinux_changes -w /sbin/insmod -p x -k module_insertion -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete", "~]# cp /etc/audit/audit.rules /etc/audit/audit.rules_backup ~]# cp /usr/share/doc/audit/rules/30-stig.rules /etc/audit/audit.rules", "~]# augenrules --load augenrules --load No rules enabled 1 failure 1 pid 634 rate_limit 0 backlog_limit 8192 lost 0 backlog 0 enabled 1 failure 1 pid 634 rate_limit 0 backlog_limit 8192 lost 0 backlog 1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-defining_audit_rules_and_controls
Chapter 4. Integrating with Slack
Chapter 4. Integrating with Slack If you are using Slack, you can forward alerts from Red Hat Advanced Cluster Security for Kubernetes to Slack. The following steps represent a high-level workflow for integrating Red Hat Advanced Cluster Security for Kubernetes with Slack: Create a new Slack app, enable incoming webhooks, and get a webhook URL. Use the webhook URL to integrate Slack with Red Hat Advanced Cluster Security for Kubernetes. Identify policies for which you want to send notifications, and update the notification settings for those policies. 4.1. Configuring Slack Start by creating a new Slack app, and get the webhook URL. Prerequisites You need an administrator account or a user account with permissions to create webhooks. Procedure Create a new Slack app: Note If you want to use an existing Slack app, go to https://api.slack.com/apps and select an app. Go to https://api.slack.com/apps/new . Enter the App Name and choose a Development Slack Workspace to install your app. Click Create App . On the settings page, Basic Information section, select Incoming Webhooks (under Add features and functionality ). Turn on the Activate Incoming Webhooks toggle. Select Add New Webhook to Workspace . Choose a channel that the app will post to, and then select Authorize . The page refreshes and you are sent back to your app settings page. Copy the webhook URL located in the Webhook URLs for Your Workspace section. For more information, see the Slack documentation topic, Getting started with Incoming Webhooks . 4.1.1. Sending alerts to different Slack channels You can configure Red Hat Advanced Cluster Security for Kubernetes to send notifications to different Slack channels so that they directly go to the right team. Procedure After you configure incoming webhooks, add an annotation similar to the following in your deployment YAML file: example.com/slack-webhook: https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX Use the annotation key example.com/slack-webhook in the Label/Annotation Key For Slack Webhook field when you configure Red Hat Advanced Cluster Security for Kubernetes. After the configuration is complete, if a deployment has the annotation that you configured in the YAML file, Red Hat Advanced Cluster Security for Kubernetes sends the alert to the webhook URL you specified for that annotation. Otherwise, it sends the alert to the default webhook URL. 4.2. Configuring Red Hat Advanced Cluster Security for Kubernetes Create a new integration in Red Hat Advanced Cluster Security for Kubernetes by using the webhook URL. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Slack . Click New Integration ( add icon). Enter a name for Integration Name . Enter the generated webhook URL in the Default Slack Webhook field. Select Test to test that the integration with Slack is working. Select Create to generate the configuration. 4.3. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the Slack notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . 
Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment.
[ "example.com/slack-webhook: https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/integrating/integrate-with-slack
Optimizing RHEL 9 for Real Time for low latency operation
Optimizing RHEL 9 for Real Time for low latency operation Red Hat Enterprise Linux for Real Time 9 Optimizing the RHEL for Real Time kernel on Red Hat Enterprise Linux Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/index
probe::stap.pass6
probe::stap.pass6 Name probe::stap.pass6 - Starting stap pass6 (cleanup) Synopsis stap.pass6 Values session the systemtap_session variable s Description pass6 fires just after the cleanup label, essentially the same spot as pass5.end
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-stap-pass6
7.3. Collections
7.3. Collections 7.3.1. Collections A collection is a set of resources of the same type. The API provides both top-level collections and sub-collections. An example of a top-level collection is the hosts collection which contains all virtualization hosts in the environment. An example of a sub-collection is the host.nics collection which contains resources for all network interface cards attached to a host resource. 7.3.2. Listing All Resources in a Collection Obtain a listing of resources in a collection with a GET request on the collection URI obtained from the entry point. Include an Accept HTTP header to define the MIME type for the response format. 7.3.3. Listing Extended Resource Sub-Collections The API extends collection representations to include sub-collections when the Accept header includes the detail parameter. This includes multiple sub-collection requests using either separated detail parameters: Or one detail parameter that separates the sub-collection with the + operator: The API supports extended sub-collections for the following main collections. Table 7.4. Collections that use extended sub-collections Collection Extended Sub-Collection Support hosts statistics vms statistics , nics , disks Example 7.1. A request for extended statistics, NICs and disks sub-collections in the vms collection 7.3.4. Searching Collections with Queries A GET request on a "collection/search" link results in a search query of that collection. The API only returns resources within the collection that satisfy the search query constraints. 7.3.5. Maximum Results Parameter Use the max URL parameter to limit the list of results. An API search query without specifying the max parameter will return all values. Specifying the max parameter is recommended to prevent API search queries from slowing UI performance. 7.3.6. Case Sensitivity All search queries are case sensitive by default. The URL syntax provides a Boolean option to toggle case sensitivity. Example 7.2. Case insensitive search query 7.3.7. Query Syntax The API uses the URI templates to perform a search query with a GET request: The query template value refers to the search query the API directs to the collection . This query uses the same format as Red Hat Virtualization Query Language: (criteria) [sortby (element) asc|desc] The sortby clause is optional and only needed when ordering results. Table 7.5. Example search queries Collection Criteria Result hosts vms.status=up Displays a list of all hosts running virtual machines that are up . vms domain=qa.company.com Displays a list of all virtual machines running on the specified domain. vms users.name=mary Displays a list of all virtual machines belonging to users with the user name mary . events severity>normal sortby time Displays the list of all events with severity higher than normal and sorted by the time element values. events severity>normal sortby time desc Displays the list of all events with severity higher than normal and sorted by the time element values in descending order. The API requires the query template to be URL-encoded to translate reserved characters, such as operators and spaces. Example 7.3. URL-encoded search query 7.3.8. Wildcards Search queries substitute part of a value with an asterisk as a wildcard. Example 7.4. Wildcard search query for name=vm* This query would result in all virtual machines with names beginning with vm , such as vm1 , vm2 , vma or vm-webserver . Example 7.5. 
Wildcard search query for name=v*1 This query would result in all virtual machines with names beginning with v and ending with 1 , such as vm1 , vr1 or virtualmachine1 . 7.3.9. Pagination Some Red Hat Virtualization environments contain large collections of resources. However, the API only displays a default number of resources for one search query to a collection. To display more than the default, the API separates collections into pages via a search query containing the page command. Example 7.6. Paginating resources This example paginates resources in a collection. The URL-encoded request is: Increase the page value to view the page of results: Use the page command in conjunction with other commands in a search query. For example: This query displays the second page in a collection listing ordered by a chosen element. Important The REST APIs are stateless; it is not possible to retain a state between different requests since all requests are independent from each other. As a result, if a status change occurs between your requests, then the page results may be inconsistent. For example, if you request a specific page from a list of VMs, and a status change occurs before you can request the page, then your results may be missing entries or contain duplicated entries. 7.3.10. Creating a Resource in a Collection Create a new resource with a POST request to the collection URI containing a representation of the new resource. A POST request requires a Content-Type header. This informs the API of the representation MIME type in the body content as part of the request. Include an Accept HTTP header to define the MIME type for the response format. Each resource type has its own specific required properties. The client supplies these properties when creating a new resource. Refer to the individual resource type documentation for more details. If a required property is absent, the creation fails with a representation indicating the missing elements. 7.3.11. Asynchronous Requests The API performs asynchronous POST requests unless the user overrides them with an Expect: 201-created header. For example, certain resources, such as Virtual Machines, Disks, Snapshots and Templates, are created asynchronously. A request to create an asynchronous resource results in a 202 Accepted status. The initial document structure for a 202 Accepted resource also contains a creation_status element and link for creation status updates. For example: A GET request to the creation_status link provides a creation status update: Overriding the asynchronous resource creation requires an Expect: 201-created header:
[ "GET /ovirt-engine/api/ [collection] HTTP/1.1 Accept: [MIME type]", "GET /ovirt-engine/api/collection HTTP/1.1 Accept: application/xml; detail=subcollection", "GET /ovirt-engine/api/collection HTTP/1.1 Accept: application/xml; detail=subcollection1; detail=subcollection2", "GET /ovirt-engine/api/collection HTTP/1.1 Accept: application/xml; detail=subcollection1+subcollection2+subcollection3", "GET /ovirt-engine/api/vms HTTP/1.1 Accept: application/xml; detail=statistics+nics+disks", "GET /ovirt-engine/api/collection?search={query} HTTP/1.1 Accept: application/xml HTTP/1.1 200 OK Content-Type: application/xml <collection> <resource id=\"resource_id\" href=\"/ovirt-engine/api/collection/resource_id\"> </resource> </collection>", "GET /ovirt-engine/api/collection;max=1 HTTP/1.1 Accept: application/xml HTTP/1.1 200 OK Content-Type: application/xml <collection> <resource id=\"resource_id\" href=\"/ovirt-engine/api/collection/resource_id\"> <name>Resource-Name</name> <description>A description of the resource</description> </resource> </collection>", "GET /ovirt-engine/api/collection;case-sensitive=false?search={query} HTTP/1.1 Accept: application/xml", "GET /ovirt-engine/api/collection?search={query} HTTP/1.1 Accept: application/xml", "GET /ovirt-engine/api/vms?search=name%3Dvm1 HTTP/1.1 Accept: application/xml", "GET /ovirt-engine/api/vms?search=name%3Dvm* HTTP/1.1 Accept: application/xml", "GET /ovirt-engine/api/vms?search=name%3Dv*1 HTTP/1.1 Accept: application/xml", "GET /ovirt-engine/api/collection?search=page%201 HTTP/1.1 Accept: application/xml", "GET /ovirt-engine/api/collection?search=page%202 HTTP/1.1 Accept: application/xml", "GET /ovirt-engine/api/collection?search=sortby%20element%20asc%20page%202 HTTP/1.1 Accept: application/xml", "POST /ovirt-engine/api/ [collection] HTTP/1.1 Accept: [MIME type] Content-Type: [MIME type] [body]", "POST /ovirt-engine/api/collection HTTP/1.1 Accept: application/xml Content-Type: application/xml <resource> <name>Resource-Name</name> </resource> HTTP/1.1 202 Accepted Content-Type: application/xml <resource id=\"resource_id\" href=\"/ovirt-engine/api/collection/resource_id\"> <name>Resource-Name</name> <creation_status> <state>pending</state> </creation status> <link rel=\"creation_status\" href=\"/ovirt-engine/api/collection/resource_id/creation_status/creation_status_id\"/> </resource>", "GET /ovirt-engine/api/collection/resource_id/creation_status/creation_status_id HTTP/1.1 Accept: application/xml HTTP/1.1 200 OK Content-Type: application/xml <creation id=\"creation_status_id\" href=\"/ovirt-engine/api/collection/resource_id/creation_status/creation_status_id\"> <status> <state>complete</state> </status> </creation>", "POST /ovirt-engine/api/collection HTTP/1.1 Accept: application/xml Content-Type: application/xml Expect: 201-created <resource> <name>Resource-Name</name> </resource>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-collections
7.311. expect
7.311. expect 7.311.1. RHBA-2013:1497 - expect bug fix update Updated expect packages that fix one bug are now available for Red Hat Enterprise Linux 6. The "expect" packages contain a tool for automating and testing interactive command line programs and Tk applications. Tcl is a portable and widely used scripting language, while Tk is a graphical toolkit that eases development of text-based and GUI applications. Bug Fix BZ# 1025202 Prior to this update, the "expect" utility leaked memory when used with the "-re" option, and its memory usage kept increasing indefinitely. A patch has been provided to fix this bug, and "expect" memory usage is now stable and without any leaks. Users of expect are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/expect
Chapter 92. Pragmatic AI
Chapter 92. Pragmatic AI When you think about artificial intelligence (AI), machine learning and big data might come to mind. But machine learning is only part of the picture. Artificial intelligence includes the following technologies: Robotics: The integration of technology, science, and engineering that produces machines that can perform physical tasks that are performed by humans Machine learning: The ability of a collection of algorithms to learn or improve when exposed to data without being explicitly programmed to do so Natural language processing: A subset of machine learning that processes human speech Mathematical optimization: The use of conditions and constraints to find the optimal solution to a problem Digital decisioning: The use of defined criteria, conditions, and a series of machine and human tasks to make decisions While science fiction is filled with what is referred to as artificial general intelligence (AGI), machines that perform better than people and cannot be distinguished from them and learn and evolve without human intervention or control, AGI is decades away. Meanwhile, we have pragmatic AI which is much less frightening and much more useful to us today. Pragmatic AI is a collection of AI technologies that, when combined, provide solutions to problems such as predicting customer behavior, providing automated customer service, and helping customers make purchasing decisions. Leading industry analysts report that previously organizations have struggled with AI technologies because they invested in the potential of AI rather than the reality of what AI can deliver today. AI projects were not productive and as a result investment in AI slowed and budgets for AI projects were reduced. This disillusionment with AI is often referred to as an AI winter. AI has experienced several cycles of AI winters followed by AI springs and we are now decidedly in an AI spring. Organizations are seeing the practical reality of what AI can deliver. Being pragmatic means being practical and realistic. A pragmatic approach to AI considers AI technologies that are available today, combines them where useful, and adds human intervention when needed to create solutions to real-world problems. Pragmatic AI solution example One application of pragmatic AI is in customer support. A customer files a support ticket that reports a problem, for example, a login error. A machine learning algorithm is applied to the ticket to match the ticket content with existing solutions, based on keywords or natural language processing (NLP). The keywords might appear in many solutions, some relevant and some not as relevant. You can use digital decisioning to determine which solutions to present to the customer. However, sometimes none of the solutions proposed by the algorithm are appropriate to propose to the customer. This can be because all solutions have a low confidence score or multiple solutions have a high confidence score. In cases where an appropriate solution cannot be found, the digital decisioning can involve the human support team. To find the best support person based on availability and expertise, mathematical optimization selects the best assignee for the support ticket by considering employee rostering constraints. As this example shows, you can combine machine learning to extract information from data analysis and digital decisioning to model human knowledge and experience. You can then apply mathematical optimization to schedule human assistance. 
This is a pattern that you can apply to other situations, for example, a credit card dispute and credit card fraud detection. These technologies use four industry standards: Case Management Model and Notation (CMMN) CMMN is used to model work methods that include various activities that might be performed in an unpredictable order depending on circumstances. CMMN models are event centered. CMMN overcomes limitations of what can be modeled with BPMN2 by supporting less structured work tasks and tasks driven by humans. By combining BPMN and CMMN you can create much more powerful models. Business Process Model and Notation (BPMN2) The BPMN2 specification is an Object Management Group (OMG) specification that defines standards for graphically representing a business process, defines execution semantics for the elements, and provides process definitions in XML format. BPMN2 can model computer and human tasks. Decision Model and Notation (DMN) Decision Model and Notation (DMN) is a standard established by the OMG for describing and modeling operational decisions. DMN defines an XML schema that enables DMN models to be shared between DMN-compliant platforms and across organizations so that business analysts and business rules developers can collaborate in designing and implementing DMN decision services. The DMN standard is similar to and can be used together with the Business Process Model and Notation (BPMN) standard for designing and modeling business processes. Predictive Model Markup Language (PMML) PMML is the language used to represent predictive models, mathematical models that use statistical techniques to uncover, or learn, patterns in large volumes of data. Predictive models use the patterns that they learn to predict the existence of patterns in new data. With PMML, you can share predictive models between applications. This data is exported as a PMML file that can be consumed by a DMN model. As a machine learning framework continues to train the model, the updated data can be saved to the existing PMML file. This means that you can use predictive models created by any application that can save the model as a PMML file. Therefore, DMN and PMML integrate well. Putting it all together This illustration shows how predictive decision automation works. Business data enters the system, for example, data from a loan application. A decision model that is integrated with a predictive model decides whether or not to approve the loan or whether additional tasks are required. A business action results, for example, a rejection letter or loan offer is sent to the customer. The section demonstrates how predictive decision automation works with Red Hat Decision Manager.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/ai-con_artificial-intelligence
15.4. Specifying Default User and Group Attributes
15.4. Specifying Default User and Group Attributes Identity Management uses a template when it creates new entries. For users, the template is very specific. Identity Management uses default values for several core attributes for IdM user accounts. These defaults can define actual values for user account attributes (such as the home directory location) or it can define the format of attribute values, such as the user name length. These settings also define the object classes assigned to users. For groups, the template only defines the assigned object classes. These default definitions are all contained in a single configuration entry for the IdM server, cn=ipaconfig,cn=etc,dc=example,dc=com . The configuration can be changed using the ipa config-mod command. Table 15.3. Default User Parameters Field Command-Line Option Descriptions Maximum user name length --maxusername Sets the maximum number of characters for user names. The default value is 32. Root for home directories --homedirectory Sets the default directory to use for user home directories. The default value is /home . Default shell --defaultshell Sets the default shell to use for users. The default value is /bin/sh . Default user group --defaultgroup Sets the default group to which all newly created accounts are added. The default value is ipausers , which is automatically created during the IdM server installation process. Default e-mail domain --emaildomain Sets the email domain to use to create email addresses based on the new accounts. The default is the IdM server domain. Search time limit --searchtimelimit Sets the maximum amount of time, in seconds, to spend on a search before the server returns results. Search size limit --searchrecordslimit Sets the maximum number of records to return in a search. User search fields --usersearch Sets the fields in a user entry that can be used as a search string. Any attribute listed has an index kept for that attribute, so setting too many attributes could affect server performance. Group search fields --groupsearch Sets the fields in a group entry that can be used as a search string. Certificate subject base Sets the base DN to use when creating subject DNs for client certificates. This is configured when the server is set up. Default user object classes --userobjectclasses Defines an object class that is used to create IdM user accounts. This can be invoked multiple times. The complete list of object classes must be given because the list is overwritten when the command is run. Default group object classes --groupobjectclasses Defines an object class that is used to create IdM group accounts. This can be invoked multiple times. The complete list of object classes must be given because the list is overwritten when the command is run. Password expiration notification --pwdexpnotify Sets how long, in days, before a password expires for the server to send a notification. Password plug-in features Sets the format of passwords that are allowed for users. 15.4.1. Viewing Attributes from the Web UI Open the IPA Server tab. Select the Configuration subtab. The complete configuration entry is shown in three sections, one for all search limits, one for user templates, and one for group templates. Figure 15.4. Setting Search Limits Figure 15.5. User Attributes Figure 15.6. Group Attributes 15.4.2. Viewing Attributes from the Command Line The config-show command shows the current configuration which applies to all new user accounts. 
By default, only the most common attributes are displayed; use the --all option to show the complete configuration.
[ "[bjensen@server ~]USD kinit admin [bjensen@server ~]USD ipa config-show --all dn: cn=ipaConfig,cn=etc,dc=example,dc=com Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers Default e-mail domain: example.com Search time limit: 2 Search size limit: 100 User search fields: uid,givenname,sn,telephonenumber,ou,title Group search fields: cn,description Enable migration mode: FALSE Certificate Subject base: O=EXAMPLE.COM Default group objectclasses: top, groupofnames, nestedgroup, ipausergroup, ipaobject Default user objectclasses: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser Password Expiration Notification (days): 4 Password plugin features: AllowNThash SELinux user map order: guest_u:s0USDxguest_u:s0USDuser_u:s0USDstaff_u:s0-s0:c0.c1023USDunconfined_u:s0-s0:c0.c1023 Default SELinux user: unconfined_u:s0-s0:c0.c1023 Default PAC types: MS-PAC, nfs:NONE cn: ipaConfig objectclass: nsContainer, top, ipaGuiConfig, ipaConfigObject" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/configuring_ipa_users-specifying_default_user_settings
4.9. Encryption
4.9. Encryption 4.9.1. Using LUKS Disk Encryption Linux Unified Key Setup-on-disk-format (or LUKS) allows you to encrypt partitions on your Linux computer. This is particularly important when it comes to mobile computers and removable media. LUKS allows multiple user keys to decrypt a master key, which is used for the bulk encryption of the partition. Overview of LUKS What LUKS does LUKS encrypts entire block devices and is therefore well-suited for protecting the contents of mobile devices such as removable storage media or laptop disk drives. The underlying contents of the encrypted block device are arbitrary. This makes it useful for encrypting swap devices. This can also be useful with certain databases that use specially formatted block devices for data storage. LUKS uses the existing device mapper kernel subsystem. LUKS provides passphrase strengthening which protects against dictionary attacks. LUKS devices contain multiple key slots, allowing users to add backup keys or passphrases. What LUKS does not do: LUKS is not well-suited for scenarios requiring many (more than eight) users to have distinct access keys to the same device. LUKS is not well-suited for applications requiring file-level encryption. Important Disk-encryption solutions like LUKS only protect the data when your system is off. Once the system is on and LUKS has decrypted the disk, the files on that disk are available to anyone who would normally have access to them. 4.9.1.1. LUKS Implementation in Red Hat Enterprise Linux Red Hat Enterprise Linux 7 utilizes LUKS to perform file system encryption. By default, the option to encrypt the file system is unchecked during the installation. If you select the option to encrypt your hard drive, you will be prompted for a passphrase that will be asked every time you boot the computer. This passphrase "unlocks" the bulk encryption key that is used to decrypt your partition. If you choose to modify the default partition table you can choose which partitions you want to encrypt. This is set in the partition table settings. The default cipher used for LUKS (see cryptsetup --help ) is aes-cbc-essiv:sha256 (ESSIV - Encrypted Salt-Sector Initialization Vector). Note that the installation program, Anaconda , uses XTS mode (aes-xts-plain64) by default. The default key size for LUKS is 256 bits. The default key size for LUKS with Anaconda (XTS mode) is 512 bits. Ciphers that are available are: AES - Advanced Encryption Standard - FIPS PUB 197 Twofish (a 128-bit block cipher) Serpent cast5 - RFC 2144 cast6 - RFC 2612 4.9.1.2. Manually Encrypting Directories Warning Following this procedure will remove all data on the partition that you are encrypting. You WILL lose all your information! Make sure you back up your data to an external source before beginning this procedure! Enter runlevel 1 by typing the following at a shell prompt as root: Unmount your existing /home : If the command in the previous step fails, use fuser to find processes hogging /home and kill them: Verify /home is no longer mounted: Fill your partition with random data: This command proceeds at the sequential write speed of your device and may take some time to complete. It is an important step to ensure no unencrypted data is left on a used device, and to obfuscate the parts of the device that contain encrypted data as opposed to just random data. 
Initialize your partition: Open the newly encrypted device: Make sure the device is present: Create a file system: Mount the file system: Make sure the file system is visible: Add the following to the /etc/crypttab file: Edit the /etc/fstab file, removing the old entry for /home and adding the following line: Restore default SELinux security contexts: Reboot the machine: The entry in the /etc/crypttab file makes your computer ask for your LUKS passphrase on boot. Log in as root and restore your backup. You now have an encrypted partition for all of your data to safely rest while the computer is off. 4.9.1.3. Add a New Passphrase to an Existing Device Use the following command to add a new passphrase to an existing device: After being prompted for any one of the existing passphrases for authentication, you will be prompted to enter the new passphrase. 4.9.1.4. Remove a Passphrase from an Existing Device Use the following command to remove a passphrase from an existing device: You will be prompted for the passphrase you want to remove and then for any one of the remaining passphrases for authentication. 4.9.1.5. Creating Encrypted Block Devices in Anaconda You can create encrypted devices during system installation. This allows you to easily configure a system with encrypted partitions. To enable block device encryption, check the Encrypt System check box when selecting automatic partitioning or the Encrypt check box when creating an individual partition, software RAID array, or logical volume. After you finish partitioning, you will be prompted for an encryption passphrase. This passphrase will be required to access the encrypted devices. If you have pre-existing LUKS devices and provided correct passphrases for them earlier in the install process, the passphrase entry dialog will also contain a check box. Checking this check box indicates that you would like the new passphrase to be added to an available slot in each of the pre-existing encrypted block devices. Note Checking the Encrypt System check box on the Automatic Partitioning screen and then choosing Create custom layout does not cause any block devices to be encrypted automatically. Note You can use kickstart to set a separate passphrase for each new encrypted block device. 4.9.1.6. Additional Resources For additional information on LUKS or encrypting hard drives under Red Hat Enterprise Linux 7 visit one of the following links: LUKS home page LUKS/cryptsetup FAQ LUKS - Linux Unified Key Setup Wikipedia article HOWTO: Creating an encrypted Physical Volume (PV) using a second hard drive and pvmove 4.9.2. Creating GPG Keys GPG is used to identify yourself and authenticate your communications, including those with people you do not know. GPG allows anyone reading a GPG-signed email to verify its authenticity. In other words, GPG allows someone to be reasonably certain that communications signed by you actually are from you. GPG is useful because it helps prevent third parties from altering code or intercepting conversations and altering the message. 4.9.2.1. Creating GPG Keys in GNOME To create a GPG Key in GNOME , follow these steps: Install the Seahorse utility, which makes GPG key management easier: To create a key, from the Applications Accessories menu select Passwords and Encryption Keys , which starts the application Seahorse . From the File menu select New and then PGP Key . Then click Continue . Type your full name, email address, and an optional comment describing who you are (for example: John C.
Smith, [email protected] , Software Engineer). Click Create . A dialog is displayed asking for a passphrase for the key. Choose a strong passphrase but also easy to remember. Click OK and the key is created. Warning If you forget your passphrase, you will not be able to decrypt the data. To find your GPG key ID, look in the Key ID column to the newly created key. In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . You should make a backup of your private key and store it somewhere secure. 4.9.2.2. Creating GPG Keys in KDE To create a GPG Key in KDE , follow these steps: Start the KGpg program from the main menu by selecting Applications Utilities Encryption Tool . If you have never used KGpg before, the program walks you through the process of creating your own GPG keypair. A dialog box appears prompting you to create a new key pair. Enter your name, email address, and an optional comment. You can also choose an expiration time for your key, as well as the key strength (number of bits) and algorithms. Enter your passphrase in the dialog box. At this point, your key appears in the main KGpg window. Warning If you forget your passphrase, you will not be able to decrypt the data. To find your GPG key ID, look in the Key ID column to the newly created key. In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . You should make a backup of your private key and store it somewhere secure. 4.9.2.3. Creating GPG Keys Using the Command Line Use the following shell command: This command generates a key pair that consists of a public and a private key. Other people use your public key to authenticate and decrypt your communications. Distribute your public key as widely as possible, especially to people who you know will want to receive authentic communications from you, such as a mailing list. A series of prompts directs you through the process. Press the Enter key to assign a default value if desired. The first prompt asks you to select what kind of key you prefer: In almost all cases, the default is the correct choice. An RSA/RSA key allows you not only to sign communications, but also to encrypt files. Choose the key size: Again, the default, 2048, is sufficient for almost all users, and represents an extremely strong level of security. Choose when the key will expire. It is a good idea to choose an expiration date instead of using the default, which is none . If, for example, the email address on the key becomes invalid, an expiration date will remind others to stop using that public key. Entering a value of 1y , for example, makes the key valid for one year. (You may change this expiration date after the key is generated, if you change your mind.) Before the gpg2 application asks for signature information, the following prompt appears: Enter y to finish the process. Enter your name and email address for your GPG key. Remember this process is about authenticating you as a real individual. For this reason, include your real name. If you choose a bogus email address, it will be more difficult for others to find your public key. This makes authenticating your communications difficult. If you are using this GPG key for self-introduction on a mailing list, for example, enter the email address you use on that list. Use the comment field to include aliases or other information. 
(Some people use different keys for different purposes and identify each key with a comment, such as "Office" or "Open Source Projects.") At the confirmation prompt, enter the letter O to continue if all entries are correct, or use the other options to fix any problems. Finally, enter a passphrase for your secret key. The gpg2 program asks you to enter your passphrase twice to ensure you made no typing errors. Finally, gpg2 generates random data to make your key as unique as possible. Move your mouse, type random keys, or perform other tasks on the system during this step to speed up the process. Once this step is finished, your keys are complete and ready to use: The key fingerprint is a shorthand "signature" for your key. It allows you to confirm to others that they have received your actual public key without any tampering. You do not need to write this fingerprint down. To display the fingerprint at any time, use this command, substituting your email address: Your "GPG key ID" consists of 8 hex digits identifying the public key. In the example above, the GPG key ID is 1B2AFA1C . In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . Warning If you forget your passphrase, the key cannot be used and any data encrypted using that key will be lost. 4.9.2.4. About Public Key Encryption Wikipedia - Public Key Cryptography HowStuffWorks - Encryption 4.9.3. Using openCryptoki for Public-Key Cryptography openCryptoki is a Linux implementation of PKCS#11 , which is a Public-Key Cryptography Standard that defines an application programming interface ( API ) to cryptographic devices called tokens. Tokens may be implemented in hardware or software. This chapter provides an overview of the way the openCryptoki system is installed, configured, and used in Red Hat Enterprise Linux 7. 4.9.3.1. Installing openCryptoki and Starting the Service To install the basic openCryptoki packages on your system, including a software implementation of a token for testing purposes, enter the following command as root : Depending on the type of hardware tokens you intend to use, you may need to install additional packages that provide support for your specific use case. For example, to obtain support for Trusted Platform Module ( TPM ) devices, you need to install the opencryptoki-tpmtok package. See the Installing Packages section of the Red Hat Enterprise Linux 7 System Administrator's Guide for general information on how to install packages using the Yum package manager. To enable the openCryptoki service, you need to run the pkcsslotd daemon. Start the daemon for the current session by executing the following command as root : To ensure that the service is automatically started at boot time, enter the following command: See the Managing Services with systemd chapter of the Red Hat Enterprise Linux 7 System Administrator's Guide for more information on how to use systemd targets to manage services. 4.9.3.2. Configuring and Using openCryptoki When started, the pkcsslotd daemon reads the /etc/opencryptoki/opencryptoki.conf configuration file, which it uses to collect information about the tokens configured to work with the system and about their slots. The file defines the individual slots using key-value pairs. Each slot definition can contain a description, a specification of the token library to be used, and an ID of the slot's manufacturer. Optionally, the version of the slot's hardware and firmware may be defined. 
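As an illustration of the openCryptoki setup and slot definitions described above, the following is a hedged sketch; the package, service, and pkcsconf commands follow the steps above, while the commented slot entry is only a hypothetical shape of an /etc/opencryptoki/opencryptoki.conf stanza (see the opencryptoki.conf(5) manual page cited immediately below for the authoritative syntax).
# Hypothetical setup: install openCryptoki with the software token, start the daemon, list tokens.
yum install opencryptoki
systemctl start pkcsslotd
systemctl enable pkcsslotd
pkcsconf -t        # display information about the configured tokens (run as root or as a pkcs11 group member)

# Illustrative (assumed) slot definition, using the key-value pairs described above:
# slot 1
# {
#   stdll = libpkcs11_sw.so          # token library used for this slot
#   description = "software token"
#   manufacturer = "example vendor"
# }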
See the opencryptoki.conf (5) manual page for a description of the file's format and for a more detailed description of the individual keys and the values that can be assigned to them. To modify the behavior of the pkcsslotd daemon at run time, use the pkcsconf utility. This tool allows you to show and configure the state of the daemon, as well as to list and modify the currently configured slots and tokens. For example, to display information about tokens, issue the following command (note that all non-root users that need to communicate with the pkcsslotd daemon must be a part of the pkcs11 system group): See the pkcsconf (1) manual page for a list of arguments available with the pkcsconf tool. Warning Keep in mind that only fully trusted users should be assigned membership in the pkcs11 group, as all members of this group have the right to block other users of the openCryptoki service from accessing configured PKCS#11 tokens. All members of this group can also execute arbitrary code with the privileges of any other users of openCryptoki . 4.9.4. Using Smart Cards to Supply Credentials to OpenSSH The smart card is a lightweight hardware security module in a USB stick, MicroSD, or SmartCard form factor. It provides a remotely manageable secure key store. In Red Hat Enterprise Linux 7, OpenSSH supports authentication using smart cards. To use your smart card with OpenSSH, store the public key from the card to the ~/.ssh/authorized_keys file. Install the PKCS#11 library provided by the opensc package on the client. PKCS#11 is a Public-Key Cryptography Standard that defines an application programming interface (API) to cryptographic devices called tokens. Enter the following command as root : 4.9.4.1. Retrieving a Public Key from a Card To list the keys on your card, use the ssh-keygen command. Specify the shared library (OpenSC in the following example) with the -D directive. 4.9.4.2. Storing a Public Key on a Server To enable authentication using a smart card on a remote server, transfer the public key to the remote server. Do this by copying the retrieved string (key) and pasting it to the remote shell, or by storing your key to a file ( smartcard.pub in the following example) and using the ssh-copy-id command: Storing a public key without a private key file requires using the SSH_COPY_ID_LEGACY=1 environment variable or the -f option. 4.9.4.3. Authenticating to a Server with a Key on a Smart Card OpenSSH can read your public key from a smart card and perform operations with your private key without exposing the key itself. This means that the private key does not leave the card. To connect to a remote server using your smart card for authentication, enter the following command and enter the PIN protecting your card: Replace the hostname with the actual host name to which you want to connect. To save unnecessary typing every time you connect to the remote server, store the path to the PKCS#11 library in your ~/.ssh/config file: Connect by running the ssh command without any additional options: 4.9.4.4. Using ssh-agent to Automate PIN Logging In Set up environment variables to start using ssh-agent . You can skip this step in most cases because ssh-agent is already running in a typical session. 
Use the following command to check whether you can connect to your authentication agent: To avoid entering your PIN every time you connect using this key, add the card to the agent by running the following command: To remove the card from ssh-agent , use the following command: Note FIPS 201-2 requires explicit user action by the Personal Identity Verification (PIV) cardholder as a condition for use of the digital signature key stored on the card. OpenSC correctly enforces this requirement. However, for some applications it is impractical to require the cardholder to enter the PIN for each signature. To cache the smart card PIN, remove the # character before the pin_cache_ignore_user_consent = true; option in the /etc/opensc-x86_64.conf file. See the Cardholder Authentication for the PIV Digital Signature Key (NISTIR 7863) report for more information. 4.9.4.5. Additional Resources Setting up your hardware or software token is described in the Smart Card support in Red Hat Enterprise Linux 7 article. For more information about the pkcs11-tool utility for managing and using smart cards and similar PKCS#11 security tokens, see the pkcs11-tool(1) man page. 4.9.5. Trusted and Encrypted Keys Trusted and encrypted keys are variable-length symmetric keys generated by the kernel that utilize the kernel keyring service. The fact that the keys never appear in user space in an unencrypted form means that their integrity can be verified, which in turn means that they can be used, for example, by the extended verification module ( EVM ) to verify and confirm the integrity of a running system. User-level programs can only ever access the keys in the form of encrypted blobs . Trusted keys need a hardware component: the Trusted Platform Module ( TPM ) chip, which is used to both create and encrypt ( seal ) the keys. The TPM seals the keys using a 2048-bit RSA key called the storage root key ( SRK ). In addition to that, trusted keys may also be sealed using a specific set of the TPM 's platform configuration register ( PCR ) values. The PCR contains a set of integrity-management values that reflect the BIOS , boot loader, and operating system. This means that PCR -sealed keys can only be decrypted by the TPM on the exact same system on which they were encrypted. However, once a PCR -sealed trusted key is loaded (added to a keyring), and thus its associated PCR values are verified, it can be updated with new (or future) PCR values, so that a new kernel, for example, can be booted. A single key can also be saved as multiple blobs, each with different PCR values. Encrypted keys do not require a TPM , as they use the kernel AES encryption, which makes them faster than trusted keys. Encrypted keys are created using kernel-generated random numbers and encrypted by a master key when they are exported into user-space blobs. This master key can be either a trusted key or a user key, which is their main disadvantage - if the master key is not a trusted key, the encrypted key is only as secure as the user key used to encrypt it. 4.9.5.1. Working with keys Before performing any operations with the keys, ensure that the trusted and encrypted-keys kernel modules are loaded in the system. Consider the following points while loading the kernel modules in different RHEL kernel architectures: For RHEL kernels with the x86_64 architecture, the TRUSTED_KEYS and ENCRYPTED_KEYS code is built in as a part of the core kernel code.
As a result, the x86_64 system users can use these keys without loading the trusted and encrypted-keys modules. For all other architectures, it is necessary to load the trusted and encrypted-keys kernel modules before performing any operations with the keys. To load the kernel modules, execute the following command: The trusted and encrypted keys can be created, loaded, exported, and updated using the keyctl utility. For detailed information about using keyctl , see keyctl (1) . Note In order to use a TPM (such as for creating and sealing trusted keys), it needs to be enabled and active. This can usually be achieved through a setting in the machine's BIOS or using the tpm_setactive command from the tpm-tools package of utilities. Also, the TrouSers application needs to be installed (the trousers package), and the tcsd daemon, which is a part of the TrouSers suite, must be running to communicate with the TPM . To create a trusted key using a TPM , execute the keyctl command with the following syntax: ~]USD keyctl add trusted name "new keylength [ options ]" keyring Using the above syntax, an example command can be constructed as follows: The above example creates a trusted key called kmk with the length of 32 bytes (256 bits) and places it in the user keyring ( @u ). The keys may have a length of 32 to 128 bytes (256 to 1024 bits). Use the show subcommand to list the current structure of the kernel keyrings: The print subcommand outputs the encrypted key to the standard output. To export the key to a user-space blob, use the pipe subcommand as follows: To load the trusted key from the user-space blob, use the add command again with the blob as an argument: The TPM -sealed trusted key can then be employed to create secure encrypted keys. The following command syntax is used for generating encrypted keys: ~]USD keyctl add encrypted name "new [ format ] key-type : master-key-name keylength " keyring Based on the above syntax, a command for generating an encrypted key using the already created trusted key can be constructed as follows: To create an encrypted key on systems where a TPM is not available, use a random sequence of numbers to generate a user key, which is then used to seal the actual encrypted keys. Then generate the encrypted key using the random-number user key: The list subcommand can be used to list all keys in the specified kernel keyring: Important Keep in mind that encrypted keys that are not sealed by a master trusted key are only as secure as the user master key (random-number key) used to encrypt them. Therefore, the master user key should be loaded as securely as possible and preferably early during the boot process. 4.9.5.2. Additional Resources The following offline and online resources can be used to acquire additional information pertaining to the use of trusted and encrypted keys. Installed Documentation keyctl (1) - Describes the use of the keyctl utility and its subcommands. Online Documentation Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services, such as the Apache HTTP Server . https://www.kernel.org/doc/Documentation/security/keys-trusted-encrypted.txt - The official documentation about the trusted and encrypted keys feature of the Linux kernel.
See Also Section A.1.1, "Advanced Encryption Standard - AES" provides a concise description of the Advanced Encryption Standard . Section A.2, "Public-key Encryption" describes the public-key cryptographic approach and the various cryptographic protocols it uses. 4.9.6. Using the Random Number Generator In order to be able to generate secure cryptographic keys that cannot be easily broken, a source of random numbers is required. Generally, the more random the numbers are, the better the chance of obtaining unique keys. Entropy for generating random numbers is usually obtained from computing environmental "noise" or using a hardware random number generator . The rngd daemon, which is a part of the rng-tools package, is capable of using both environmental noise and hardware random number generators for extracting entropy. The daemon checks whether the data supplied by the source of randomness is sufficiently random and then stores it in the random-number entropy pool of the kernel. The random numbers it generates are made available through the /dev/random and /dev/urandom character devices. The difference between /dev/random and /dev/urandom is that the former is a blocking device, which means it stops supplying numbers when it determines that the amount of entropy is insufficient for generating a properly random output. Conversely, /dev/urandom is a non-blocking source, which reuses the entropy pool of the kernel and is thus able to provide an unlimited supply of pseudo-random numbers, albeit with less entropy. As such, /dev/urandom should not be used for creating long-term cryptographic keys. To install the rng-tools package, issue the following command as the root user: To start the rngd daemon, execute the following command as root : To query the status of the daemon, use the following command: To start the rngd daemon with optional parameters, execute it directly. For example, to specify an alternative source of random-number input (other than /dev/hwrandom ), use the following command: The command starts the rngd daemon with /dev/hwrng as the device from which random numbers are read. Similarly, you can use the -o (or --random-device ) option to choose the kernel device for random-number output (other than the default /dev/random ). See the rngd (8) manual page for a list of all available options. To check which sources of entropy are available in a given system, execute the following command as root : Note After entering the rngd -v command, the corresponding process continues running in the background. The -b, --background option (become a daemon) is applied by default. If no TPM device is present, you will see only the Intel Digital Random Number Generator (DRNG) as a source of entropy. To check if your CPU supports the RDRAND processor instruction, enter the following command: Note For more information and software code examples, see Intel Digital Random Number Generator (DRNG) Software Implementation Guide. The rng-tools package also contains the rngtest utility, which can be used to check the randomness of data. To test the level of randomness of the output of /dev/random , use the rngtest tool as follows: A high number of failures shown in the output of the rngtest tool indicates that the randomness of the tested data is insufficient and should not be relied upon. See the rngtest (1) manual page for a list of options available for the rngtest utility.
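As an additional quick check, and assuming a standard kernel, you can read how much entropy is currently available in the kernel pool before and after starting rngd by running the following command:
cat /proc/sys/kernel/random/entropy_avail
A persistently low value suggests that reads from /dev/random may block, whereas a value in the thousands indicates a well-filled entropy pool.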
Red Hat Enterprise Linux 7 introduced the virtio RNG (Random Number Generator) device that provides KVM virtual machines with access to entropy from the host machine. With the recommended setup, hwrng feeds into the entropy pool of the host Linux kernel (through /dev/random ), and QEMU will use /dev/random as the source for entropy requested by guests. Figure 4.1. The virtio RNG device Previously, Red Hat Enterprise Linux 7.0 and Red Hat Enterprise Linux 6 guests could make use of the entropy from hosts through the rngd user space daemon. Setting up the daemon was a manual step for each Red Hat Enterprise Linux installation. With Red Hat Enterprise Linux 7.1, the manual step has been eliminated, making the entire process seamless and automatic. The use of rngd is no longer required and the guest kernel itself fetches entropy from the host when the available entropy falls below a specific threshold. The guest kernel is then in a position to make random numbers available to applications as soon as they request them. The Red Hat Enterprise Linux installer, Anaconda , now provides the virtio-rng module in its installer image, making host entropy available during the Red Hat Enterprise Linux installation. Important To correctly decide which random number generator you should use in your scenario, see the Understanding the Red Hat Enterprise Linux random number generator interface article.
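To confirm from inside a guest that the virtio RNG device is in use, and assuming the hw_random sysfs interface is available, you can check which hardware random number source the guest kernel has selected:
cat /sys/class/misc/hw_random/rng_current
On a guest with the device attached, the output typically names a virtio_rng device; the exact identifier depends on the hypervisor configuration.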
[ "telinit 1", "umount /home", "fuser -mvk /home", "grep home /proc/mounts", "shred -v --iterations=1 /dev/VG00/LV_home", "cryptsetup --verbose --verify-passphrase luksFormat /dev/VG00/LV_home", "cryptsetup luksOpen /dev/VG00/LV_home home", "ls -l /dev/mapper | grep home", "mkfs.ext3 /dev/mapper/home", "mount /dev/mapper/home /home", "df -h | grep home", "home /dev/VG00/LV_home none", "/dev/mapper/home /home ext3 defaults 1 2", "/sbin/restorecon -v -R /home", "shutdown -r now", "cryptsetup luksAddKey device", "cryptsetup luksRemoveKey device", "~]# yum install seahorse", "~]USD gpg2 --gen-key", "Please select what kind of key you want: (1) RSA and RSA (default) (2) DSA and Elgamal (3) DSA (sign only) (4) RSA (sign only) Your selection?", "RSA keys may be between 1024 and 4096 bits long. What keysize do you want? (2048)", "Please specify how long the key should be valid. 0 = key does not expire d = key expires in n days w = key expires in n weeks m = key expires in n months y = key expires in n years key is valid for? (0)", "Is this correct (y/N)?", "pub 1024D/1B2AFA1C 2005-03-31 John Q. Doe <[email protected]> Key fingerprint = 117C FE83 22EA B843 3E86 6486 4320 545E 1B2A FA1C sub 1024g/CEA4B22E 2005-03-31 [expires: 2006-03-31]", "~]USD gpg2 --fingerprint [email protected]", "~]# yum install opencryptoki", "~]# systemctl start pkcsslotd", "~]# systemctl enable pkcsslotd", "~]USD pkcsconf -t", "~]# yum install opensc", "~]USD ssh-keygen -D /usr/lib64/pkcs11/opensc-pkcs11.so ssh-rsa AAAAB3NzaC1yc[...]+g4Mb9", "~]USD ssh-copy-id -f -i smartcard.pub user@hostname user@hostname's password: Number of key(s) added: 1 Now try logging into the machine, with: \"ssh user@hostname\" and check to make sure that only the key(s) you wanted were added.", "[localhost ~]USD ssh -I /usr/lib64/pkcs11/opensc-pkcs11.so hostname Enter PIN for 'Test (UserPIN)': [hostname ~]USD", "Host hostname PKCS11Provider /usr/lib64/pkcs11/opensc-pkcs11.so", "[localhost ~]USD ssh hostname Enter PIN for 'Test (UserPIN)': [hostname ~]USD", "~]USD ssh-add -l Could not open a connection to your authentication agent. ~]USD eval `ssh-agent`", "~]USD ssh-add -s /usr/lib64/pkcs11/opensc-pkcs11.so Enter PIN for 'Test (UserPIN)': Card added: /usr/lib64/pkcs11/opensc-pkcs11.so", "~]USD ssh-add -e /usr/lib64/pkcs11/opensc-pkcs11.so Card removed: /usr/lib64/pkcs11/opensc-pkcs11.so", "~]# modprobe trusted encrypted-keys", "~]USD keyctl add trusted kmk \"new 32\" @u 642500861", "~]USD keyctl show Session Keyring -3 --alswrv 500 500 keyring: _ses 97833714 --alswrv 500 -1 \\_ keyring: _uid.1000 642500861 --alswrv 500 500 \\_ trusted: kmk", "~]USD keyctl pipe 642500861 > kmk.blob", "~]USD keyctl add trusted kmk \"load `cat kmk.blob`\" @u 268728824", "~]USD keyctl add encrypted encr-key \"new trusted:kmk 32\" @u 159771175", "~]USD keyctl add user kmk-user \"`dd if=/dev/urandom bs=1 count=32 2>/dev/null`\" @u 427069434", "~]USD keyctl add encrypted encr-key \"new user:kmk-user 32\" @u 1012412758", "~]USD keyctl list @u 2 keys in keyring: 427069434: --alswrv 1000 1000 user: kmk-user 1012412758: --alswrv 1000 1000 encrypted: encr-key", "~]# yum install rng-tools", "~]# systemctl start rngd", "~]# systemctl status rngd", "~]# rngd --rng-device= /dev/hwrng", "~]# rngd -vf Unable to open file: /dev/tpm0 Available entropy sources: DRNG", "~]USD cat /proc/cpuinfo | grep rdrand", "~]USD cat /dev/random | rngtest -c 1000 rngtest 5 Copyright (c) 2004 by Henrique de Moraes Holschuh This is free software; see the source for copying conditions. 
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. rngtest: starting FIPS tests rngtest: bits received from input: 20000032 rngtest: FIPS 140-2 successes: 998 rngtest: FIPS 140-2 failures: 2 rngtest: FIPS 140-2(2001-10-10) Monobit: 0 rngtest: FIPS 140-2(2001-10-10) Poker: 0 rngtest: FIPS 140-2(2001-10-10) Runs: 0 rngtest: FIPS 140-2(2001-10-10) Long run: 2 rngtest: FIPS 140-2(2001-10-10) Continuous run: 0 rngtest: input channel speed: (min=1.171; avg=8.453; max=11.374)Mibits/s rngtest: FIPS tests speed: (min=15.545; avg=143.126; max=157.632)Mibits/s rngtest: Program run time: 2390520 microseconds" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-encryption
Chapter 5. Deploying standalone Multicloud Object Gateway
Chapter 5. Deploying standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 5.1. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and select it. Set the following options on the Install Operator page: Update channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator by using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.14 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
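If you prefer to verify the installation from the command line, one option (assuming the Operator was installed into the recommended openshift-storage namespace) is to list the ClusterServiceVersions in that namespace and confirm that the OpenShift Data Foundation entry reports the Succeeded phase:
oc get csv -n openshift-storage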
After the operator is successfully installed, a pop-up with a message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify that Data Foundation is available. 5.3. Creating standalone Multicloud Object Gateway on IBM Z You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. To identify storage devices on each node, see Finding available storage devices . Procedure Log into the OpenShift Web Console. In the openshift-local-storage namespace, click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on YAML view for configuring Local Volume. Define a LocalVolume custom resource for filesystem PVs using the following YAML. The above definition selects the sda local device from the worker-0 , worker-1 , and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be the same on all the worker nodes. You can also specify more than one device path in devicePaths . Click Create . In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option for Backing storage type . Select the Storage Class that you used while installing LocalVolume. Click Next . Optional: In the Security page, select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop down list, select either Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable.
Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier generated above to be used for encryption and decryption. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in the Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) noobaa-default-backing-store-noobaa-pod-* (1 pod on any storage node)
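The same pod check can be performed from the command line; for example, assuming the default openshift-storage namespace, the following command lists the pods so that you can confirm they are in the Running state:
oc get pods -n openshift-storage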
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Filesystem" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_ibm_z/deploy-standalone-multicloud-object-gateway-ibm-z
21.2. Canceling Event Notifications in the Administration Portal
21.2. Canceling Event Notifications in the Administration Portal A user has configured some unnecessary email notifications and wants them canceled. Canceling Event Notifications Click Administration Users . Click the user's User Name to open the details view. Click the Event Notifier tab to list events for which the user receives email notifications. Click Manage Events . Use the Expand All button, or the subject-specific expansion buttons, to view the events. Clear the appropriate check boxes to remove notification for that event. Click OK .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/cancelling_event_notifications
Chapter 2. Installation
Chapter 2. Installation This chapter describes in detail how to get access to the content set, install Red Hat Software Collections 3.2 on the system, and rebuild Red Hat Software Collections. 2.1. Getting Access to Red Hat Software Collections The Red Hat Software Collections content set is available to customers with Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 subscriptions listed at https://access.redhat.com/solutions/472793 . For information on how to register your system with Red Hat Subscription Management (RHSM), see Using and Configuring Red Hat Subscription Manager . For detailed instructions on how to enable Red Hat Software Collections using RHSM, see Section 2.1.1, "Using Red Hat Subscription Management" . Since Red Hat Software Collections 2.2, the Red Hat Software Collections and Red Hat Developer Toolset content is also available in the ISO format at https://access.redhat.com/downloads , specifically for Server and Workstation . Note that packages that require the Optional channel, which are listed in Section 2.1.2, "Packages from the Optional Channel" , cannot be installed from the ISO image. Note Packages that require the Optional channel cannot be installed from the ISO image. A list of packages that require enabling of the Optional channel is provided in Section 2.1.2, "Packages from the Optional Channel" . Beta content is unavailable in the ISO format. 2.1.1. Using Red Hat Subscription Management If your system is registered with Red Hat Subscription Management, complete the following steps to attach the subscription that provides access to the repository for Red Hat Software Collections and enable the repository: Display a list of all subscriptions that are available for your system and determine the pool ID of a subscription that provides Red Hat Software Collections. To do so, type the following at a shell prompt as root : subscription-manager list --available For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to it. The pool ID is listed on a line beginning with Pool Id . Attach the appropriate subscription to your system by running the following command as root : subscription-manager attach --pool= pool_id Replace pool_id with the pool ID you determined in the previous step. To verify the list of subscriptions your system has currently attached, type as root : subscription-manager list --consumed Display the list of available Yum repositories to retrieve repository metadata and determine the exact name of the Red Hat Software Collections repositories. As root , type: subscription-manager repos --list Or alternatively, run yum repolist all for a brief list. The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: Replace variant with the Red Hat Enterprise Linux system variant, that is, server or workstation . Note that Red Hat Software Collections is supported neither on the Client nor on the ComputeNode variant. Enable the appropriate repository by running the following command as root : subscription-manager repos --enable repository Once the subscription is attached to the system, you can install Red Hat Software Collections as described in Section 2.2, "Installing Red Hat Software Collections" . For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see Using and Configuring Red Hat Subscription Manager .
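For example, on a Red Hat Enterprise Linux 7 Server system, the repository providing Red Hat Software Collections could be enabled as follows; substitute the repository name reported by the previous command for your own variant and version:
~]# subscription-manager repos --enable rhel-server-rhscl-7-rpms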
Note Subscription through RHN is no longer available. 2.1.2. Packages from the Optional Channel Some of the Red Hat Software Collections 3.2 packages require the Optional channel to be enabled in order to complete the full installation of these packages. For detailed instructions on how to subscribe your system to this channel, see the relevant Knowledgebase articles at https://access.redhat.com/solutions/392003 . Packages from Software Collections for Red Hat Enterprise Linux 6 that require the Optional channel to be enabled are listed in the following table. Table 2.1. Packages That Require Enabling of the Optional Channel in Red Hat Enterprise Linux 6 Package from a Software Collection Required Package from the Optional Channel devtoolset-6-dyninst-testsuite glibc-static devtoolset-7-dyninst-testsuite glibc-static devtoolset-8-dyninst-testsuite glibc-static rh-git29-git-all cvsps, perl-Net-SMTP-SSL rh-git29-git-cvs cvsps rh-git29-git-email perl-Net-SMTP-SSL rh-git29-perl-Git-SVN perl-YAML, subversion-perl rh-mariadb101-boost-devel libicu-devel rh-mariadb101-boost-examples libicu-devel rh-mariadb101-boost-static libicu-devel rh-mongodb32-boost-devel libicu-devel rh-mongodb32-boost-examples libicu-devel rh-mongodb32-boost-static libicu-devel rh-mongodb32-yaml-cpp-devel libicu-devel rh-mongodb34-boost-devel libicu-devel rh-mongodb34-boost-examples libicu-devel rh-mongodb34-boost-static libicu-devel rh-mongodb34-yaml-cpp-devel libicu-devel rh-php70-php-imap libc-client rh-php70-php-recode recode Software Collections packages that require the Optional channel in Red Hat Enterprise Linux 7 are listed in the table below. Table 2.2. Packages That Require Enabling of the Optional Channel in Red Hat Enterprise Linux 7 Package from a Software Collection Required Package from the Optional Channel devtoolset-7-dyninst-testsuite glibc-static devtoolset-7-gcc-plugin-devel libmpc-devel devtoolset-8-dyninst-testsuite glibc-static devtoolset-8-gcc-plugin-devel libmpc-devel httpd24-mod_ldap apr-util-ldap rh-eclipse46 ruby-doc rh-eclipse46-eclipse-dltk-ruby ruby-doc rh-eclipse46-eclipse-dltk-sdk ruby-doc rh-eclipse46-eclipse-dltk-tests ruby-doc rh-eclipse46-icu4j-javadoc java-1.8.0-openjdk-javadoc rh-eclipse46-stringtemplate-javadoc java-1.8.0-openjdk-javadoc rh-git218-git-all cvsps, subversion-perl rh-git218-git-cvs cvsps rh-git218-git-svn subversion-perl rh-git218-perl-Git-SVN subversion-perl rh-git29-git-all cvsps rh-git29-git-cvs cvsps rh-git29-perl-Git-SVN subversion-perl Note that packages from the Optional channel are not supported. For details, see the Knowledgebase article at https://access.redhat.com/articles/1150793 . 2.2. Installing Red Hat Software Collections Red Hat Software Collections is distributed as a collection of RPM packages that can be installed, updated, and uninstalled by using the standard package management tools included in Red Hat Enterprise Linux. Note that a valid subscription is required to install Red Hat Software Collections on your system. For detailed instructions on how to associate your system with an appropriate subscription and get access to Red Hat Software Collections, see Section 2.1, "Getting Access to Red Hat Software Collections" . Use of Red Hat Software Collections 3.2 requires the removal of any earlier pre-release versions, including Beta releases. 
If you have installed any version of Red Hat Software Collections 3.2, uninstall it from your system and install the new version as described in the Section 2.3, "Uninstalling Red Hat Software Collections" and Section 2.2.1, "Installing Individual Software Collections" sections. The in-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 is not supported by Red Hat Software Collections. As a consequence, the installed Software Collections might not work correctly after the upgrade. If you want to upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7, it is strongly recommended to remove all Red Hat Software Collections packages, perform the in-place upgrade, update the Red Hat Software Collections repository, and install the Software Collections packages again. It is advisable to back up all data before upgrading. 2.2.1. Installing Individual Software Collections To install any of the Software Collections that are listed in Table 1.1, "Red Hat Software Collections 3.2 Components" , install the corresponding meta package by typing the following at a shell prompt as root : yum install software_collection ... Replace software_collection with a space-separated list of Software Collections you want to install. For example, to install rh-php72 and rh-mariadb102 , type as root : This installs the main meta package for the selected Software Collection and a set of required packages as its dependencies. For information on how to install additional packages such as additional modules, see Section 2.2.2, "Installing Optional Packages" . 2.2.2. Installing Optional Packages Each component of Red Hat Software Collections is distributed with a number of optional packages that are not installed by default. To list all packages that are part of a certain Software Collection but are not installed on your system, type the following at a shell prompt: yum list available software_collection -\* To install any of these optional packages, type as root : yum install package_name ... Replace package_name with a space-separated list of packages that you want to install. For example, to install the rh-perl526-perl-CPAN and rh-perl526-perl-Archive-Tar packages, type: 2.2.3. Installing Debugging Information To install debugging information for any of the Red Hat Software Collections packages, make sure that the yum-utils package is installed and type the following command as root : debuginfo-install package_name For example, to install debugging information for the rh-ruby25-ruby package, type: Note that you need to have access to the repository with these packages. If your system is registered with Red Hat Subscription Management, enable the rhel- variant -rhscl-6-debug-rpms or rhel- variant -rhscl-7-debug-rpms repository as described in Section 2.1.1, "Using Red Hat Subscription Management" . For more information on how to get access to debuginfo packages, see https://access.redhat.com/solutions/9907 . 2.3. Uninstalling Red Hat Software Collections To uninstall any of the Software Collections components, type the following at a shell prompt as root : yum remove software_collection \* Replace software_collection with the Software Collection component you want to uninstall. Note that uninstallation of the packages provided by Red Hat Software Collections does not affect the Red Hat Enterprise Linux system versions of these tools. 2.4. Rebuilding Red Hat Software Collections <collection>-build packages are not provided by default.
If you wish to rebuild a collection and do not want to or cannot use the rpmbuild --define 'scl foo' command, you first need to rebuild the metapackage, which provides the <collection>-build package. Note that existing collections should not be rebuilt with different content. To add new packages into an existing collection, you need to create a new collection containing the new packages and make it dependent on packages from the original collection. The original collection has to be used without changes. For detailed information on building Software Collections, refer to the Red Hat Software Collections Packaging Guide .
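As a rough illustration of the rpmbuild-based approach mentioned above, and assuming a hypothetical collection named rh-example with a source RPM for one of its packages, the scl macro can be passed on the command line; the collection and package names here are placeholders only:
rpmbuild --rebuild example-package-1.0-1.src.rpm --define 'scl rh-example'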
[ "rhel- variant -rhscl-6-rpms rhel- variant -rhscl-6-debug-rpms rhel- variant -rhscl-6-source-rpms rhel-server-rhscl-6-eus-rpms rhel-server-rhscl-6-eus-source-rpms rhel-server-rhscl-6-eus-debug-rpms rhel- variant -rhscl-7-rpms rhel- variant -rhscl-7-debug-rpms rhel- variant -rhscl-7-source-rpms rhel-server-rhscl-7-eus-rpms rhel-server-rhscl-7-eus-source-rpms rhel-server-rhscl-7-eus-debug-rpms>", "~]# yum install rh-php72 rh-mariadb102", "~]# yum install rh-perl526-perl-CPAN rh-perl526-perl-Archive-Tar", "~]# debuginfo-install rh-ruby25-ruby" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.2_release_notes/chap-Installation
Chapter 59. JmxTransQueryTemplate schema reference
Chapter 59. JmxTransQueryTemplate schema reference Used in: JmxTransSpec Property Description targetMBean If a wildcard is used instead of a specific MBean, data is gathered from multiple MBeans. Otherwise, if a specific MBean is given, data is gathered from that MBean only. string attributes Determines which attributes of the targeted MBean should be included. string array outputs List of the names of output definitions specified in spec.kafka.jmxTrans.outputDefinitions that define where JMX metrics are pushed to, and in which data format. string array
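For illustration only, a single query template using the properties above might look like the following sketch; the MBean pattern, the attribute name, and the standardOut output name are placeholder assumptions and must match output definitions that you have actually declared:
- targetMBean: "kafka.server:type=BrokerTopicMetrics,name=*"
  attributes:
    - Count
  outputs:
    - standardOut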
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-JmxTransQueryTemplate-reference
Deploying OpenShift Data Foundation using IBM Power
Deploying OpenShift Data Foundation using IBM Power Red Hat OpenShift Data Foundation 4.9 Instructions on deploying Red Hat OpenShift Data Foundation on IBM Power Red Hat Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_power/index
Index
Index Symbols /etc/fstab file mounting file systems with, Mounting File Systems Automatically with /etc/fstab updating, Updating /etc/fstab /etc/group file group, role in, /etc/group user account, role in, /etc/group /etc/gshadow file group, role in, /etc/gshadow user account, role in, /etc/gshadow /etc/mtab file, Viewing /etc/mtab /etc/passwd file group, role in, /etc/passwd user account, role in, /etc/passwd /etc/shadow file group, role in, /etc/shadow user account, role in, /etc/shadow /proc/mdstat file, Checking Array Status With /proc/mdstat /proc/mounts file, Viewing /proc/mounts A abuse, resource, What Barriers Are in Place To Prevent Abuse of Resources account (see user account) ATA disk drive adding, Adding ATA Disk Drives automation, Automation overview of, Automate Everything B backups AMANDA backup software, The Advanced Maryland Automatic Network Disk Archiver (AMANDA) building software, Backup Software: Buy Versus Build buying software, Backup Software: Buy Versus Build data-related issues surrounding, Different Data: Different Backup Needs introduction to, Backups media types, Backup Media disk, Disk network, Network tape, Tape restoration issues, Restoration Issues bare metal restorations, Restoring From Bare Metal testing restoration, Testing Backups schedule, modifying, Modifying the Backup Schedule storage of, Storage of Backups technologies used, Backup Technologies cpio, cpio dump, dump/restore: Not Recommended for Mounted File Systems! tar, tar types of, Types of Backups differential backups, Differential Backups full backups, Full Backups incremental backups, Incremental Backups bandwidth-related resources (see resources, system, bandwidth) bash shell, automation and, Automation business, knowledge of, Know Your Business C cache memory, Cache Memory capacity planning, Monitoring System Capacity CD-ROM file system (see ISO 9660 file system) centralized home directory, Home Directories chage command, User Account and Group Applications change control, Change Control chfn command, User Account and Group Applications chgrp command, File Permission Applications chmod command, File Permission Applications chown command, File Permission Applications chpasswd command, User Account and Group Applications color laser printers, Color Laser Printers communication necessity of, Communicate as Much as Possible Red Hat Enterprise Linux-specific information, Documentation and Communication CPU power (see resources, system, processing power) D daisy-wheel printers (see impact printers) data shared access to, Who Can Access Shared Data , Where Users Access Shared Data global ownership issues, Global Ownership Issues device alternative to device names, Alternatives to Device File Names device names, alternatives to, Alternatives to Device File Names devlabel, naming with, Using devlabel file names, Device Files file system labels, File System Labels labels, file system, File System Labels naming convention, Device Naming Conventions naming with devlabel, Using devlabel partition, Partition type, Device Type unit, Unit whole-device access, Whole-Device Access devlabel, Using devlabel df command, Issuing the df Command disaster planning, Planning for Disaster power, backup, Backup Power generator, Providing Power For the Few Hours (and Beyond) motor-generator set, Providing Power For the Few Seconds outages, extended, Planning for Extended Outages UPS, Providing Power For the Few Minutes types of disasters, Types of Disasters air conditioning, Heating, Ventilation, and Air Conditioning 
application failures, Application Failures building integrity, Building Integrity electrical, Electricity electricity, quality of, Power Quality electricity, security of, The Security of Your Power environmental failures, Environmental Failures hardware failures, Hardware Failures heating, Heating, Ventilation, and Air Conditioning human errors, Human Errors HVAC, Heating, Ventilation, and Air Conditioning improper repairs, Improperly-Repaired Hardware improperly-used applications, Improper Use of Applications maintenance-related errors, Mistakes Made During Maintenance misconfiguration errors, Misconfiguration Errors mistakes during procedures, Mistakes Made During Procedures operating system crashes, Crashes operating system failures, Operating System Failures operating system hangs, Hangs operator errors, Operations Personnel Errors procedural errors, Failure to Follow Procedures service technician errors, Service Technician Errors software failures, Software Failures system administrator errors, System Administrator Errors user errors, End-User Errors ventilation, Heating, Ventilation, and Air Conditioning weather-related, Weather and the Outside World disaster recovery backup site, Backup Sites: Cold, Warm, and Hot network connectivity to, Network Connectivity to the Backup Site staffing of, Backup Site Staffing backups, availability of, Availability of Backups end of, Moving Back Toward Normalcy hardware availability, Hardware and Software Availability introduction to, Disaster Recovery plan, creating, testing, implementing, Creating, Testing, and Implementing a Disaster Recovery Plan software availability, Hardware and Software Availability disk drives, Hard Drives disk quotas enabling, Enabling Disk Quotas introduction to, Implementing Disk Quotas management of, Managing Disk Quotas overview of, Some Background on Disk Quotas block usage tracking, Tracks Disk Block Usage file-system specific, Per-File-System Implementation grace period, Grace Periods group specific, Per-Group Space Accounting hard limits, Hard Limits inode usage tracking, Tracks Disk Inode Usage soft limits, Soft Limits user specific, Per-User Space Accounting disk space (see storage) documentation Red Hat Enterprise Linux-specific information, Documentation and Communication documentation, necessity of, Document Everything dot-matrix printers (see impact printers) E engineering, social, The Risks of Social Engineering execute permission, User Accounts, Groups, and Permissions EXT2 file system, EXT2 EXT3 file system, EXT3 F file names device, Device Files file system labels, File System Labels free command, free , Red Hat Enterprise Linux-Specific Information G GID, Usernames and UIDs, Groups and GIDs gnome-system-monitor command, The GNOME System Monitor -- A Graphical top gpasswd command, User Account and Group Applications group files controlling, Files Controlling User Accounts and Groups /etc/group, /etc/group /etc/gshadow, /etc/gshadow /etc/passwd, /etc/passwd /etc/shadow, /etc/shadow GID, Usernames and UIDs, Groups and GIDs management of, Managing User Accounts and Resource Access permissions related to, User Accounts, Groups, and Permissions execute, User Accounts, Groups, and Permissions read, User Accounts, Groups, and Permissions setgid, User Accounts, Groups, and Permissions setuid, User Accounts, Groups, and Permissions sticky bit, User Accounts, Groups, and Permissions write, User Accounts, Groups, and Permissions shared data access using, Shared Groups and Data structure, determining, Determining 
Group Structure system GIDs, Usernames and UIDs, Groups and GIDs system UIDs, Usernames and UIDs, Groups and GIDs tools for managing, User Account and Group Applications gpasswd command, User Account and Group Applications groupadd command, User Account and Group Applications groupdel command, User Account and Group Applications groupmod command, User Account and Group Applications grpck command, User Account and Group Applications UID, Usernames and UIDs, Groups and GIDs group ID (see GID) groupadd command, User Account and Group Applications groupdel command, User Account and Group Applications groupmod command, User Account and Group Applications grpck command, User Account and Group Applications H hard drives, Hard Drives hardware failures of, Hardware Failures service contracts, Service Contracts availability of parts, Parts Availability budget for, Available Budget coverage hours, Hours of Coverage depot service, Depot Service drop-off service, Depot Service hardware covered, Hardware to be Covered on-site technician, Zero Response Time -- Having an On-Site Technician response time, Response Time walk-in service, Depot Service skills necessary to repair, Having the Skills spares keeping, Keeping Spare Hardware stock, quantities, How Much to Stock? stock, selection of, What to Stock? swapping hardware, Spares That Are Not Spares home directory centralized, Home Directories I IDE interface overview of, IDE/ATA impact printers, Impact Printers consumables, Impact Printer Consumables daisy-wheel, Impact Printers dot-matrix, Impact Printers line, Impact Printers inkjet printers, Inkjet Printers consumables, Inkjet Consumables intrusion detection systems, Security iostat command, The Sysstat Suite of Resource Monitoring Tools , Monitoring Bandwidth on Red Hat Enterprise Linux ISO 9660 file system, ISO 9660 L laser printers, Laser Printers color, Color Laser Printers consumables, Laser Printer Consumables line printers (see impact printers) logical volume management (see LVM) LVM contrasted with RAID, With LVM, Why Use RAID? data migration, Data Migration logical volume resizing, Logical Volume Resizing migration, data, Data Migration overview of, Logical Volume Management resizing, logical volume, Logical Volume Resizing storage grouping, Physical Storage Grouping M managing printers, Printers and Printing memory monitoring of, Monitoring Memory resource utilization of, Physical and Virtual Memory virtual memory, Basic Virtual Memory Concepts backing store, Backing Store -- the Central Tenet of Virtual Memory overview of, Virtual Memory in Simple Terms page faults, Page Faults performance of, Virtual Memory Performance Implications performance, best case, Best Case Performance Scenario performance, worst case, Worst Case Performance Scenario swapping, Swapping virtual address space, Virtual Memory: The Details working set, The Working Set monitoring resources, Resource Monitoring system performance, System Performance Monitoring monitoring statistics bandwidth-related, Monitoring Bandwidth CPU-related, Monitoring CPU Power memory-related, Monitoring Memory selection of, What to Monitor? 
storage-related, Monitoring Storage mount points (see storage, file system, mount point) mounting file systems (see storage, file system, mounting) mpstat command, The Sysstat Suite of Resource Monitoring Tools MSDOS file system, MSDOS N NFS, NFS O OProfile, Red Hat Enterprise Linux-Specific Information , OProfile P page description languages (PDL), Printer Languages and Technologies Interpress, Printer Languages and Technologies PCL, Printer Languages and Technologies PostScript, Printer Languages and Technologies page faults, Page Faults PAM, Security partition, Partition attributes of, Partition Attributes geometry, Geometry type, Partition Type type field, Partition Type Field creation of, Partitioning , Partitioning extended, Extended Partitions logical, Logical Partitions overview of, Partitions/Slices primary, Primary Partitions passwd command, User Account and Group Applications password, Passwords aging, Password Aging big character set used in, Expanded Character Set longer, Longer Passwords memorable, Memorable personal info used in, Personal Information repeatedly used, The Same Password for Multiple Systems shortness of, Short Passwords small character set used in, Limited Character Set strong, Strong Passwords weak, Weak Passwords word tricks used in, Simple Word Tricks words used in, Recognizable Words written, Passwords on Paper perl, automation and, Automation permissions, User Accounts, Groups, and Permissions tools for managing chgrp command, File Permission Applications chmod command, File Permission Applications chown command, File Permission Applications philosophy of system administration, The Philosophy of System Administration physical memory (see memory) planning, importance of, Plan Ahead Pluggable Authentication Modules (see PAM) printers additional resources, Additional Resources color, Inkjet Printers CMYK, Inkjet Printers inkjet, Inkjet Printers laser, Color Laser Printers considerations, Printing Considerations duplex, Function languages (see page description languages (PDL)) local, Networked Versus Local Printers managing, Printers and Printing networked, Networked Versus Local Printers types, Types of Printers color laser, Color Laser Printers daisy-wheel, Impact Printers dot-matrix, Impact Printers dye-sublimation, Other Printer Types impact, Impact Printers inkjet, Inkjet Printers laser, Laser Printers line, Impact Printers solid ink, Other Printer Types thermal wax, Other Printer Types processing power, resources related to (see resources, system, processing power) Q quota, disk (see disk quotas) R RAID arrays management of, Day to Day Management of RAID Arrays raidhotadd command, use of, Rebuilding a RAID array rebuilding, Rebuilding a RAID array status, checking, Checking Array Status With /proc/mdstat arrays, creating, Creating RAID Arrays after installation time, After Red Hat Enterprise Linux Has Been Installed at installation time, While Installing Red Hat Enterprise Linux contrasted with LVM, With LVM, Why Use RAID? 
creating arrays (see RAID, arrays, creating) implementations of, RAID Implementations hardware RAID, Hardware RAID software RAID, Software RAID introduction to, RAID-Based Storage levels of, RAID Levels nested RAID, Nested RAID Levels RAID 0, RAID 0 RAID 0, advantages of, RAID 0 RAID 0, disadvantages of, RAID 0 RAID 1, RAID 1 RAID 1, advantages of, RAID 1 RAID 1, disadvantages of, RAID 1 RAID 5, RAID 5 RAID 5, advantages of, RAID 5 RAID 5, disadvantages of, RAID 5 nested RAID, Nested RAID Levels overview of, Basic Concepts raidhotadd command, use of, Rebuilding a RAID array RAM, Main Memory -- RAM read permission, User Accounts, Groups, and Permissions recursion (see recursion) Red Hat Enterprise Linux-specific information automation, Automation backup technologies AMANDA, The Advanced Maryland Automatic Network Disk Archiver (AMANDA) cpio, cpio dump, dump/restore: Not Recommended for Mounted File Systems! tar, tar backups technologies overview of, Backup Technologies bash shell, Automation communication, Documentation and Communication disaster recovery, Red Hat Enterprise Linux-Specific Information documentation, Documentation and Communication intrusion detection systems, Security PAM, Security perl, Automation resource monitoring bandwidth, Red Hat Enterprise Linux-Specific Information CPU power, Red Hat Enterprise Linux-Specific Information memory, Red Hat Enterprise Linux-Specific Information resource monitoring tools, Red Hat Enterprise Linux-Specific Information free, Red Hat Enterprise Linux-Specific Information , Red Hat Enterprise Linux-Specific Information iostat, Monitoring Bandwidth on Red Hat Enterprise Linux OProfile, Red Hat Enterprise Linux-Specific Information sar, Monitoring Bandwidth on Red Hat Enterprise Linux , Monitoring CPU Utilization on Red Hat Enterprise Linux , Red Hat Enterprise Linux-Specific Information Sysstat, Red Hat Enterprise Linux-Specific Information top, Red Hat Enterprise Linux-Specific Information , Monitoring CPU Utilization on Red Hat Enterprise Linux vmstat, Red Hat Enterprise Linux-Specific Information , Monitoring Bandwidth on Red Hat Enterprise Linux , Monitoring CPU Utilization on Red Hat Enterprise Linux , Red Hat Enterprise Linux-Specific Information RPM, Security security, Security shell scripts, Automation software support, Software Support support, software, Software Support resource abuse, What Barriers Are in Place To Prevent Abuse of Resources resource monitoring, Resource Monitoring bandwidth, Monitoring Bandwidth capacity planning, Monitoring System Capacity concepts behind, Basic Concepts CPU power, Monitoring CPU Power memory, Monitoring Memory storage, Monitoring Storage system capacity, Monitoring System Capacity system performance, System Performance Monitoring tools free, free GNOME System Monitor, The GNOME System Monitor -- A Graphical top iostat, The Sysstat Suite of Resource Monitoring Tools mpstat, The Sysstat Suite of Resource Monitoring Tools OProfile, OProfile sa1, The Sysstat Suite of Resource Monitoring Tools sa2, The Sysstat Suite of Resource Monitoring Tools sadc, The Sysstat Suite of Resource Monitoring Tools sar, The Sysstat Suite of Resource Monitoring Tools , The sar command Sysstat, The Sysstat Suite of Resource Monitoring Tools top, top vmstat, vmstat tools used, Red Hat Enterprise Linux-Specific Information what to monitor, What to Monitor? 
resources, importance of, Know Your Resources resources, system bandwidth, Bandwidth and Processing Power buses role in, Buses buses, examples of, Examples of Buses capacity, increasing, Increase the Capacity datapaths, examples of, Examples of Datapaths datapaths, role in, Datapaths load, reducing, Reduce the Load load, spreading, Spread the Load monitoring of, Monitoring Bandwidth overview of, Bandwidth problems related to, Potential Bandwidth-Related Problems solutions to problems with, Potential Bandwidth-Related Solutions memory (see memory) processing power, Bandwidth and Processing Power application overhead, reducing, Reducing Application Overhead application use of, Applications applications, eliminating, Eliminating Applications Entirely capacity, increasing, Increasing the Capacity consumers of, Consumers of Processing Power CPU, upgrading, Upgrading the CPU facts related to, Facts About Processing Power load, reducing, Reducing the Load monitoring of, Monitoring CPU Power O/S overhead, reducing, Reducing Operating System Overhead operating system use of, The Operating System overview of, Processing Power shortage of, improving, Improving a CPU Shortage SMP, Is Symmetric Multiprocessing Right for You? symmetric multiprocessing, Is Symmetric Multiprocessing Right for You? upgrading, Upgrading the CPU storage (see storage) RPM, Security RPM Package Manager (see RPM) S sa1 command, The Sysstat Suite of Resource Monitoring Tools sa2 command, The Sysstat Suite of Resource Monitoring Tools sadc command, The Sysstat Suite of Resource Monitoring Tools sar command, The Sysstat Suite of Resource Monitoring Tools , The sar command , Monitoring Bandwidth on Red Hat Enterprise Linux , Monitoring CPU Utilization on Red Hat Enterprise Linux , Red Hat Enterprise Linux-Specific Information reports, reading, Reading sar Reports SCSI disk drive adding, Adding SCSI Disk Drives SCSI interface overview of, SCSI security importance of, Security Cannot be an Afterthought Red Hat Enterprise Linux-specific information, Security setgid permission, Security , User Accounts, Groups, and Permissions setuid permission, Security , User Accounts, Groups, and Permissions shell scripts, Automation SMB, SMB SMP, Is Symmetric Multiprocessing Right for You? 
social engineering, risks of, The Risks of Social Engineering software support for documentation, Documentation email support, Web or Email Support on-site support, On-Site Support overview, Getting Help -- Software Support self support, Self Support telephone support, Telephone Support Web support, Web or Email Support sticky bit permission, User Accounts, Groups, and Permissions storage adding, Adding Storage , Adding Storage /etc/fstab, updating, Updating /etc/fstab ATA disk drive, Adding ATA Disk Drives backup schedule, modifying, Modifying the Backup Schedule configuration, updating, Updating System Configuration formatting, Formatting the Partition(s) , Formatting the Partition(s) hardware, installing, Installing the Hardware partitioning, Partitioning , Partitioning SCSI disk drive, Adding SCSI Disk Drives deploying, Making the Storage Usable disk quotas, Disk Quota Issues (see disk quotas) file system, File Systems , File System Basics /etc/mtab file, Viewing /etc/mtab /proc/mounts file, Viewing /proc/mounts access control, Access Control access times, Tracking of File Creation, Access, Modification Times accounting, space, Accounting of Space Utilized creation times, Tracking of File Creation, Access, Modification Times df command, using, Issuing the df Command directories, Hierarchical Directory Structure display of mounted, Seeing What is Mounted enabling access to, Enabling Storage Access EXT2, EXT2 EXT3, EXT3 file-based, File-Based Storage hierarchical directory, Hierarchical Directory Structure ISO 9660, ISO 9660 modification times, Tracking of File Creation, Access, Modification Times mount point, Mount Points mounting, Mounting File Systems mounting with /etc/fstab file, Mounting File Systems Automatically with /etc/fstab MSDOS, MSDOS space accounting, Accounting of Space Utilized structure, directory, Directory Structure VFAT, VFAT file-related issues, File-Related Issues file access, File Access file sharing, File Sharing management of, Managing Storage , Storage Management Day-to-Day application usage, Excessive Usage by an Application excessive use of, Excessive Usage by a User free space monitoring, Monitoring Free Space growth, normal, Normal Growth in Usage user issues, Handling a User's Excessive Usage mass-storage devices access arm movement, Access Arm Movement access arms, Access Arms addressing concepts, Storage Addressing Concepts addressing, block-based, Block-Based Addressing addressing, geometry-based, Geometry-Based Addressing block-based addressing, Block-Based Addressing command processing, Command Processing Time cylinder, Cylinder disk platters, Disk Platters electrical limitations of, Mechanical/Electrical Limitations geometry, problems with, Problems with Geometry-Based Addressing geometry-based addressing, Geometry-Based Addressing head, Head heads, Data reading/writing device heads reading, Heads Reading/Writing Data heads writing, Heads Reading/Writing Data I/O loads, performance, I/O Loads and Performance I/O loads, reads, Reads Versus Writes I/O loads, writes, Reads Versus Writes I/O locality, Locality of Reads/Writes IDE interface, IDE/ATA industry-standard interfaces, Present-Day Industry-Standard Interfaces interfaces for, Mass Storage Device Interfaces interfaces, historical, Historical Background interfaces, industry-standard, Present-Day Industry-Standard Interfaces latency, rotational, Rotational Latency mechanical limitations of, Mechanical/Electrical Limitations movement, access arm, Access Arm Movement overview of, An Overview of 
Storage Hardware performance of, Hard Drive Performance Characteristics platters, disk, Disk Platters processing, command, Command Processing Time readers versus writers, Multiple Readers/Writers rotational latency, Rotational Latency SCSI interface, SCSI sector, Sector monitoring of, Monitoring Storage network-accessible, Network-Accessible Storage , Network-Accessible Storage Under Red Hat Enterprise Linux NFS, NFS SMB, SMB partition attributes of, Partition Attributes extended, Extended Partitions geometry of, Geometry logical, Logical Partitions overview of, Partitions/Slices primary, Primary Partitions type field, Partition Type Field type of, Partition Type patterns of access, Storage Access Patterns RAID-based (see RAID) removing, Removing Storage , Removing Storage /etc/fstab, removing from, Remove the Disk Drive's Partitions From /etc/fstab data, removing, Moving Data Off the Disk Drive erasing contents, Erase the Contents of the Disk Drive , Erase the Contents of the Disk Drive umount command, use of, Terminating Access With umount technologies, The Storage Spectrum backup storage, Off-Line Backup Storage cache memory, Cache Memory CPU registers, CPU Registers disk drive, Hard Drives hard drive, Hard Drives L1 cache, Cache Levels L2 cache, Cache Levels main memory, Main Memory -- RAM off-line storage, Off-Line Backup Storage RAM, Main Memory -- RAM technologies, advanced, Advanced Storage Technologies swapping, Swapping symmetric multiprocessing, Is Symmetric Multiprocessing Right for You? Sysstat, Red Hat Enterprise Linux-Specific Information , The Sysstat Suite of Resource Monitoring Tools system administration philosophy of, The Philosophy of System Administration automation, Automate Everything business, Know Your Business communication, Communicate as Much as Possible documentation, Document Everything planning, Plan Ahead resources, Know Your Resources security, Security Cannot be an Afterthought social engineering, risks of, The Risks of Social Engineering unexpected occurrences, Expect the Unexpected users, Know Your Users system performance monitoring, System Performance Monitoring system resources (see resources, system) T tools groups, managing (see group, tools for managing) resource monitoring, Red Hat Enterprise Linux-Specific Information free, free GNOME System Monitor, The GNOME System Monitor -- A Graphical top iostat, The Sysstat Suite of Resource Monitoring Tools mpstat, The Sysstat Suite of Resource Monitoring Tools OProfile, OProfile sa1, The Sysstat Suite of Resource Monitoring Tools sa2, The Sysstat Suite of Resource Monitoring Tools sadc, The Sysstat Suite of Resource Monitoring Tools sar, The Sysstat Suite of Resource Monitoring Tools , The sar command Sysstat, The Sysstat Suite of Resource Monitoring Tools top, top vmstat, vmstat user accounts, managing (see user account, tools for managing) top command, Red Hat Enterprise Linux-Specific Information , top , Monitoring CPU Utilization on Red Hat Enterprise Linux U UID, Usernames and UIDs, Groups and GIDs unexpected, preparation for, Expect the Unexpected user account access control, Access Control Information files controlling, Files Controlling User Accounts and Groups /etc/group, /etc/group /etc/gshadow, /etc/gshadow /etc/passwd, /etc/passwd /etc/shadow, /etc/shadow GID, Usernames and UIDs, Groups and GIDs home directory centralized, Home Directories management of, Managing User Accounts and Resource Access , Managing User Accounts , Managing Accounts and Resource Access Day-to-Day job changes, Job 
Changes new hires, New Hires terminations, Terminations password, Passwords aging, Password Aging big character set used in, Expanded Character Set longer, Longer Passwords memorable, Memorable personal information used in, Personal Information repeatedly used, The Same Password for Multiple Systems shortness of, Short Passwords small character set used in, Limited Character Set strong, Strong Passwords weak, Weak Passwords word tricks used in, Simple Word Tricks words used in, Recognizable Words written, Passwords on Paper permissions related to, User Accounts, Groups, and Permissions execute, User Accounts, Groups, and Permissions read, User Accounts, Groups, and Permissions setgid, User Accounts, Groups, and Permissions setuid, User Accounts, Groups, and Permissions sticky bit, User Accounts, Groups, and Permissions write, User Accounts, Groups, and Permissions resources, management of, Managing User Resources shared data access, Who Can Access Shared Data system GIDs, Usernames and UIDs, Groups and GIDs system UIDs, Usernames and UIDs, Groups and GIDs tools for managing, User Account and Group Applications chage command, User Account and Group Applications chfn command, User Account and Group Applications chpasswd command, User Account and Group Applications passwd command, User Account and Group Applications useradd command, User Account and Group Applications userdel command, User Account and Group Applications usermod command, User Account and Group Applications UID, Usernames and UIDs, Groups and GIDs username, The Username changes to, Dealing with Name Changes collisions in naming, Dealing with Collisions naming convention, Naming Conventions user ID (see UID) useradd command, User Account and Group Applications userdel command, User Account and Group Applications usermod command, User Account and Group Applications username, The Username changing, Dealing with Name Changes collisions between, Dealing with Collisions naming convention, Naming Conventions users importance of, Know Your Users V VFAT file system, VFAT virtual address space, Virtual Memory: The Details virtual memory (see memory) vmstat command, Red Hat Enterprise Linux-Specific Information , vmstat , Monitoring Bandwidth on Red Hat Enterprise Linux , Monitoring CPU Utilization on Red Hat Enterprise Linux , Red Hat Enterprise Linux-Specific Information W watch command, free working set, The Working Set write permission, User Accounts, Groups, and Permissions
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/ix01
23.15.2. Adding Partitions
23.15.2. Adding Partitions To add a new partition, select the Create button. A dialog box appears (refer to Figure 23.39, "Creating a New Partition" ). Note You must dedicate at least one partition for this installation, and optionally more. For more information, refer to Appendix A, An Introduction to Disk Partitions . Figure 23.39. Creating a New Partition Mount Point : Enter the partition's mount point. For example, if this partition should be the root partition, enter / ; enter /boot for the /boot partition, and so on. You can also use the pull-down menu to choose the correct mount point for your partition. For a swap partition the mount point should not be set - setting the filesystem type to swap is sufficient. File System Type : Using the pull-down menu, select the appropriate file system type for this partition. For more information on file system types, refer to Section 23.15.2.1, "File System Types" . Allowable Drives : This field contains a list of the hard disks installed on your system. If a hard disk's box is highlighted, then a desired partition can be created on that hard disk. If the box is not checked, then the partition will never be created on that hard disk. By using different checkbox settings, you can have anaconda place partitions where you need them, or let anaconda decide where partitions should go. Size (MB) : Enter the size (in megabytes) of the partition. Note, this field starts with 200 MB; unless changed, only a 200 MB partition will be created. Additional Size Options : Choose whether to keep this partition at a fixed size, to allow it to "grow" (fill up the available hard drive space) to a certain point, or to allow it to grow to fill any remaining hard drive space available. If you choose Fill all space up to (MB) , you must give size constraints in the field to the right of this option. This allows you to keep a certain amount of space free on your hard drive for future use. Force to be a primary partition : Select whether the partition you are creating should be one of the first four partitions on the hard drive. If unselected, the partition is created as a logical partition. Refer to Section A.1.3, "Partitions Within Partitions - An Overview of Extended Partitions" , for more information. Encrypt : Choose whether to encrypt the partition so that the data stored on it cannot be accessed without a passphrase, even if the storage device is connected to another system. Refer to Appendix C, Disk Encryption for information on encryption of storage devices. If you select this option, the installer prompts you to provide a passphrase before it writes the partition to the disk. OK : Select OK once you are satisfied with the settings and wish to create the partition. Cancel : Select Cancel if you do not want to create the partition. 23.15.2.1. File System Types Red Hat Enterprise Linux allows you to create different partition types and file systems. The following is a brief description of the different partition types and file systems available, and how they can be used. Partition types standard partition - A standard partition can contain a file system or swap space, or it can provide a container for software RAID or an LVM physical volume. swap - Swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing. Refer to the Red Hat Enterprise Linux Deployment Guide for additional information. 
software RAID - Creating two or more software RAID partitions allows you to create a RAID device. For more information regarding RAID, refer to the chapter RAID (Redundant Array of Independent Disks) in the Red Hat Enterprise Linux Deployment Guide . physical volume (LVM) - Creating one or more physical volume (LVM) partitions allows you to create an LVM logical volume. LVM can improve performance when using physical disks. For more information regarding LVM, refer to the Red Hat Enterprise Linux Deployment Guide . File systems ext4 - The ext4 file system is based on the ext3 file system and features a number of improvements. These include support for larger file systems and larger files, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling. A maximum file system size of 16TB is supported for ext4. The ext4 file system is selected by default and is highly recommended. Note The mount options user_xattr and acl are automatically set on ext4 systems by the installation system. These options enable extended attributes and access control lists, respectively. More information about mount options can be found in the Red Hat Enterprise Linux Storage Administration Guide . ext3 - The ext3 file system is based on the ext2 file system and has one main advantage - journaling. Using a journaling file system reduces time spent recovering a file system after a crash as there is no need to fsck [12] the file system. A maximum file system size of 16TB is supported for ext3. ext2 - An ext2 file system supports standard Unix file types (regular files, directories, symbolic links, etc). It provides the ability to assign long file names, up to 255 characters. xfs - XFS is a highly scalable, high-performance file system that supports filesystems up to 16 exabytes (approximately 16 million terabytes), files up to 8 exabytes (approximately 8 million terabytes) and directory structures containing tens of millions of entries. XFS supports metadata journaling, which facilitates quicker crash recovery. The XFS file system can also be defragmented and resized while mounted and active. Important Red Hat Enterprise Linux 6.9 does not support XFS on System z. vfat - The VFAT file system is a Linux file system that is compatible with Microsoft Windows long filenames on the FAT file system. Btrfs - Btrfs is under development as a file system capable of addressing and managing more files, larger files, and larger volumes than the ext2, ext3, and ext4 file systems. Btrfs is designed to make the file system tolerant of errors, and to facilitate the detection and repair of errors when they occur. It uses checksums to ensure the validity of data and metadata, and maintains snapshots of the file system that can be used for backup or repair. Because Btrfs is still experimental and under development, the installation program does not offer it by default. If you want to create a Btrfs partition on a drive, you must commence the installation process with the boot option btrfs . Refer to Chapter 28, Boot Options for instructions. Warning Red Hat Enterprise Linux 6.9 includes Btrfs as a technology preview to allow you to experiment with this file system. You should not choose Btrfs for partitions that will contain valuable data or that are essential for the operation of important systems. 
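As a quick illustration of the file system types described above, the following is a minimal sketch of formatting and mounting a partition manually after installation. The device name /dev/sdb1 and the mount point /mnt/data are placeholders; replace them with values from your own system.
# mkfs.ext4 /dev/sdb1
# mkdir -p /mnt/data
# mount -o user_xattr,acl /dev/sdb1 /mnt/data
The user_xattr and acl mount options correspond to the extended attribute and access control list support that the installer enables by default on ext4 file systems.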
[12] The fsck application is used to check the file system for metadata consistency and optionally repair one or more Linux file systems.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/adding_partitions-s390
Chapter 7. Configuring physical switches for OpenStack Networking
Chapter 7. Configuring physical switches for OpenStack Networking This chapter documents the common physical switch configuration steps required for OpenStack Networking. Vendor-specific configuration is included for certain switches. 7.1. Planning your physical network environment The physical network adapters in your OpenStack nodes carry different types of network traffic, such as instance traffic, storage data, or authentication requests. The type of traffic these NICs carry affects how you must configure the ports on the physical switch. First, you must decide which physical NICs on your Compute node you want to carry which types of traffic. Then, when the NIC is cabled to a physical switch port, you must configure the switch port to allow trunked or general traffic. For example, the following diagram depicts a Compute node with two NICs, eth0 and eth1. Each NIC is cabled to a Gigabit Ethernet port on a physical switch, with eth0 carrying instance traffic, and eth1 providing connectivity for OpenStack services: Figure 7.1. Sample network layout Note This diagram does not include any additional redundant NICs required for fault tolerance. Additional resources Network Interface Bonding in the Director Installation and Usage guide. 7.2. Configuring a Cisco Catalyst switch 7.2.1. About trunk ports With OpenStack Networking you can connect instances to the VLANs that already exist on your physical network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the 8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch. 7.2.2. Configuring trunk ports for a Cisco Catalyst switch If using a Cisco Catalyst switch running Cisco IOS, you might use the following configuration syntax to allow traffic for VLANs 110 and 111 to pass through to your instances. This configuration assumes that your physical node has an ethernet cable connected to interface GigabitEthernet1/0/12 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Use the following list to understand these parameters: Field Description interface GigabitEthernet1/0/12 The switch port that the NIC of the Compute node connects to. Ensure that you replace the GigabitEthernet1/0/12 value with the correct port value for your environment. Use the show interface command to view a list of ports. description Trunk to Compute Node A unique and descriptive value that you can use to identify this interface. spanning-tree portfast trunk If your environment uses STP, set this value to instruct Port Fast that this port is used to trunk traffic. switchport trunk encapsulation dot1q Enables the 802.1q trunking standard (rather than ISL). This value varies depending on the configuration that your switch supports. switchport mode trunk Configures this port as a trunk port, rather than an access port, meaning that it allows VLAN traffic to pass through to the virtual switches. switchport trunk native vlan 2 Sets a native VLAN to instruct the switch where to send untagged (non-VLAN) traffic. switchport trunk allowed vlan 2,110,111 Defines which VLANs are allowed through the trunk. 7.2.3. 
About access ports Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other operational requirements, such as transporting management traffic or Block Storage data. These ports are commonly known as access ports and usually require a simpler configuration than trunk ports. 7.2.4. Configuring access ports for a Cisco Catalyst switch Using the example from the Figure 7.1, "Sample network layout" diagram, GigabitEthernet1/0/13 (on a Cisco Catalyst switch) is configured as an access port for eth1 . In this configuration, your physical node has an ethernet cable connected to interface GigabitEthernet1/0/13 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. These settings are described below: Field Description interface GigabitEthernet1/0/13 The switch port that the NIC of the Compute node connects to. Ensure that you replace the GigabitEthernet1/0/13 value with the correct port value for your environment. Use the show interface command to view a list of ports. description Access port for Compute Node A unique and descriptive value that you can use to identify this interface. switchport mode access Configures this port as an access port, rather than a trunk port. switchport access vlan 200 Configures the port to allow traffic on VLAN 200. You must configure your Compute node with an IP address from this VLAN. spanning-tree portfast If using STP, set this value to instruct STP not to attempt to initialize this as a trunk, allowing for quicker port handshakes during initial connections (such as server reboot). 7.2.5. About LACP port aggregation You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends: on the physical NICs, and on the physical switch ports. Additional resources Network Interface Bonding in the Director Installation and Usage guide. 7.2.6. Configuring LACP on the physical NIC You can configure Link Aggregation Control Protocol (LACP) on a physical NIC. Procedure Edit the /home/stack/network-environment.yaml file: Configure the Open vSwitch bridge to use LACP: Additional resources Network Interface Bonding in the Director Installation and Usage guide. 7.2.7. Configuring LACP for a Cisco Catalyst switch In this example, the Compute node has two NICs using VLAN 100: Procedure Physically connect both NICs on the Compute node to the switch (for example, ports 12 and 13). Create the LACP port channel: Configure switch ports 12 (Gi1/0/12) and 13 (Gi1/0/13): Review your new port channel. The resulting output lists the new port-channel Po1 , with member ports Gi1/0/12 and Gi1/0/13 : Note Remember to apply your changes by copying the running-config to the startup-config: copy running-config startup-config . 7.2.8. About MTU settings You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000 bytes) are required for certain NFS or iSCSI traffic. 
Note You must change MTU settings from end-to-end on all hops that the traffic is expected to pass through, including any virtual switches. Additional resources Configuring maximum transmission unit (MTU) settings 7.2.9. Configuring MTU settings for a Cisco Catalyst switch Complete the steps in this example procedure to enable jumbo frames on your Cisco Catalyst 3750 switch. Review the current MTU settings: MTU settings are changed switch-wide on 3750 switches, and not for individual interfaces. Run the following commands to configure the switch to use jumbo frames of 9000 bytes. You might prefer to configure the MTU settings for individual interfaces, if your switch supports this feature. Note Remember to save your changes by copying the running-config to the startup-config: copy running-config startup-config . Reload the switch to apply the change. Important Reloading the switch causes a network outage for any devices that are dependent on the switch. Therefore, reload the switch only during a scheduled maintenance period. After the switch reloads, confirm the new jumbo MTU size. The exact output may differ depending on your switch model. For example, System MTU might apply to non-Gigabit interfaces, and Jumbo MTU might describe all Gigabit interfaces. 7.2.10. About LLDP discovery The ironic-python-agent service listens for LLDP packets from connected switches. The collected information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection process. 7.2.11. Configuring LLDP for a Cisco Catalyst switch Procedure Run the lldp run command to enable LLDP globally on your Cisco Catalyst switch: View any neighboring LLDP-compatible devices: Note Remember to save your changes by copying the running-config to the startup-config: copy running-config startup-config . 7.3. Configuring a Cisco Nexus switch 7.3.1. About trunk ports With OpenStack Networking you can connect instances to the VLANs that already exist on your physical network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the 8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch. 7.3.2. Configuring trunk ports for a Cisco Nexus switch If using a Cisco Nexus you might use the following configuration syntax to allow traffic for VLANs 110 and 111 to pass through to your instances. This configuration assumes that your physical node has an ethernet cable connected to interface Ethernet1/12 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. 7.3.3. About access ports Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other operational requirements, such as transporting management traffic or Block Storage data. These ports are commonly known as access ports and usually require a simpler configuration than trunk ports. 7.3.4. 
Configuring access ports for a Cisco Nexus switch Procedure Using the example from the Figure 7.1, "Sample network layout" diagram, Ethernet1/13 (on a Cisco Nexus switch) is configured as an access port for eth1 . This configuration assumes that your physical node has an ethernet cable connected to interface Ethernet1/13 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. 7.3.5. About LACP port aggregation You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends: on the physical NICs, and on the physical switch ports. Additional resources Network Interface Bonding in the Director Installation and Usage guide. 7.3.6. Configuring LACP on the physical NIC You can configure Link Aggregation Control Protocol (LACP) on a physical NIC. Procedure Edit the /home/stack/network-environment.yaml file: Configure the Open vSwitch bridge to use LACP: Additional resources Network Interface Bonding in the Director Installation and Usage guide. 7.3.7. Configuring LACP for a Cisco Nexus switch In this example, the Compute node has two NICs using VLAN 100: Procedure Physically connect the Compute node NICs to the switch (for example, ports 12 and 13). Confirm that LACP is enabled: Configure ports 1/12 and 1/13 as access ports, and as members of a channel group. Depending on your deployment, you can deploy trunk interfaces rather than access interfaces. For example, for Cisco UCI the NICs are virtual interfaces, so you might prefer to configure access ports exclusively. Often these interfaces contain VLAN tagging configurations. Note When you use PXE to provision nodes on Cisco switches, you might need to set the options no lacp graceful-convergence and no lacp suspend-individual to bring up the ports and boot the server. For more information, see your Cisco switch documentation. 7.3.8. About MTU settings You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000 bytes) are required for certain NFS or iSCSI traffic. Note You must change MTU settings from end-to-end on all hops that the traffic is expected to pass through, including any virtual switches. Additional resources Configuring maximum transmission unit (MTU) settings 7.3.9. Configuring MTU settings for a Cisco Nexus 7000 switch Apply MTU settings to a single interface on 7000-series switches. Procedure Run the following commands to configure interface 1/12 to use jumbo frames of 9000 bytes: 7.3.10. About LLDP discovery The ironic-python-agent service listens for LLDP packets from connected switches. The collected information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection process. 7.3.11. Configuring LLDP for a Cisco Nexus 7000 switch Procedure You can enable LLDP for individual interfaces on Cisco Nexus 7000-series switches: Note Remember to save your changes by copying the running-config to the startup-config: copy running-config startup-config . 7.4. Configuring a Cumulus Linux switch 7.4.1. 
About trunk ports With OpenStack Networking you can connect instances to the VLANs that already exist on your physical network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the 8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch. 7.4.2. Configuring trunk ports for a Cumulus Linux switch This configuration assumes that your physical node has transceivers connected to switch ports swp1 and swp2 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Procedure Use the following configuration syntax to allow traffic for VLANs 100 and 200 to pass through to your instances. 7.4.3. About access ports Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other operational requirements, such as transporting management traffic or Block Storage data. These ports are commonly known as access ports and usually require a simpler configuration than trunk ports. 7.4.4. Configuring access ports for a Cumulus Linux switch This configuration assumes that your physical node has an ethernet cable connected to the interface on the physical switch. Cumulus Linux switches use eth for management interfaces and swp for access/trunk ports. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Procedure Using the example from the Figure 7.1, "Sample network layout" diagram, swp1 (on a Cumulus Linux switch) is configured as an access port. 7.4.5. About LACP port aggregation You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends: on the physical NICs, and on the physical switch ports. Additional resources Network Interface Bonding in the Director Installation and Usage guide. 7.4.6. About MTU settings You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000 bytes) are required for certain NFS or iSCSI traffic. Note You must change MTU settings from end-to-end on all hops that the traffic is expected to pass through, including any virtual switches. Additional resources Configuring maximum transmission unit (MTU) settings 7.4.7. Configuring MTU settings for a Cumulus Linux switch Procedure This example enables jumbo frames on your Cumulus Linux switch. Note Remember to apply your changes by reloading the updated configuration: sudo ifreload -a 7.4.8. About LLDP discovery The ironic-python-agent service listens for LLDP packets from connected switches. The collected information can include the switch name, port details, and available VLANs. 
Similar to Cisco Discovery Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection process. 7.4.9. Configuring LLDP for a Cumulus Linux switch By default, the LLDP service lldpd runs as a daemon and starts when the switch boots. Procedure To view all LLDP neighbors on all ports/interfaces, run the following command: 7.5. Configuring an Extreme Networks EXOS switch 7.5.1. About trunk ports With OpenStack Networking you can connect instances to the VLANs that already exist on your physical network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the 8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch. 7.5.2. Configuring trunk ports on an Extreme Networks EXOS switch If using an X-670 series switch, refer to the following example to allow traffic for VLANs 110 and 111 to pass through to your instances. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Procedure This configuration assumes that your physical node has an ethernet cable connected to interface 24 on the physical switch. In this example, DATA and MNGT are the VLAN names. 7.5.3. About access ports Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other operational requirements, such as transporting management traffic or Block Storage data. These ports are commonly known as access ports and usually require a simpler configuration than trunk ports. 7.5.4. Configuring access ports for an Extreme Networks EXOS switch This configuration assumes that your physical node has an ethernet cable connected to interface 10 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Procedure In this configuration example, on an Extreme Networks X-670 series switch, 10 is used as an access port for eth1 . For example: 7.5.5. About LACP port aggregation You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends: on the physical NICs, and on the physical switch ports. Additional resources Network Interface Bonding in the Director Installation and Usage guide. 7.5.6. Configuring LACP on the physical NIC You can configure Link Aggregation Control Protocol (LACP) on a physical NIC. Procedure Edit the /home/stack/network-environment.yaml file: Configure the Open vSwitch bridge to use LACP: Additional resources Network Interface Bonding in the Director Installation and Usage guide. 7.5.7. Configuring LACP on an Extreme Networks EXOS switch Procedure In this example, the Compute node has two NICs using VLAN 100: For example: Note You might need to adjust the timeout period in the LACP negotiation script. 
For more information, see https://gtacknowledge.extremenetworks.com/articles/How_To/LACP-configured-ports-interfere-with-PXE-DHCP-on-servers 7.5.8. About MTU settings You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000 bytes) are required for certain NFS or iSCSI traffic. Note You must change MTU settings from end-to-end on all hops that the traffic is expected to pass through, including any virtual switches. Additional resources Configuring maximum transmission unit (MTU) settings 7.5.9. Configuring MTU settings on an Extreme Networks EXOS switch Procedure Run the commands in this example to enable jumbo frames on an Extreme Networks EXOS switch and configure support for forwarding IP packets of 9000 bytes: Example 7.5.10. About LLDP discovery The ironic-python-agent service listens for LLDP packets from connected switches. The collected information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection process. 7.5.11. Configuring LLDP settings on an Extreme Networks EXOS switch Procedure In this example, LLDP is enabled on an Extreme Networks EXOS switch. 11 represents the port string: 7.6. Configuring a Juniper EX Series switch 7.6.1. About trunk ports With OpenStack Networking you can connect instances to the VLANs that already exist on your physical network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the 8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch. 7.6.2. Configuring trunk ports for a Juniper EX Series switch Procedure If using a Juniper EX series switch running Juniper JunOS, use the following configuration syntax to allow traffic for VLANs 110 and 111 to pass through to your instances. This configuration assumes that your physical node has an ethernet cable connected to interface ge-1/0/12 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. 7.6.3. About access ports Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other operational requirements, such as transporting management traffic or Block Storage data. These ports are commonly known as access ports and usually require a simpler configuration than trunk ports. 7.6.4. Configuring access ports for a Juniper EX Series switch This example, on a Juniper EX series switch, shows ge-1/0/13 as an access port for eth1 . Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Procedure This configuration assumes that your physical node has an ethernet cable connected to interface ge-1/0/13 on the physical switch. 7.6.5. 
About LACP port aggregation You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends: on the physical NICs, and on the physical switch ports. Additional resources Network Interface Bonding in the Director Installation and Usage guide. 7.6.6. Configuring LACP on the physical NIC You can configure Link Aggregation Control Protocol (LACP) on a physical NIC. Procedure Edit the /home/stack/network-environment.yaml file: Configure the Open vSwitch bridge to use LACP: Additional resources Network Interface Bonding in the Director Installation and Usage guide. 7.6.7. Configuring LACP for a Juniper EX Series switch In this example, the Compute node has two NICs using VLAN 100. Procedure Physically connect the Compute node's two NICs to the switch (for example, ports 12 and 13). Create the port aggregate: Configure switch ports 12 (ge-1/0/12) and 13 (ge-1/0/13) to join the port aggregate ae1 : Note For Red Hat OpenStack Platform director deployments, in order to PXE boot from the bond, you must configure one of the bond members as lacp force-up to ensure that only one bond member comes up during introspection and first boot. The bond member that you configure with lacp force-up must be the same bond member that has the MAC address in instackenv.json (the MAC address known to ironic must be the same MAC address configured with force-up). Enable LACP on port aggregate ae1 : Add aggregate ae1 to VLAN 100: Review your new port channel. The resulting output lists the new port aggregate ae1 with member ports ge-1/0/12 and ge-1/0/13 : Note Remember to apply your changes by running the commit command. 7.6.8. About MTU settings You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000 bytes) are required for certain NFS or iSCSI traffic. Note You must change MTU settings from end-to-end on all hops that the traffic is expected to pass through, including any virtual switches. Additional resources Configuring maximum transmission unit (MTU) settings 7.6.9. Configuring MTU settings for a Juniper EX Series switch This example enables jumbo frames on your Juniper EX4200 switch. Note The MTU value is calculated differently depending on whether you are using Juniper or Cisco devices. For example, an MTU of 9216 on Juniper is equivalent to 9202 on Cisco. The extra bytes are used for L2 headers: Cisco adds them automatically on top of the MTU value specified, whereas on Juniper the usable MTU is 14 bytes smaller than the value specified. Therefore, to support an MTU of 9000 on the VLANs, you must configure an MTU of 9014 on Juniper. Procedure For Juniper EX series switches, MTU settings are set for individual interfaces. These commands configure jumbo frames on the ge-1/0/14 and ge-1/0/15 ports: Note Remember to save your changes by running the commit command. If using a LACP aggregate, you will need to set the MTU size there, and not on the member NICs. For example, this setting configures the MTU size for the ae1 aggregate: 7.6.10. About LLDP discovery The ironic-python-agent service listens for LLDP packets from connected switches. The collected information can include the switch name, port details, and available VLANs. 
Similar to Cisco Discovery Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection process. 7.6.11. Configuring LLDP for a Juniper EX Series switch You can enable LLDP globally for all interfaces, or just for individual ones. Procedure Use the following to enable LLDP globally on your Juniper EX 4200 switch: Use the following to enable LLDP for the single interface ge-1/0/14 : Note Remember to apply your changes by running the commit command.
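The MTU procedures in this chapter stress that jumbo frame settings must match on every hop in the path. As an informal sanity check after reconfiguring the switches, you might send a non-fragmentable ICMP payload between two hosts on the jumbo-frame VLAN. This is a minimal sketch; the destination address 192.0.2.10 is only a placeholder.
ping -c 3 -M do -s 8972 192.0.2.10
The -M do option forbids fragmentation and -s 8972 accounts for the 20-byte IP header and 8-byte ICMP header within a 9000-byte MTU, so successful replies suggest that every device on the path accepts jumbo frames.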
[ "interface GigabitEthernet1/0/12 description Trunk to Compute Node spanning-tree portfast trunk switchport trunk encapsulation dot1q switchport mode trunk switchport trunk native vlan 2 switchport trunk allowed vlan 2,110,111", "interface GigabitEthernet1/0/13 description Access port for Compute Node switchport mode access switchport access vlan 200 spanning-tree portfast", "- type: linux_bond name: bond1 mtu: 9000 bonding_options:{get_param: BondInterfaceOvsOptions}; members: - type: interface name: nic3 mtu: 9000 primary: true - type: interface name: nic4 mtu: 9000", "BondInterfaceOvsOptions: \"mode=802.3ad\"", "interface port-channel1 switchport access vlan 100 switchport mode access spanning-tree guard root", "sw01# config t Enter configuration commands, one per line. End with CNTL/Z. sw01(config) interface GigabitEthernet1/0/12 switchport access vlan 100 switchport mode access speed 1000 duplex full channel-group 10 mode active channel-protocol lacp interface GigabitEthernet1/0/13 switchport access vlan 100 switchport mode access speed 1000 duplex full channel-group 10 mode active channel-protocol lacp", "sw01# show etherchannel summary <snip> Number of channel-groups in use: 1 Number of aggregators: 1 Group Port-channel Protocol Ports ------+-------------+-----------+----------------------------------------------- 1 Po1(SD) LACP Gi1/0/12(D) Gi1/0/13(D)", "sw01# show system mtu System MTU size is 1600 bytes System Jumbo MTU size is 1600 bytes System Alternate MTU size is 1600 bytes Routing MTU size is 1600 bytes", "sw01# config t Enter configuration commands, one per line. End with CNTL/Z. sw01(config)# system mtu jumbo 9000 Changes to the system jumbo MTU will not take effect until the next reload is done", "sw01# reload Proceed with reload? [confirm]", "sw01# show system mtu System MTU size is 1600 bytes System Jumbo MTU size is 9000 bytes System Alternate MTU size is 1600 bytes Routing MTU size is 1600 bytes", "sw01# config t Enter configuration commands, one per line. End with CNTL/Z. 
sw01(config)# lldp run", "sw01# show lldp neighbor Capability codes: (R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device (W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other Device ID Local Intf Hold-time Capability Port ID DEP42037061562G3 Gi1/0/11 180 B,T 422037061562G3:P1 Total entries displayed: 1", "interface Ethernet1/12 description Trunk to Compute Node switchport mode trunk switchport trunk allowed vlan 2,110,111 switchport trunk native vlan 2 end", "interface Ethernet1/13 description Access port for Compute Node switchport mode access switchport access vlan 200", "- type: linux_bond name: bond1 mtu: 9000 bonding_options:{get_param: BondInterfaceOvsOptions}; members: - type: interface name: nic3 mtu: 9000 primary: true - type: interface name: nic4 mtu: 9000", "BondInterfaceOvsOptions: \"mode=802.3ad\"", "(config)# show feature | include lacp lacp 1 enabled", "interface Ethernet1/13 description Access port for Compute Node switchport mode access switchport access vlan 200 channel-group 10 mode active interface Ethernet1/13 description Access port for Compute Node switchport mode access switchport access vlan 200 channel-group 10 mode active", "interface ethernet 1/12 mtu 9216 exit", "interface ethernet 1/12 lldp transmit lldp receive no lacp suspend-individual no lacp graceful-convergence interface ethernet 1/13 lldp transmit lldp receive no lacp suspend-individual no lacp graceful-convergence", "auto bridge iface bridge bridge-vlan-aware yes bridge-ports glob swp1-2 bridge-vids 100 200", "auto bridge iface bridge bridge-vlan-aware yes bridge-ports glob swp1-2 bridge-vids 100 200 auto swp1 iface swp1 bridge-access 100 auto swp2 iface swp2 bridge-access 200", "auto swp1 iface swp1 mtu 9000", "cumulus@switchUSD netshow lldp Local Port Speed Mode Remote Port Remote Host Summary ---------- --- --------- ----- ----- ----------- -------- eth0 10G Mgmt ==== swp6 mgmt-sw IP: 10.0.1.11/24 swp51 10G Interface/L3 ==== swp1 spine01 IP: 10.0.0.11/32 swp52 10G Interface/L ==== swp1 spine02 IP: 10.0.0.11/32", "#create vlan DATA tag 110 #create vlan MNGT tag 111 #configure vlan DATA add ports 24 tagged #configure vlan MNGT add ports 24 tagged", "create vlan VLANNAME tag NUMBER configure vlan Default delete ports PORTSTRING configure vlan VLANNAME add ports PORTSTRING untagged", "#create vlan DATA tag 110 #configure vlan Default delete ports 10 #configure vlan DATA add ports 10 untagged", "- type: linux_bond name: bond1 mtu: 9000 bonding_options:{get_param: BondInterfaceOvsOptions}; members: - type: interface name: nic3 mtu: 9000 primary: true - type: interface name: nic4 mtu: 9000", "BondInterfaceOvsOptions: \"mode=802.3ad\"", "enable sharing MASTERPORT grouping ALL_LAG_PORTS lacp configure vlan VLANNAME add ports PORTSTRING tagged", "#enable sharing 11 grouping 11,12 lacp #configure vlan DATA add port 11 untagged", "enable jumbo-frame ports PORTSTRING configure ip-mtu 9000 vlan VLANNAME", "enable jumbo-frame ports 11 configure ip-mtu 9000 vlan DATA", "enable lldp ports 11", "ge-1/0/12 { description Trunk to Compute Node; unit 0 { family ethernet-switching { port-mode trunk; vlan { members [110 111]; } native-vlan-id 2; } } }", "ge-1/0/13 { description Access port for Compute Node unit 0 { family ethernet-switching { port-mode access; vlan { members 200; } native-vlan-id 2; } } }", "- type: linux_bond name: bond1 mtu: 9000 bonding_options:{get_param: BondInterfaceOvsOptions}; members: - type: interface name: nic3 mtu: 9000 primary: true - type: interface name: nic4 mtu: 9000", 
"BondInterfaceOvsOptions: \"mode=802.3ad\"", "chassis { aggregated-devices { ethernet { device-count 1; } } }", "interfaces { ge-1/0/12 { gigether-options { 802.3ad ae1; } } ge-1/0/13 { gigether-options { 802.3ad ae1; } } }", "interfaces { ae1 { aggregated-ether-options { lacp { active; } } } }", "interfaces { ae1 { vlan-tagging; native-vlan-id 2; unit 100 { vlan-id 100; } } }", "> show lacp statistics interfaces ae1 Aggregated interface: ae1 LACP Statistics: LACP Rx LACP Tx Unknown Rx Illegal Rx ge-1/0/12 0 0 0 0 ge-1/0/13 0 0 0 0", "set interfaces ge-1/0/14 mtu 9216 set interfaces ge-1/0/15 mtu 9216", "set interfaces ae1 mtu 9216", "lldp { interface all{ enable; } } }", "lldp { interface ge-1/0/14{ enable; } } }" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/config-physical-switch-osp-network_rhosp-network
Chapter 3. Kafka broker configuration tuning
Chapter 3. Kafka broker configuration tuning Use configuration properties to optimize the performance of Kafka brokers. You can use standard Kafka broker configuration options, except for properties managed directly by Streams for Apache Kafka. 3.1. Basic broker configuration A typical broker configuration will include settings for properties related to topics, threads and logs. Basic broker configuration properties # ... num.partitions=1 default.replication.factor=3 offsets.topic.replication.factor=3 transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 log.retention.hours=168 log.segment.bytes=1073741824 log.retention.check.interval.ms=300000 num.network.threads=3 num.io.threads=8 num.recovery.threads.per.data.dir=1 socket.send.buffer.bytes=102400 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 group.initial.rebalance.delay.ms=0 zookeeper.connection.timeout.ms=6000 # ... 3.2. Replicating topics for high availability Basic topic properties set the default number of partitions and replication factor for topics, which will apply to topics that are created without these properties being explicitly set, including when topics are created automatically. # ... num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 replica.fetch.max.bytes=1048576 # ... For high availability environments, it is advisable to increase the replication factor to at least 3 for topics and set the minimum number of in-sync replicas required to 1 less than the replication factor. The auto.create.topics.enable property is enabled by default so that topics that do not already exist are created automatically when needed by producers and consumers. If you are using automatic topic creation, you can set the default number of partitions for topics using num.partitions . Generally, however, this property is disabled so that more control is provided over topics through explicit topic creation. For data durability , you should also set min.insync.replicas in your topic configuration and message delivery acknowledgments using acks=all in your producer configuration. Use replica.fetch.max.bytes to set the maximum size, in bytes, of messages fetched by each follower that replicates the leader partition. Change this value according to the average message size and throughput. When considering the total memory allocation required for read/write buffering, the memory available must also be able to accommodate the maximum replicated message size when multiplied by all followers. The delete.topic.enable property is enabled by default to allow topics to be deleted. In a production environment, you should disable this property to avoid accidental topic deletion, resulting in data loss. You can, however, temporarily enable it and delete topics and then disable it again. Note When running Streams for Apache Kafka on OpenShift, the Topic Operator can provide operator-style topic management. You can use the KafkaTopic resource to create topics. For topics created using the KafkaTopic resource, the replication factor is set using spec.replicas . If delete.topic.enable is enabled, you can also delete topics using the KafkaTopic resource. # ... auto.create.topics.enable=false delete.topic.enable=true # ... 3.3. Internal topic settings for transactions and commits If you are using transactions to enable atomic writes to partitions from producers, the state of the transactions is stored in the internal __transaction_state topic. 
By default, the brokers are configured with a replication factor of 3 and a minimum of 2 in-sync replicas for this topic, which means that a minimum of three brokers are required in your Kafka cluster. # ... transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 # ... Similarly, the internal __consumer_offsets topic, which stores consumer state, has default settings for the number of partitions and replication factor. # ... offsets.topic.num.partitions=50 offsets.topic.replication.factor=3 # ... Do not reduce these settings in production. You can increase the settings in a production environment. As an exception, you might want to reduce the settings in a single-broker test environment. 3.4. Improving request handling throughput by increasing I/O threads Network threads handle requests to the Kafka cluster, such as produce and fetch requests from client applications. Produce requests are placed in a request queue. Responses are placed in a response queue. The number of network threads per listener should reflect the replication factor and the levels of activity from client producers and consumers interacting with the Kafka cluster. If you are going to have a lot of requests, you can increase the number of threads, using the amount of time threads are idle to determine when to add more threads. To reduce congestion and regulate the request traffic, you can limit the number of requests allowed in the request queue. When the request queue is full, all incoming traffic is blocked. I/O threads pick up requests from the request queue to process them. Adding more threads can improve throughput, but the number of CPU cores and disk bandwidth imposes a practical upper limit. At a minimum, the number of I/O threads should equal the number of storage volumes. # ... num.network.threads=3 1 queued.max.requests=500 2 num.io.threads=8 3 num.recovery.threads.per.data.dir=4 4 # ... 1 The number of network threads for the Kafka cluster. 2 The number of requests allowed in the request queue. 3 The number of I/O threads for a Kafka broker. 4 The number of threads used for log loading at startup and flushing at shutdown. Try setting to a value of at least the number of cores. Configuration updates to the thread pools for all brokers might occur dynamically at the cluster level. These updates are restricted to between half the current size and twice the current size. Tip The following Kafka broker metrics can help with working out the number of threads required: kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent provides metrics on the average time network threads are idle as a percentage. kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent provides metrics on the average time I/O threads are idle as a percentage. If there is 0% idle time, all resources are in use, which means that adding more threads might be beneficial. When idle time goes below 30%, performance may start to suffer. If threads are slow or limited due to the number of disks, you can try increasing the size of the buffers for network requests to improve throughput: # ... replica.socket.receive.buffer.bytes=65536 # ... And also increase the maximum number of bytes Kafka can receive: # ... socket.request.max.bytes=104857600 # ... 3.5. Increasing bandwidth for high latency connections Kafka batches data to achieve reasonable throughput over high-latency connections from Kafka to clients, such as connections between datacenters. 
However, if high latency is a problem, you can increase the size of the buffers for sending and receiving messages. # ... socket.send.buffer.bytes=1048576 socket.receive.buffer.bytes=1048576 # ... You can estimate the optimal size of your buffers using a bandwidth-delay product calculation, which multiplies the maximum bandwidth of the link (in bytes/s) with the round-trip delay (in seconds) to give an estimate of how large a buffer is required to sustain maximum throughput. 3.6. Managing Kafka logs with delete and compact policies Kafka relies on logs to store message data. A log consists of a series of segments, where each segment is associated with offset-based and timestamp-based indexes. New messages are written to an active segment and are never subsequently modified. When serving fetch requests from consumers, the segments are read. Periodically, the active segment is rolled to become read-only, and a new active segment is created to replace it. There is only one active segment per topic-partition per broker. Older segments are retained until they become eligible for deletion. Configuration at the broker level determines the maximum size in bytes of a log segment and the time in milliseconds before an active segment is rolled: # ... log.segment.bytes=1073741824 log.roll.ms=604800000 # ... These settings can be overridden at the topic level using segment.bytes and segment.ms . The choice to lower or raise these values depends on the policy for segment deletion. A larger size means the active segment contains more messages and is rolled less often. Segments also become eligible for deletion less frequently. In Kafka, log cleanup policies determine how log data is managed. In most cases, you won't need to change the default configuration at the cluster level, which specifies the delete cleanup policy and enables the log cleaner used by the compact cleanup policy: # ... log.cleanup.policy=delete log.cleaner.enable=true # ... Delete cleanup policy Delete cleanup policy is the default cluster-wide policy for all topics. The policy is applied to topics that do not have a specific topic-level policy configured. Kafka removes older segments based on time-based or size-based log retention limits. Compact cleanup policy Compact cleanup policy is generally configured as a topic-level policy ( cleanup.policy=compact ). Kafka's log cleaner applies compaction on specific topics, retaining only the most recent value for a key in the topic. You can also configure topics to use both policies ( cleanup.policy=compact,delete ). Setting up retention limits for the delete policy Delete cleanup policy corresponds to managing logs with data retention. The policy is suitable when data does not need to be retained forever. You can establish time-based or size-based log retention and cleanup policies to keep logs bounded. When log retention policies are employed, non-active log segments are removed when retention limits are reached. Deletion of old segments helps to prevent exceeding disk capacity. For time-based log retention, you set a retention period based on hours, minutes, or milliseconds: # ... log.retention.ms=1680000 # ... The retention period is based on the time messages were appended to the segment. Kafka uses the timestamp of the latest message within a segment to determine if that segment has expired or not. The milliseconds configuration has priority over minutes, which has priority over hours. 
The minutes and milliseconds configurations are null by default, but the three options provide a substantial level of control over the data you wish to retain. Preference should be given to the milliseconds configuration, as it is the only one of the three properties that is dynamically updateable. If log.retention.ms is set to -1, no time limit is applied to log retention, and all logs are retained. However, this setting is not generally recommended as it can lead to issues with full disks that are difficult to rectify. For size-based log retention, you specify a minimum log size (in bytes): # ... log.retention.bytes=1073741824 # ... This means that Kafka will ensure there is always at least the specified amount of log data available. For example, if you set log.retention.bytes to 1000 and log.segment.bytes to 300, Kafka will keep 4 segments plus the active segment, ensuring a minimum of 1000 bytes are available. When the active segment becomes full and a new segment is created, the oldest segment is deleted. At this point, the size on disk may exceed the specified 1000 bytes, potentially ranging between 1200 and 1500 bytes (excluding index files). A potential issue with using a log size is that it does not take into account the time messages were appended to a segment. You can use time-based and size-based log retention for your cleanup policy to get the balance you need. Whichever threshold is reached first triggers the cleanup. To add a time delay before a segment file is deleted from the system, you can use log.segment.delete.delay.ms at the broker level for all topics: # ... log.segment.delete.delay.ms=60000 # ... Or configure file.delete.delay.ms at the topic level. You set the frequency at which the log is checked for cleanup in milliseconds: # ... log.retention.check.interval.ms=300000 # ... Adjust the log retention check interval in relation to the log retention settings. Smaller retention sizes might require more frequent checks. The frequency of cleanup should be often enough to manage the disk space but not so often it affects performance on a broker. Retaining the most recent messages using compact policy When you enable log compaction for a topic by setting cleanup.policy=compact , Kafka uses the log cleaner as a background thread to perform the compaction. The compact policy guarantees that the most recent message for each message key is retained, effectively cleaning up older versions of records. The policy is suitable when message values are changeable, and you want to retain the latest update. If a cleanup policy is set for log compaction, the head of the log operates as a standard Kafka log, with writes for new messages appended in order. In the tail of a compacted log, where the log cleaner operates, records are deleted if another record with the same key occurs later in the log. Messages with null values are also deleted. To use compaction, you must have keys to identify related messages because Kafka guarantees that the latest messages for each key will be retained, but it does not guarantee that the whole compacted log will not contain duplicates. Figure 3.1. Log showing key value writes with offset positions before compaction Using keys to identify messages, Kafka compaction keeps the latest message (with the highest offset) that is present in the log tail for a specific message key, eventually discarding earlier messages that have the same key. 
The message in its latest state is always available, and any out-of-date records of that particular message are eventually removed when the log cleaner runs. You can restore a message back to a previous state. Records retain their original offsets even when surrounding records get deleted. Consequently, the tail can have non-contiguous offsets. When consuming an offset that's no longer available in the tail, the record with the next higher offset is found. Figure 3.2. Log after compaction If appropriate, you can add a delay to the compaction process: # ... log.cleaner.delete.retention.ms=86400000 # ... The deleted data retention period gives consumers time to notice that the data is gone before it is irretrievably deleted. To delete all messages related to a specific key, a producer can send a tombstone message. A tombstone has a null value and acts as a marker to inform consumers that the corresponding message for that key has been deleted. After some time, only the tombstone marker is retained. Assuming new messages continue to come in, the marker is retained for a duration specified by log.cleaner.delete.retention.ms to allow consumers enough time to recognize the deletion. You can also set a time in milliseconds to put the cleaner on standby if there are no logs to clean: # ... log.cleaner.backoff.ms=15000 # ... Using combined compact and delete policies If you choose only a compact policy, your log can still become arbitrarily large. In such cases, you can set the cleanup policy for a topic to compact and delete logs. Kafka applies log compaction, removing older versions of records and retaining only the latest version of each key. Kafka also deletes records based on the specified time-based or size-based log retention settings. For example, in the following diagram only the latest message (with the highest offset) for a specific message key is retained up to the compaction point. If there are any records remaining up to the retention point, they are deleted. In this case, the compaction process would remove all duplicates. Figure 3.3. Log retention point and compaction point 3.7. Managing efficient disk utilization for compaction When employing the compact policy and log cleaner to handle topic logs in Kafka, consider optimizing memory allocation. You can fine-tune memory allocation using the deduplication property ( dedupe.buffer.size ), which determines the total memory allocated for cleanup tasks across all log cleaner threads. Additionally, you can establish a maximum memory usage limit by defining a percentage through the buffer.load.factor property. # ... log.cleaner.dedupe.buffer.size=134217728 log.cleaner.io.buffer.load.factor=0.9 # ... Each log entry uses exactly 24 bytes, so you can work out how many log entries the buffer can handle in a single run and adjust the setting accordingly. If you are looking to reduce the log cleaning time, consider increasing the number of log cleaner threads: # ... log.cleaner.threads=8 # ... If you are experiencing issues with 100% disk bandwidth usage, you can throttle the log cleaner I/O so that the sum of the read/write operations is less than a specified double value based on the capabilities of the disks performing the operations: # ... log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 # ... 3.8. Controlling the log flush of message data Generally, the recommendation is to not set explicit flush thresholds and let the operating system perform background flush using its default settings.
Partition replication provides greater data durability than writes to any single disk, as a failed broker can recover from its in-sync replicas. Log flush properties control the periodic writes of cached message data to disk. The scheduler specifies the frequency of checks on the log cache in milliseconds: # ... log.flush.scheduler.interval.ms=2000 # ... You can control the frequency of the flush based on the maximum amount of time that a message is kept in-memory and the maximum number of messages in the log before writing to disk: # ... log.flush.interval.ms=50000 log.flush.interval.messages=100000 # ... The wait between flushes includes the time to make the check and the specified interval before the flush is carried out. Increasing the frequency of flushes can affect throughput. If you are using application flush management, setting lower flush thresholds might be appropriate if you are using faster disks. 3.9. Partition rebalancing for availability Partitions can be replicated across brokers for fault tolerance. For a given partition, one broker is elected leader and handles all produce requests (writes to the log). Partition followers on other brokers replicate the partition data of the partition leader for data reliability in the event of the leader failing. Followers do not normally serve clients, though rack configuration allows a consumer to consume messages from the closest replica when a Kafka cluster spans multiple datacenters. Followers operate only to replicate messages from the partition leader and allow recovery should the leader fail. Recovery requires an in-sync follower. Followers stay in sync by sending fetch requests to the leader, which returns messages to the follower in order. The follower is considered to be in sync if it has caught up with the most recently committed message on the leader. The leader checks this by looking at the last offset requested by the follower. An out-of-sync follower is usually not eligible as a leader should the current leader fail, unless unclean leader election is allowed . You can adjust the lag time before a follower is considered out of sync: # ... replica.lag.time.max.ms=30000 # ... Lag time puts an upper limit on the time to replicate a message to all in-sync replicas and how long a producer has to wait for an acknowledgment. If a follower fails to make a fetch request and catch up with the latest message within the specified lag time, it is removed from in-sync replicas. You can reduce the lag time to detect failed replicas sooner, but by doing so you might increase the number of followers that fall out of sync needlessly. The right lag time value depends on both network latency and broker disk bandwidth. When a leader partition is no longer available, one of the in-sync replicas is chosen as the new leader. The first broker in a partition's list of replicas is known as the preferred leader. By default, Kafka is enabled for automatic partition leader rebalancing based on a periodic check of leader distribution. That is, Kafka checks to see if the preferred leader is the current leader. A rebalance ensures that leaders are evenly distributed across brokers and brokers are not overloaded. You can use Cruise Control for Streams for Apache Kafka to figure out replica assignments to brokers that balance load evenly across the cluster. Its calculation takes into account the differing load experienced by leaders and followers. 
A failed leader affects the balance of a Kafka cluster because the remaining brokers get the extra work of leading additional partitions. For the assignment found by Cruise Control to actually be balanced, it is necessary that partitions are led by the preferred leader. Kafka can automatically ensure that the preferred leader is being used (where possible), changing the current leader if necessary. This ensures that the cluster remains in the balanced state found by Cruise Control. You can control the frequency, in seconds, of the rebalance check and the maximum percentage of imbalance allowed for a broker before a rebalance is triggered. #... auto.leader.rebalance.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 #... The percentage leader imbalance for a broker is the ratio between the current number of partitions for which the broker is the current leader and the number of partitions for which it is the preferred leader. You can set the percentage to zero to ensure that preferred leaders are always elected, assuming they are in sync. If you need more control over the rebalance checks, you can disable automated rebalancing. You can then choose when to trigger a rebalance using the kafka-leader-election.sh command line tool. Note The Grafana dashboards provided with Streams for Apache Kafka show metrics for under-replicated partitions and partitions that do not have an active leader. 3.10. Unclean leader election Leader election to an in-sync replica is considered clean because it guarantees no loss of data. And this is what happens by default. But what if there is no in-sync replica to take on leadership? Perhaps the ISR (in-sync replica) only contained the leader when the leader's disk died. If a minimum number of in-sync replicas is not set, and there are no followers in sync with the partition leader when its hard drive fails irrevocably, data is already lost. Not only that, but a new leader cannot be elected because there are no in-sync followers. You can configure how Kafka handles leader failure: # ... unclean.leader.election.enable=false # ... Unclean leader election is disabled by default, which means that out-of-sync replicas cannot become leaders. With clean leader election, if no other broker was in the ISR when the old leader was lost, Kafka waits until that leader is back online before messages can be written or read. Unclean leader election means out-of-sync replicas can become leaders, but you risk losing messages. The choice you make depends on whether your requirements favor availability or durability. You can override the default configuration for specific topics at the topic level. If you cannot afford the risk of data loss, then leave the default configuration. 3.11. Avoiding unnecessary consumer group rebalances For consumers joining a new consumer group, you can add a delay so that unnecessary rebalances to the broker are avoided: # ... group.initial.rebalance.delay.ms=3000 # ... The delay is the amount of time that the coordinator waits for members to join. The longer the delay, the more likely it is that all the members will join in time and avoid a rebalance. But the delay also prevents the group from consuming until the period has ended.
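To confirm which of the broker properties described in this chapter are in effect, you can describe a broker's configuration from the command line. The following sketch assumes a broker with ID 0 that is reachable at localhost:9092; adjust the bootstrap address and broker ID for your cluster:

bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-name 0 \
  --describe --all

Properties that support dynamic updates, such as the thread pool sizes mentioned earlier, can also be changed with the same tool by using the --alter and --add-config options.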
[ "num.partitions=1 default.replication.factor=3 offsets.topic.replication.factor=3 transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 log.retention.hours=168 log.segment.bytes=1073741824 log.retention.check.interval.ms=300000 num.network.threads=3 num.io.threads=8 num.recovery.threads.per.data.dir=1 socket.send.buffer.bytes=102400 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 group.initial.rebalance.delay.ms=0 zookeeper.connection.timeout.ms=6000", "num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 replica.fetch.max.bytes=1048576", "auto.create.topics.enable=false delete.topic.enable=true", "transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2", "offsets.topic.num.partitions=50 offsets.topic.replication.factor=3", "num.network.threads=3 1 queued.max.requests=500 2 num.io.threads=8 3 num.recovery.threads.per.data.dir=4 4", "replica.socket.receive.buffer.bytes=65536", "socket.request.max.bytes=104857600", "socket.send.buffer.bytes=1048576 socket.receive.buffer.bytes=1048576", "log.segment.bytes=1073741824 log.roll.ms=604800000", "log.cleanup.policy=delete log.cleaner.enable=true", "log.retention.ms=1680000", "log.retention.bytes=1073741824", "log.segment.delete.delay.ms=60000", "log.retention.check.interval.ms=300000", "log.cleaner.delete.retention.ms=86400000", "log.cleaner.backoff.ms=15000", "log.cleaner.dedupe.buffer.size=134217728 log.cleaner.io.buffer.load.factor=0.9", "log.cleaner.threads=8", "log.cleaner.io.max.bytes.per.second=1.7976931348623157E308", "log.flush.scheduler.interval.ms=2000", "log.flush.interval.ms=50000 log.flush.interval.messages=100000", "replica.lag.time.max.ms=30000", "# auto.leader.rebalance.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 #", "unclean.leader.election.enable=false", "group.initial.rebalance.delay.ms=3000" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_tuning/con-broker-config-properties-str
4.265. Red Hat Enterprise Linux Release Notes
4.265. Red Hat Enterprise Linux Release Notes 4.265.1. RHEA-2011:1773 - Red Hat Enterprise Linux 6.2 Release Notes Updated packages containing the Release Notes for Red Hat Enterprise Linux 6.2 are now available. Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security and bug fix errata. The Red Hat Enterprise Linux 6.2 Release Notes documents the major changes made to the Red Hat Enterprise Linux 6 operating system and its accompanying applications for this minor release. Detailed notes on all changes in this minor release are available in the Technical Notes. Refer to the Online Release Notes for the most up-to-date version of the Red Hat Enterprise Linux 6.2 Release Notes: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.2_Release_Notes/index.html 4.265.2. RHEA-2011:1543 - Red Hat Enterprise Linux 6.2 Release Notes Updated packages containing the Release Notes for Red Hat Enterprise Linux 6.2 are now available. Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security and bug fix errata. The Red Hat Enterprise Linux 6.2 Release Notes documents the major changes made to the Red Hat Enterprise Linux 6 operating system and its accompanying applications for this minor release. Detailed notes on all changes in this minor release are available in the Technical Notes. Refer to the Online Release Notes for the most up-to-date version of the Red Hat Enterprise Linux 6.2 Release Notes: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.2_Release_Notes/index.html
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/release_notes
Specialized hardware and driver enablement
Specialized hardware and driver enablement OpenShift Container Platform 4.18 Learn about hardware enablement on OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/specialized_hardware_and_driver_enablement/index
Preface
Preface Red Hat Enterprise Linux (RHEL) minor releases are an aggregation of individual security, enhancement, and bug fix errata. The Red Hat Enterprise Linux 7.7 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release, as well as known problems and a complete list of all currently available Technology Previews.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.7_release_notes/preface
Getting started with Red Hat Decision Manager
Getting started with Red Hat Decision Manager Red Hat Decision Manager 7.13
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_decision_manager/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/replacing_devices/making-open-source-more-inclusive
16.2. Types
16.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. By default, mounted NFS volumes on the client side are labeled with a default context defined by policy for NFS. In common policies, this default context uses the nfs_t type. The root user is able to override the default type using the mount -context option. The following types are used with NFS. Different types allow you to configure flexible access: var_lib_nfs_t This type is used for existing and new files copied to or created in the /var/lib/nfs/ directory. This type should not need to be changed in normal operation. To restore changes to the default settings, run the restorecon -R -v /var/lib/nfs command as the root user. nfsd_exec_t The /usr/sbin/rpc.nfsd file is labeled with the nfsd_exec_t , as are other system executables and libraries related to NFS. Users should not label any files with this type. nfsd_exec_t will transition to nfsd_t .
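As a sketch of the context override mentioned above, the following mount command uses hypothetical server and mount point names; substitute the type in the context value with whichever type the domains accessing the files require:

# mount -o context="system_u:object_r:nfs_t:s0" server:/export /mnt/nfs

Because the context is applied at mount time, every file on the mounted volume is labeled with that same context.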
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-nfs-types
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/developing_c_and_cpp_applications_in_rhel_8/proc_providing-feedback-on-red-hat-documentation_developing-applications
Implementing security automation
Implementing security automation Red Hat Ansible Automation Platform 2.5 Identify and manage security events using Ansible Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/implementing_security_automation/index
1.3. keepalived and haproxy
1.3. keepalived and haproxy Administrators can use both Keepalived and HAProxy together for a more robust and scalable high availability environment. Using the speed and scalability of HAProxy to perform load balancing for HTTP and other TCP-based services in conjunction with Keepalived failover services, administrators can increase availability by distributing load across real servers as well as ensuring continuity in the event of router unavailability by performing failover to backup routers.
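As an illustrative sketch only, with hypothetical addresses and server names, the division of labor can look like this: Keepalived holds a virtual IP address on the active router, while HAProxy on the same host listens for client traffic and distributes it across the real servers.

# /etc/keepalived/keepalived.conf on the active router
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.100
    }
}

# /etc/haproxy/haproxy.cfg
frontend web
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin
    server app1 192.0.2.11:80 check
    server app2 192.0.2.12:80 check

A backup router runs the same configuration with state BACKUP and a lower priority, so it takes over the virtual IP address if the active router becomes unavailable.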
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s2-lvs-keepalived-haproxy-VSA
Chapter 149. KafkaNodePoolTemplate schema reference
Chapter 149. KafkaNodePoolTemplate schema reference Used in: KafkaNodePoolSpec Property Property type Description podSet ResourceTemplate Template for Kafka StrimziPodSet resource. pod PodTemplate Template for Kafka Pods . perPodService ResourceTemplate Template for Kafka per-pod Services used for access from outside of OpenShift. perPodRoute ResourceTemplate Template for Kafka per-pod Routes used for access from outside of OpenShift. perPodIngress ResourceTemplate Template for Kafka per-pod Ingress used for access from outside of OpenShift. persistentVolumeClaim ResourceTemplate Template for all Kafka PersistentVolumeClaims . kafkaContainer ContainerTemplate Template for the Kafka broker container. initContainer ContainerTemplate Template for the Kafka init container.
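As a minimal sketch of where these template properties sit, the following hypothetical KafkaNodePool resource (the pool name, cluster label, and values are placeholders) adds a custom label to the broker pods and an environment variable to the Kafka broker container:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: persistent-claim
    size: 100Gi
  template:
    pod:
      metadata:
        labels:
          app.kubernetes.io/part-of: my-kafka
    kafkaContainer:
      env:
        - name: EXAMPLE_ENV
          value: "example-value"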
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkanodepooltemplate-reference
Chapter 2. Planning the directory data
Chapter 2. Planning the directory data The directory data can contain user names, email addresses, telephone numbers, user groups, and other information. The types of data you want to store in the directory determines the directory structure, access given to the data, and how this access is requested and granted. 2.1. Introduction to directory data The suitable data for a directory has the following characteristics: The data is read more often than written. The data is expressible in attribute-value format (for example, surname=jensen ). The data is useful for not only one person or a group. For example, several people and applications can use an employee name or a printer location. The data is accessed from more than one physical location. For example, preference settings of an employee for a software application are not good for the directory because only a single instance of the application needs access to the information. However, if the application can read the preference settings from the directory and users want to use the application according to their preferences from different sites, then including such settings in the directory is useful. 2.1.1. Information to include in the directory You can add to an entry useful information about a person or asset as an attribute. For example: Contact information, such as telephone numbers, physical addresses, and email addresses. Descriptive information, such as an employee number, job title, manager or administrator identification, and job-related interests. Organization contact information, such as a telephone number, physical address, administrator identification, and business description. Device information, such as a printer physical location, a printer type, and the number of pages per minute that the printer can produce. Contact and billing information for a corporation trading partners, clients, and customers. Contract information, such as the customer name, due dates, job description, and pricing information. Individual software preferences or software configuration information. Resource sites, such as pointers to web servers or the file system of a certain file or application. Using the Directory Server for purposes beyond server administration requires planning what other types of information to store in the directory. For example, you may include the following information types: Contract or client account details Payroll data Physical device information Home contact information Office contact information of different sites within the enterprise 2.1.2. Information to exclude from the directory Red Hat Directory Server manages well large data volumes that client applications read and occasionally update, but Directory Server is not designed for handling large, unstructured objects, such as images or other media. You should maintain these objects in a file system. However, the directory can store pointers to these types of applications by using FTP, HTTPs, and other URL types. 2.2. Defining directory needs When designing the directory data, you can think not only of the data that is currently required but also how the directory (and organization) is likely to change over time. Considering the future needs of the directory during the design process influences how the data in the directory is structured and distributed. Consider the following points: What do you want to have in the directory today? What immediate problem you want to solve by deploying a directory? What immediate needs of the directory-enabled application that you use? 
What do you want to add to the directory in the near future? For example, an enterprise uses an accounting package that does not currently support LDAP, however this accounting package will be LDAP-enabled in a few months. Identify the data used by LDAP-compatible applications, and plan for the migration of the data into the directory as the technology becomes available. What information you want to store in the directory in the future? For example, a hosting company can have future customers with different data requirements than the current customers, such as storing images or media files. Planning this way helps you to identify data sources that you have not even considered. 2.3. Performing a site survey A site survey is a formal method for discovering and characterizing the directory contents. Plan more time for performing the survey, because preparation is crucial for the directory architecture. The site survey consists of the following tasks: Identify the applications that use the directory. Determine the directory-enabled applications you deploy across the enterprise and their data needs. Identify data sources. Survey the enterprise and identify data sources, including Active Directory, other LDAP servers, PBX systems, human resources databases, and email systems. Characterize the data the directory needs to contain. Determine what objects should be in the directory (for example, people or groups) and what attributes of these objects to maintain in the directory (such as usernames and passwords). Determine the level of service to provide. Decide the availability of directory data for client applications, and design the architecture accordingly. The directory availability influences how you configure data replication and chaining policies to connect data stored on remote servers. Identify a data supplier. A data supplier contains the primary source for directory data. You may mirror this data to other servers for load balancing and recovery purposes. Determine the data supplier for each piece of data. Determine data ownership. For every piece of data, determine the person responsible for the data update. Determine data access. When importing data from other sources, develop a strategy for both bulk imports and incremental updates. As a part of this strategy, try to manage data in a single place, and restrict the number of applications that can change the data. Also, limit the number of people who write to any given piece of data. Smaller groups ensure data integrity while reducing the administrative overhead. Document the site survey. If the directory affects several organizations by the directory, consider creating a directory deployment team that includes representatives from each affected organization to conduct the site survey. Corporations generally have a human resources department, an accounting or accounts receivable department, manufacturing organizations, sales organizations, and development organizations. Including representatives from each of these organizations can help to perform the survey process and migrate from local data stores to a centralized directory. 2.3.1. Identifying the applications that use the directory The applications that access the directory and the data needs of these applications guide the planning of the directory contents. The various common applications using the directory include: Directory browser applications, such as online telephone books . Decide what information users need, and include it in the directory. 
Email applications, especially email servers . All email servers require some routing information to be available in the directory. However, some can require more advanced information, such as the place on disk where a user mailbox is stored, vacation notification details, and protocol information, for example, IMAP versus POP. Directory-enabled human resources applications . These require additional personal information such as government identification numbers, home addresses, home telephone numbers, birth dates, salary, and job title. Microsoft Active Directory . Through Windows User Sync, Windows directory services can be integrated to function together with Directory Server. Both directories can store user information and group information. Configure the Directory Server deployment after the existing Windows server deployment so that users, groups, and other directory data can synchronize. When assessing the applications that will use the directory, consider the types of information each application uses. The following table gives an example of applications and the information that the application uses: Table 2.1. Example Application Data Needs Application Class of data Data Phonebook People Name, email address, phone number, user ID, password, department number, manager, mail stop Web server People, groups User ID, password, group name, group members, group owner Calendar server People, meeting rooms Name, user ID, cube number, conference room name When you identify the applications and information that each application uses, you will understand which types of data are used by more than one application. This step in planning can prevent data redundancy in the directory, and show clearly what data directory-dependent applications require. The following factors affect the final decision about the types of data maintained in the directory and when you migrate the information to the directory: The data required by various legacy applications and users The ability of legacy applications to communicate with an LDAP directory 2.3.2. Identifying data sources To determine all the data to include in the directory, perform a survey of the existing data stores. The survey should include the following: Identify organizations that provide information. Locate all the organizations managing crucial information, such as the information services, human resources, payroll, and accounting departments. Identify the tools and processes that are information sources. Common sources for information include networking operating systems (such as Windows, Novell Netware, UNIX NIS), email systems, security systems, PBX (telephone switching) systems, and human resources applications. Determine how centralizing each piece of data affects the management of data. Centralized data management can require new tools and new processes. In some cases, centralization might require staffing and unstaffing in organizations. During the survey, develop a matrix that identifies all the information sources in the enterprise as in the table below: Table 2.2. Information sources example Data Source Class of Data Data Human resources database People Name, address, phone number, department number, manager Email system People, Groups Name, email address, user ID, password, email preferences Facilities system Facilities Building names, floor names, cube numbers, access codes 2.3.3. 
Characterizing the directory data Characterize the data you want to include in the directory in the following ways: Format Size Number of occurrences in various applications Data owner Relationship to other directory data Find common characteristics in the data you want to include in the directory. This helps save time during the schema design stage described Designing the directory schema . Consider the table below that characterizes the directory data: Table 2.3. Directory data characteristics Data Format Size Owner Related to Employee Name Text string 128 characters Human resources User entry Fax number Phone number 14 digits Facilities User entry Email address Text Many characters IS department User entry 2.3.4. Determining level of service The service level you provide depends on the expectations of the people who rely on directory-enabled applications. To determine the service level that each application requires, determine how and when the application is used. As the directory evolves, the directory may need to support various service levels, from production to mission-critical level. Raising the service level after the directory deployment is difficult, so ensure the initial design meets the future needs. For example, to eliminate the risk of total failure, use a multi-supplier configuration, where several suppliers handles the same data. 2.3.5. Considering a data supplier A data supplier is a server that supplies the data. Storing the same information in multiple locations degrades the data integrity. A data supplier ensures that all information stored in multiple locations is consistent and accurate. The following scenarios require a data supplier: Replication between Directory Servers Synchronization between Directory Server and Active Directory Independent client applications which access the Directory Server data With multi-supplier replication, Directory Server can contain the main copy of information on multiple servers. Multiple suppliers keep changelogs and safely resolve conflicts. You can configure a limited number of supplier servers that can accept changes and replicate the data to replica or consumer servers [1] . Several data supplier servers provide safe failover if a server goes off-line. See TBA[Designing the replication process] for more information about multi-supplier replication. Using synchronization, you can integrate Directory Server users, groups, attributes, and passwords with Microsoft Active Directory users, groups, attributes, and passwords. If you have two directory services, decide whether they will manage the same information, what amount of that information will be shared, and which service will supply data. Preferably, select one application to manage the data and let the synchronization process to add, update, or delete the entries on the other service. Consider the supplier source of the data if you use applications that communicate indirectly with the directory. Keep the data changing processes as simple as possible. After deciding on the place for managing a piece of data, use the same place to manage all of the other data contained there. A single place simplifies troubleshooting when databases lose synchronization across the enterprise. You can implement the following ways to supply data supplying: Managing the data in both the directory and all applications that do not use the directory. Maintaining multiple data suppliers does not require custom scripts for transfering data. 
In this case, someone must change data on all the other sites to prevent data desynchronization across the enterprise, however this goes against the directory purpose. Managing the data in a non-directory application, and writing scripts, programs, or gateways to import that data into the directory. Managing data in non-directory applications is the most ideal when you already use applications to manage data. Also, you will use the directory only for lookups, for example, for online corporate telephone books. How you maintain the main copies of data depends on the specific directory needs. However, always keep the maintenance simple and consistent. For example, do not attempt to manage data in multiple places and then automatically exchange data between competing applications. Doing so leads to an update loss and increases the administrative overhead. For example, the directory manages an employee home telephone number that is stored in both the LDAP directory and a human resources database. The human resources application is LDAP-enabled and can automatically transfer data from the LDAP directory to the human resources database, and vice versa. If you try to manage changes to that employee telephone number in both the LDAP directory and the human resources database then the last place where the telephone number was changed overwrites the information in the other database. This is only acceptable if the last application that wrote the data had the correct information. If that information is outdated (for example, because the human resources data were restored from a backup), then the correct telephone number in the LDAP directory will be deleted. 2.3.6. Determining data ownership Data ownership refers to the person or organization responsible for making sure the data is up-to-date. During the data design phase, decide who can write data to the directory. Here are some common strategies for deciding data ownership: Allow read-only access to the directory for everyone except a small group of directory content managers. Allow individual users to manage their strategic subset of information, such as their passwords, their role within the organization, their automobile license plate number, and contact information such as telephone numbers or office numbers, descriptive information of themselves. Allow a person manager to write a strategic subset of that person information, such as contact information or job title. Allow an organization administrator to create and manage entries for that organization, enabling them to function as the directory content managers. Create roles that give groups of people read or write access privileges. You can create roles for human resources, finance, or accounting. Allow each of these roles to have read access, write access, or both to the data that the group require. This could include salary information, government identification numbers, and home phone numbers and address. Multiple individuals might require write access to the same information. For example, an information systems or directory management group may require write access to employee passwords. Also employees require the write access to their own passwords. While multiple people can have access to the same information, try to keep this group small and identifiable to ensure data integrity. Additional resources Grouping directory entries Designing a secure directory 2.3.7. Determining data access After determining data ownership, decide who gets access to read each piece of data. 
For example, employees home phone numbers can be stored in the directory. This data may be useful for a number of users, including the employee manager and human resources department. Employees should be able to read this information for verification purposes. However, home contact information can be considered sensitive. Consider the following for every information stored in the directory: Can someone read the data anonymously? The LDAP protocol supports anonymous access and allows easy lookups for information. However, due to this anonymity, where anyone can access the directory, use this feature wisely. Can someone read the data widely across the enterprise? You can set access control the way that a client must log in to (or bind to) the directory to read specific information. Unlike anonymous access, this type of access control ensures that only members of the organization have access to directory information. In addition, the Directory Server access log contains a record about who accessed the information. For more information about access controls, see Designing access control . Is there an identifiable group of people or applications that must access the data? Anyone who has write privileges to the data also needs read access (with the exception of write access to passwords). The directory can also contain data specific to a particular organization or project group. Identifying these access needs helps determine what groups, roles, and access controls the directory needs. For information about groups and roles, see Designing the directory tree . For information about access controls, see Designing access control . Making these decisions for each piece of directory data defines a security policy for the directory. These decisions depend upon the nature of the site and the security already available at the site. For example, having a firewall or no direct access to the Internet means it is safer to support anonymous access than if the directory is placed directly on the Internet. Additionally, some information may only need access controls and authentication measures to restrict access adequately. Other sensitive information may need to be encrypted within the database as it is stored. Data protection laws in most countries govern how enterprises maintain and access personal information. For example, the laws may prohibit anonymous access to information or require users to have the ability to view and edit information in entries that represent them. Check with the organization legal department to ensure that the directory deployment complies with data protection laws in countries where the enterprise operates. The creation of a security policy and the way it is implemented is described in detail in Designing a secure directory . In replication, a consumer server , or replica server , receives updates from a supplier server or hub server. 2.4. Documenting the site survey Due to the complexity of data design, document the results of the site surveys. Every step of the site survey can use simple tables to track data. You can build a supplier table that outlines the decisions and outstanding concerns. Preferably, use a spreadsheet where you can easily sort and search the content. The table below identifies data ownership and data access for each piece of data identified by the site survey. Table 2.1. 
Example: Tabulating data ownership and access Data Name Owner Supplier Server/Application Self Read/Write Global Read HR Writable IS Writable Employee name HR PeopleSoft Read-only Yes (anonymous) Yes Yes User password IS Directory US-1 Read/Write No No Yes Home phone number HR PeopleSoft Read/Write No Yes No Employee location IS Directory US-1 Read-only Yes (must log in) No Yes Office phone number Facilities Phone switch Read-only Yes (anonymous) No No Each row in the table indicates the type of information being assessed, the departments that have an interest in it, and how to use and access the information. For example, on the first row, the employee names data have the following management considerations: Owner . Human Resources owns this information and therefore is responsible for its updates and changes. Supplier Server/Application . The PeopleSoft application manages employee name information. Self Read/Write . One can read their own name but not write (or change) it. Global Read . Employee names can be read anonymously by everyone who has access to the directory. HR Writable . Human resources group members can change, add, and delete employee names in the directory. IS Writable . Information services (IS) group members can change, add, and delete employee names in the directory. 2.5. Repeating the site survey You might need more than one site survey, particularly if an enterprise has offices in multiple cities or countries. The informational needs might be so complex that several different organizations have to keep information at their local offices rather than at a single, centralized site. In this case, each office that keeps a main copy of information should perform its own site survey. After the completion of the site survey, the results of each survey should be returned to a central team (probably consisting of representatives from each office) for use in the design of the enterprise-wide data schema model and directory tree. [1] In replication, a consumer server , or replica server , receives updates from a supplier server or hub server.
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/planning_and_designing_directory_server/planning-the-directory-data_designing-rhds
Chapter 5. Using the API
Chapter 5. Using the API For more information, see the AMQ C++ API reference and AMQ C++ example suite . 5.1. Handling messaging events AMQ C++ is an asynchronous event-driven API. To define how the application handles events, the user implements callback methods on the messaging_handler class. These methods are then called as network activity or timers trigger new events. Example: Handling messaging events struct example_handler : public proton::messaging_handler { void on_container_start(proton::container& cont) override { std::cout << "The container has started\n"; } void on_sendable(proton::sender& snd) override { std::cout << "A message can be sent\n"; } void on_message(proton::delivery& dlv, proton::message& msg) override { std::cout << "A message is received\n"; } }; These are only a few common-case events. The full set is documented in the API reference . 5.2. Creating a container The container is the top-level API object. It is the entry point for creating connections, and it is responsible for running the main event loop. It is often constructed with a global event handler. Example: Creating a container int main() { example_handler handler {}; proton::container cont {handler}; cont.run(); } 5.3. Setting the container identity Each container instance has a unique identity called the container ID. When AMQ C++ makes a connection, it sends the container ID to the remote peer. To set the container ID, pass it to the proton::container constructor. Example: Setting the container identity proton::container cont {handler, "job-processor-3" }; If the user does not set the ID, the library will generate a UUID when the container is constructed.
[ "struct example_handler : public proton::messaging_handler { void on_container_start(proton::container& cont) override { std::cout << \"The container has started\\n\"; } void on_sendable(proton::sender& snd) override { std::cout << \"A message can be sent\\n\"; } void on_message(proton::delivery& dlv, proton::message& msg) override { std::cout << \"A message is received\\n\"; } };", "int main() { example_handler handler {}; proton::container cont {handler}; cont.run(); }", "proton::container cont {handler, \"job-processor-3\" };" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_cpp_client/using_the_api
21.14. Verifying Virtualization Extensions
21.14. Verifying Virtualization Extensions Use this section to determine whether your system has the hardware virtualization extensions. Virtualization extensions (Intel VT-x or AMD-V) are required for full virtualization. Run the following command to verify the CPU virtualization extensions are available: Analyze the output. The following output contains a vmx entry indicating an Intel processor with the Intel VT-x extension: The following output contains an svm entry indicating an AMD processor with the AMD-V extensions: If any output is received, the processor has the hardware virtualization extensions. However in some circumstances manufacturers disable the virtualization extensions in BIOS. The " flags: " output content may appear multiple times, once for each hyperthread, core or CPU on the system. The virtualization extensions may be disabled in the BIOS. If the extensions do not appear or full virtualization does not work refer to Procedure 21.3, "Enabling virtualization extensions in BIOS" . Ensure KVM subsystem is loaded As an additional check, verify that the kvm modules are loaded in the kernel: If the output includes kvm_intel or kvm_amd then the kvm hardware virtualization modules are loaded and your system meets requirements. Note If the libvirt package is installed, the virsh command can output a full list of virtualization system capabilities. Run virsh capabilities as root to receive the complete list.
[ "grep -E 'svm|vmx' /proc/cpuinfo", "flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm", "flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8legacy ts fid vid ttp tm stc", "lsmod | grep kvm" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-tips_and_tricks-verifying_virtualization_extensions
4.290. seekwatcher
4.290. seekwatcher 4.290.1. RHBA-2011:1114 - seekwatcher bug fix update An updated seekwatcher package that fixes one bug is now available for Red Hat Enterprise Linux 6. The seekwatcher package generates graphs from blktrace runs to help visualize I/O patterns and performance. It can plot multiple blktrace runs together, making it easy to compare the differences between different benchmark runs. Bug Fix BZ# 681703 Prior to this update, an obsolete "matplotlib" configuration directive in seekwatcher caused seekwatcher to emit a spurious warning when executed. This bug has been fixed in this update and no longer occurs. All users of seekwatcher should upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/seekwatcher
Chapter 1. Remote caches
Chapter 1. Remote caches Deploy multiple Data Grid Server instances to create remote cache clusters that give you a fault-tolerant and scalable data tier with high-speed access from Hot Rod and REST clients. 1.1. Remote cache tutorials To run these tutorials you need at least one locally running instance of Data Grid Server. Each tutorial will try to connect to a running server in localhost:11222 with admin/password credentials. However, if a Docker instance is found, and the server is not running, tutorials will spin up a local server with Testcontainers . You can download the distribution and run the following commands: USD ./bin/cli.sh user create admin -p "password" USD ./bin/server.sh Note Data Grid Server enables authentication and authorization by default. Creating a user named admin gives you administrative access to Data Grid Server. Building and running remote cache tutorials You can build and run remote cache tutorials directly in your IDE or from the command line as follows: USD ./mvnw -s /path/to/maven-settings.xml clean package exec:exec 1.2. Hot Rod Java client tutorials Data Grid requires Java 11 at a minimum. However, Hot Rod Java clients running in applications that require Java 8 can continue using older versions of client libraries. Tutorial link Description Remote cache use example The simplest code example that demonstrates how a remote distributed cache works. Per cache configuration Demonstrates how to configure caches dynamically when we connect to the Data Grid Server. Near caches Demonstrates how configure near caching to improve the read performance in remote caches. Cache Admin API Demonstrates how to use the Administration API to create caches and cache templates dynamically. Encoding Demonstrates how encoding of caches work. Client listeners Detect when data changes in a remote cache with Client Listeners. Query Demonstrates how to query remote cache values. Continuous query Demonstrates how to use Continuous Query and remote caches. Transactions Demonstrates how remote transactions work. Secured caches Demonstrates how to configure caches that have authorization enabled. TLS authorization Demonstrates how to connect to Data Grid Server with TLS authorization. Counters Demonstrates how remote counters work. Multimap Demonstrates how remote multimap works. Task execution Demonstrates how to register server tasks and how to execute them from the Hot Rod client. JUnit 5 and Testcontainers Demonstrates how to use the Data Grid and JUnit 5 extension. Persistence Demonstrates how to use the Data Grid and persistent caches. Redis Client Demonstrates how to use the Data Grid and Redis client to read and write using the Resp protocol. Reactive API Demonstrates how to use the Data Grid with the reactive API based on Mutiny. Data Grid documentation You can find more resources for Hot Rod Java clients in our documentation at: Hot Rod Java client guide Marshalling and Encoding Data Guide Querying Data Grid caches REST API Resp Protocol Smallrye Mutiny
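For reference, a minimal Hot Rod client configuration that matches the credentials used by these tutorials might look like the following hotrod-client.properties sketch; the values are assumptions based on the local server described above:

infinispan.client.hotrod.server_list=127.0.0.1:11222
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=admin
infinispan.client.hotrod.auth_password=password

Placing the file on the client application classpath lets a RemoteCacheManager created with its default constructor pick up these settings.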
[ "./bin/cli.sh user create admin -p \"password\" ./bin/server.sh", "./mvnw -s /path/to/maven-settings.xml clean package exec:exec" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_code_tutorials/remote-tutorials
Chapter 1. Red Hat OpenShift Pipelines release notes
Chapter 1. Red Hat OpenShift Pipelines release notes Red Hat OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project which provides: Standard Kubernetes-native pipeline definitions (CRDs). Serverless pipelines with no CI server management overhead. Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko. Portability across any Kubernetes distribution. Powerful CLI for interacting with pipelines. Integrated user experience with the Developer perspective of the OpenShift Container Platform web console. For an overview of Red Hat OpenShift Pipelines, see Understanding OpenShift Pipelines . 1.1. Compatibility and support matrix Some features in this release are currently in Technology Preview . These experimental features are not intended for production use. In the table, features are marked with the following statuses: TP Technology Preview GA General Availability Table 1.1. Compatibility and support matrix Red Hat OpenShift Pipelines Version Component Version OpenShift Version Support Status Operator Pipelines Triggers CLI Chains Hub Pipelines as Code Results Manual Approval Gate 1.15 0.59.x 0.27.x 0.37.x 0.20.x (GA) 1.17.x (TP) 0.27.x (GA) 0.10.x (TP) 0.2.x (TP) 4.14, 4.15, 4.16 GA 1.14 0.56.x 0.26.x 0.35.x 0.20.x (GA) 1.16.x (TP) 0.24.x (GA) 0.9.x (TP) NA 4.12, 4.13, 4.14, 4.15, 4.16 GA 1.13 0.53.x 0.25.x 0.33.x 0.19.x (GA) 1.15.x (TP) 0.22.x (GA) 0.8.x (TP) NA 4.12, 4.13, 4.14, 4.15 GA 1.12 0.50.x 0.25.x 0.32.x 0.17.x (GA) 1.14.x (TP) 0.21.x (GA) 0.8.x (TP) NA 4.12, 4.13, 4.14 GA 1.11 0.47.x 0.24.x 0.31.x 0.16.x (GA) 1.13.x (TP) 0.19.x (GA) 0.6.x (TP) NA 4.12, 4.13, 4.14 GA 1.10 0.44.x 0.23.x 0.30.x 0.15.x (TP) 1.12.x (TP) 0.17.x (GA) NA NA 4.10, 4.11, 4.12, 4.13 GA For questions and feedback, you can send an email to the product team at [email protected] . 1.2. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.3. Release notes for Red Hat OpenShift Pipelines 1.15 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.15 is available on OpenShift Container Platform 4.14 and later versions. 1.3.1. New features In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.15: 1.3.1.1. Pipelines With this update, when incorporating a step from another custom resource (CR) using a stepRef: section, you can use parameters in the same way that you use parameters in taskRef: and pipelineRef: sections. Example usage apiVersion: tekton.dev/v1 kind: Task metadata: name: test-task spec: steps: - name: fetch-repository stepRef: resolver: git params: - name: url value: https://github.com/tektoncd/catalog.git - name: revision value: main - name: pathInRepo value: stepaction/git-clone/0.1/git-clone params: - name: url value: USD(params.repo-url) - name: revision value: USD(params.tag-name) - name: output-path value: USD(workspaces.output.path) Before this update, when using a resolver to incorporate a task or pipeline from a remote source, if one of the parameters expected an array you had to specify the type of the parameter explicitly. 
With this update, when using a resolver to incorporate a task or pipeline from a remote source, you do not have to set the type of any parameters. With this update, when specifying the use of a workspace in a pipeline run or task run, you can use parameters and other variables in the specification in the secret , configMap , and projected.sources sections. Example usage apiVersion: tekton.dev/v1 kind: Task metadata: generateName: something- spec: params: - name: myWorkspaceSecret steps: - image: registry.redhat.io/ubi/ubi8-minimal:latest script: | echo "Hello World" workspaces: - name: myworkspace secret: secretName: USD(params.myWorkspaceSecret) By default, when OpenShift Pipelines fails to pull the container image that is required for the execution of a task, the task fails. With this release, you can configure an image pull backoff timeout. If you configure this timeout, when OpenShift Pipelines fails to pull the container image that is required for the execution of a task, it continues to attempt to pull the image for the specified time period. The task fails if OpenShift Pipelines is unable to pull the image within the specified period. Example specification apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: options: configMaps: config-defaults: data: default-imagepullbackoff-timeout: "5m" With this release, the YAML manifest of a completed pipeline run or task run includes a displayName field in the childReferences section. This field contains the display name of the pipeline run or task run, which can differ from the full name of the pipeline run or task run. With this update, the YAML manifest of every step in a completed TaskRun CR includes a new terminationReason field. This field contains the reason the step execution ended. OpenShift Pipelines uses the following values for the terminationReason field: Completed : The step completed successfully and any commands invoked in the step ended with exit code 0. Continued : There was an error during execution of the step, for example a command returned a non-zero exit code, but the step execution continued because the onError value was set to continue . See the log output for the details of the error. Error : There was an error during execution of the step, for example a command returned a non-zero exit code, and this error caused the step to fail. See the log output for the details of the error. TimeoutExceeded : The execution of the step timed out. See the log output for the details of the timeout. Skipped : The step was skipped because a step failed. TaskRunCancelled : The task run was cancelled. With this update, you can use the pipeline.disable-inline-spec spec in the TektonConfig CR to disable specifying pipelines and tasks inside PipelineRun CRs, specifying tasks inside Pipeline CRs, or specifying tasks inside TaskRun CRs. If you use this option, you must refer to pipelines by using the pipelineRef: specification and refer to tasks by using the taskRef: specification. With this update, some metrics for Prometheus monitoring of OpenShift Pipelines were renamed to ensure compliance with the Prometheus naming convention. Gauge and Counter metric names no longer end with count . 1.3.1.2. Operator With this update, several tasks are added to the openshift-pipelines namespace in the resolverTasks add-on. You can incorporate these tasks in your pipelines using the cluster resolver. Most of these tasks were previously available as cluster tasks ( ClusterTask resources). 
You can access the following tasks by using the cluster resolver: buildah git-cli git-clone kn kn-apply maven openshift-client s2i-dotnet s2i-go s2i-java s2i-nodejs s2i-perl s2i-php s2i-python s2i-ruby skopeo-copy tkn With this update, you can set the pruner.startingDeadlineSeconds spec in the TektonConfig CR. If the pruner job that removes old resources associated with pipeline runs and task runs is not started at the scheduled time for any reason, this setting configures the maximum time, in seconds, in which the job can still be started. If the job is not started within the specified time, OpenShift Pipelines considers this job failed and starts the pruner at the scheduled time. With this update, you can use the targetNamespaceMetadata spec in the TektonConfig CR to set labels and annotations for the openshift-pipelines namespace in which the Operator installs OpenShift Pipelines. With this update, error messages for the OpenShift Pipelines Operator include additional context information such as namespace. 1.3.1.3. Triggers With this update, you can use the TriggerTemplate CR to specify templates for any types of resources. When the trigger is invoked, OpenShift Pipelines creates the resources that you define in the TriggerTemplate CR for the trigger. In the following example, a ConfigMap resource is created when the trigger is invoked: Example TriggerTemplate CR apiVersion: triggers.tekton.dev/v1beta1 kind: TriggerTemplate metadata: name: create-configmap-template spec: params: - name: action resourcetemplates: - apiVersion: v1 kind: ConfigMap metadata: generateName: sample- data: field: "Action is : USD(tt.params.action)" With this update, you can define the ServiceType in an EventListener CR as NodePort and define the port number for the event listener, as shown in the following example: Example EventListener CR defining a port number apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: simple-eventlistener spec: serviceAccountName: simple-tekton-robot triggers: - name: simple-trigger bindings: - ref: simple-binding template: ref: simple-template resources: kubernetesResource: serviceType: NodePort servicePort: 38080 With this update, if you use a serviceType value of LoadBalancer in an EventListener CR, you can optionally specify a load balancer class in the serviceLoadBalancerClass field. If your cluster provides multiple load balancer controllers, you can use the load balancer class to select one of these controllers. For more information about setting a load balancer class, see the Kubernetes documentation . Example specifying a LoadBalancerClass setting apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: listener-loadbalancerclass spec: serviceAccountName: tekton-triggers-example-sa triggers: - name: example-trig bindings: - ref: pipeline-binding - ref: message-binding template: ref: pipeline-template resources: kubernetesResource: serviceType: LoadBalancer serviceLoadBalancerClass: private 1.3.1.4. Manual approval With this update, OpenShift Pipelines includes the new Manual Approval Gate functionality. Manual Approval Gate is a custom resource definition (CRD) controller. You can use this controller to add manual approval points in the pipeline so that the pipeline stops at that point and waits for a manual approval before continuing execution. To use this feature, specify an ApprovalTask in the pipeline, in a similar way to specifying a Task . 
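For illustration only, the following pipeline fragment sketches how an approval point might be declared between a build task and a deploy task. The apiVersion group openshift-pipelines.org/v1alpha1, the task names, and the parameter values are assumptions based on the upstream manual-approval-gate project and the parameters described below; confirm them against the Manual Approval Gate documentation for your release.
Example pipeline with a manual approval point (sketch)
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-approve-deploy
spec:
  tasks:
    - name: build
      taskRef:
        name: build-task            # placeholder task
    - name: wait-for-approval
      runAfter:
        - build
      taskRef:
        apiVersion: openshift-pipelines.org/v1alpha1   # assumption, verify for your release
        kind: ApprovalTask
      params:
        - name: approvers
          value:
            - user1                 # placeholder users
            - user2
        - name: numberOfApprovalsRequired
          value: "1"
        - name: description
          value: "Approve to continue the deployment"
    - name: deploy
      runAfter:
        - wait-for-approval
      taskRef:
        name: deploy-task           # placeholder task
When a pipeline run reaches wait-for-approval, it pauses until the required approvals arrive, as described next.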
Users can provide the approval by using the web console or by using the opc command line utility. The Manual Approval Gate controller includes the following features: You must set the following parameters in the ApprovalTask specification: approvers : The users who can approve or reject the approvalTask to unblock the pipeline numberOfApprovalsRequired : The number of approvals required to unblock the pipeline description : (Optional) Description of the approvalTask that OpenShift Pipelines displays to the users The manual approval gate supports approval from multiple users: The approval requires the configured minimum number of approvals from the configured users. Until this number is reached, the approval task does not finish and its approvalState value remains pending . If any one approver rejects the approval, the ApprovalTask controller changes the approvalState of the task to rejected and the pipeline run fails. If one user approves the task but the configured number of approvals is still not reached, the same user can change to rejecting the task and the pipeline run fails. Users can provide approval using the opc approvaltask CLI and the OpenShift web console. Approval in the OpenShift web console requires installation of the OpenShift Pipelines web console plugin. This plugin requires OpenShift Container Platform version 4.15 or later. Users can add messages while approving or rejecting the approvalTask . You can add a timeout setting to the approvalTask specification. If the required number of approvals is not provided during this time period, the pipeline run fails. Important The manual approval gate is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.3.1.5. CLI With this update, the tkn command line utility supports the -E or --exit-with-pipelinerun-error option for the pipeline showlog command. With this option, the command line utility returns an error code of 0 if the pipeline run completed successfully, 1 if the pipeline run ended with an error, and 2 if the status of the pipeline run is unknown. With this update, the tkn command line utility supports the --label option for the bundle push command. With this option, you can provide the value of a label in the <label-name>=<value> format; the utility adds the label to the OCI image that it creates. You can use this option several times to provide several labels for the same image. 1.3.1.6. Pipelines as Code With this update, when using Pipelines as Code, you can set a pipelinesascode.tekton.dev/on-comment annotation on a pipeline run to start the pipeline run when a developer adds a matching comment to a pull request. This setting is supported only for pull requests and only for GitHub and GitLab repository providers. Important Matching a comment event to a pipeline run is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
With this update, when using Pipelines as Code, you can enter the /test <pipeline_run_name> comment on a pull request to start any Pipelines as Code pipeline run on the repository, whether or not it was triggered by an event for this pipeline run. This feature is a Technology Preview feature only.
With this update, when providing a /test or /retest command for Pipelines as Code in a Git request comment, you can now set any standard or custom parameters for the pipeline run. For example, one comment command can run the pipelinerun1 pipeline run on the main branch instead of the pull request branch, and another can run the checker pipeline run on a backport (cherry-pick) of the pull request to the backport-branch branch.
With this update, when using Pipelines as Code, you can create a global Repository CR with the name pipelines-as-code in the namespace in which OpenShift Pipelines is installed, normally openshift-pipelines . In this CR, you can set the configuration options that apply to all Repository CRs. You can override any of these default options by setting different values in the Repository CR for a particular repository.
Important The global Repository CR is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
With this update, Pipelines as Code processes both the OWNERS file and the OWNERS_ALIASES file when determining which users can trigger pipeline runs. However, if the OWNERS file includes a filters section, Pipelines as Code matches approvers and reviewers only against the .* filter.
With this update, when Pipelines as Code generates a random secret name for storing the GitHub temporary token, it uses two additional random characters. This change decreases the probability of a collision in secret names.
With this update, when a pipeline run defined by using Pipelines as Code causes a YAML validation error, OpenShift Pipelines reports the error and the pipeline run name in the event log of the user namespace where the pipeline run executes, as well as in the OpenShift Pipelines controller log. The error report is also displayed in the Git repository provider, for example, in the GitHub CheckRun user interface. With this change, a user who does not have access to the controller namespace can access the error messages.
1.3.1.7. Tekton Results
Tekton Results uses an UpdateLog operation to store logging information in the database. With this update, you can use the TektonResult CR to specify a timeout value for this operation. If the operation does not complete within the specified time period, Tekton Results ends the operation.
Example specification
apiVersion: operator.tekton.dev/v1
kind: TektonResult
metadata:
  name: result
spec:
  options:
    deployments:
      tekton-results-watcher:
        spec:
          template:
            spec:
              containers:
                - name: watcher
                  args:
                    - "--updateLogTimeout=60s"
With this update, when configuring Tekton Results, you can optionally specify the following database configuration settings in the options.configMaps.tekton-results-api-config.data.config section of the TektonResult CR:
DB_MAX_IDLE_CONNECTIONS : The maximum number of idle connections to the database server that can remain open
DB_MAX_OPEN_CONNECTIONS : The maximum total number of connections to the database server that can remain open
GRPC_WORKER_POOL : The size of the GRPC worker pool
K8S_QPS : The Kubernetes client QPS setting
K8S_BURST : The Kubernetes client burst QPS setting
If you want to use these settings, when configuring Tekton Results you must also use alternate specs for several other configuration parameters, as listed in the following table. Both the regular and the alternate parameter specs are in the TektonResult CR.
Table 1.2. Alternate configuration parameters for Tekton Results (regular parameter spec: alternate parameter spec)
logs_api: options.configMaps.tekton-results-api-config.data.config.LOGS_API
log_level: options.configMaps.tekton-results-api-config.data.config.LOG_LEVEL
db_port: options.configMaps.tekton-results-api-config.data.config.DB_PORT
db_host: options.configMaps.tekton-results-api-config.data.config.DB_HOST
logs_path: options.configMaps.tekton-results-api-config.data.config.LOGS_PATH
logs_type: options.configMaps.tekton-results-api-config.data.config.LOGS_TYPE
logs_buffer_size: options.configMaps.tekton-results-api-config.data.config.LOGS_BUFFER_SIZE
auth_disable: options.configMaps.tekton-results-api-config.data.config.AUTH_DISABLE
db_enable_auto_migration: options.configMaps.tekton-results-api-config.data.config.DB_ENABLE_AUTO_MIGRATION
server_port: options.configMaps.tekton-results-api-config.data.config.SERVER_PORT
prometheus_port: options.configMaps.tekton-results-api-config.data.config.PROMETHEUS_PORT
gcs_bucket_name: options.configMaps.tekton-results-api-config.data.config.GCS_BUCKET_NAME
For the configuration parameters not listed in this table, use the regular specs as described in the documentation.
Important Use the alternate parameter specs only if you need to use the additional settings in the options.configMaps.tekton-results-api-config.data.config section of the TektonResult CR.
With this update, you can use the Tekton Results API to retrieve the Go profiling data for Tekton Results.
Before this update, Tekton Results checked the user authentication when displaying every fragment of log data. With this update, Tekton Results checks the user authentication only once per log data request. This change improves the response time for the Tekton Results log API, which is used for displaying logs using the command line utility.
1.3.2. Breaking changes
With this update, the OpenShift Pipelines console plugin, which is required for viewing pipeline and task execution statistics in the web console and for using the manual approval gate, requires OpenShift Container Platform version 4.15 or a later version.
Before this update, Pipelines as Code set the git-provider , sender , and branch labels in a pipeline run. With this update, Pipelines as Code no longer sets these labels. Instead, it sets the pipelinesascode.tekton.dev/git-provider , pipelinesascode.tekton.dev/sender , and pipelinesascode.tekton.dev/branch annotations, as shown in the sketch after this item.
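The annotation keys in the following fragment come directly from the breaking change described above; the values are placeholders only, shown to illustrate what a pipeline run created by Pipelines as Code now carries instead of the old labels.
Example pipeline run metadata with the new annotations (sketch, values are placeholders)
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: example-run
  annotations:
    pipelinesascode.tekton.dev/git-provider: "github"    # placeholder value
    pipelinesascode.tekton.dev/sender: "example-user"    # placeholder value
    pipelinesascode.tekton.dev/branch: "main"             # placeholder value
Any tooling that previously filtered pipeline runs by the old git-provider, sender, or branch labels needs to read these annotations instead.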
With this update, you can no longer use the jaeger exporter for OpenTelemetry tracing. You can use the otlptraceexporter for tracing.
1.3.3. Known issues
The new skopeo-copy task, which is available from the openshift-pipelines namespace by using the cluster resolver, does not work when the VERBOSE parameter is set to false , which is the default setting. As a workaround, when you use this task, set the VERBOSE parameter to true . The issue does not apply to the skopeo-copy ClusterTask .
The new skopeo-copy task, which is available from the openshift-pipelines namespace by using the cluster resolver, fails when you use it to push or pull an image to or from an OpenShift Container Platform internal image repository, such as image-registry.openshift-image-registry.svc:5000 . As a workaround, set the DEST_TLS_VERIFY or SRC_TLS_VERIFY parameter to false . Alternatively, use an external image repository that has a valid SSL certificate. The issue does not apply to the skopeo-copy ClusterTask .
The new s2i-* tasks, which are available from the openshift-pipelines namespace by using the cluster resolver, fail if you clone a Git repository to a subdirectory of the source workspace and then set the CONTEXT parameter of the task. As a workaround, when you use these tasks, do not set the CONTEXT parameter. The issue does not apply to the s2i-* ClusterTasks .
The new git-clone task, which is available from the openshift-pipelines namespace by using the cluster resolver, does not set the COMMIT result value. The issue does not apply to the git-clone ClusterTask .
The jib-maven ClusterTask does not work if you are using OpenShift Container Platform version 4.16.
When using Pipelines as Code, if you set the concurrency_limit spec in the global Repository CR named pipelines-as-code in the openshift-pipelines namespace, which provides default settings for all Repository CRs, the Pipelines as Code watcher crashes. As a workaround, do not set this spec in this CR. Instead, set the concurrency_limit spec in the other Repository CRs that you create.
When using Pipelines as Code, if you set the settings.pipelinerun_provenance spec in the global Repository CR named pipelines-as-code in the openshift-pipelines namespace, which provides default settings for all Repository CRs, the Pipelines as Code controller crashes. As a workaround, do not set this spec in this CR. Instead, set the settings.pipelinerun_provenance spec in the other Repository CRs that you create.
1.3.4. Fixed issues
Before this update, many info messages about ClusterTask resources being repeatedly reconciled were present in the OpenShift Pipelines Operator log. With this update, the excessive reconciliation no longer happens and the excessive messages do not appear. If the reconciliation messages still appear, remove the earlier version of the ClusterTask installerset resource. However, if you remove the installerset resource, you cannot reference ClusterTasks with this specified version in your pipelines. Enter the following command to list installerset resources:
USD oc get tektoninstallersets
The names for versioned ClusterTask installerset resources are addon-versioned-clustertasks-<version>-<unique_id> , for example, addon-versioned-clustertasks-1.12-fblb8 .
Enter the following command to remove an installerset resource: USD oc delete tektoninstallerset <installerset_name> Before this update, if a task run or pipeline run referenced a service account and this service account referenced a secret that did not exist, the task run or pipeline run failed. With this update, the task run or pipeline run logs a warning and continues. Before this update, when you referenced a StepAction CR inside a step of a task, OpenShift Pipelines passed all parameters of the step to the StepAction CR. With this update, OpenShift Pipelines passes only the parameters defined in the StepAction CR to the step action. Before this update, if you defined a parameter of a task within a pipeline twice, OpenShift Pipelines logged the wrong path to the definition in the error message. With this update, the error message contains the correct path. Before this update, if you specified a task under the finally: clause of a pipeline, used an expression in the when: clause of this task, and referenced the status of another task in this expression (for example, 'USD(tasks.a-task.status)' == 'Succeeded' ), this expression was not evaluated correctly. With this update, the expression is evaluated correctly. Before this update, if you specified a negative number of retries when specifying a task run, OpenShift Pipelines did not detect the error. With this update, OpenShift Pipelines detects and reports this error. Before this update, when you use a pipelineRef: section inside a task of a pipeline to reference another pipeline or when you use a pipelineSpec: section inside a task of a pipeline to specify another pipeline, the OpenShift Pipelines controller could crash. With this update, the crash does not happen and the correct error message is logged. Use of pipelineRef: and pipelineSpec: sections inside a pipeline is not supported. Before this update, when you configured a task to use a workspace using the workspace.<workspace_name>.volume keyword and then the task failed and was retried, creation of the pod for the task failed. With this update, the pod is created successfully. Before this update, OpenShift Pipelines sometimes modified recorded annotations on a completed pipeline run or task run after its completion. For example, the pipeline.tekton.dev/release annotation records the version information of the pipeline, and if the pipeline version was updated after the execution of the pipeline run, this annotation could be changed to reflect the new version instead of the version that was run. With this update, the annotations reflect the status of the pipeline run when it was completed and OpenShift Pipelines does not modify the annotations later. Before this update, if a YAML manifest that a pipeline run uses (for example, the manifest of a task or pipeline) had syntax errors, the logged error message was unspecific or no error message was logged. With this update, the logged error message includes the syntax errors. Before this update, when you used the buildah cluster task with a secret with the .dockerconfigjson file provided using a workspace, the task failed during the cp command because the /root/.docker directory did not exist. With this update, the task completes successfully. Before this update, if a pipeline run timed out and a TaskRun or CustomRun resource that this pipeline run included was deleted, the pipeline run execution was blocked and never completed. With this update, the execution correctly ends, logging a canceled state. 
Before this update, when using a resolver to incorporate a task from a remote source, the resolver automatically added the kind value of Task to the resulting specification. With this update, the resolver does not add a kind value to the specification. Before this update, when you set configuration options using an options: section in the TektonConfig CR, these options were sometimes not applied correctly. With this update, the options are applied correctly. Before this update, if you set the enable-api-fields field and certain other fields in the TektonConfig CR, the settings were lost after any update of OpenShift Pipelines. With this update, the settings are preserved during updates. Before this update, if you configured the horizontal pod autoscaler (HPA) using the options section in the TektonConfig CR, any existing HPA was updated correctly but a new HPA was not created when required. With this update, HPA configuration using the options section works correctly. Before this update, you could erroneously change the targetNamespace field in the TektonConfig CR, creating an unsupported configuration. With this update, you can no longer change this field. Changing the target namespace name from openshift-pipelines is not supported. Before this update, if the pipelines-scc-rolebinding rolebinding was missing or deleted in any namespace, the OpenShift Pipelines Operator controller failed to create default resources in new namespaces correctly. With this update, the controller functions correctly. Before this update, when you specified a namespaceSelector value when defining a triggerGroup in an EventListener CR, the event listener was unable to access triggers in the specified namespace if it was not the same as the namespace of the event listener. With this update, the event listener can access triggers in the specified namespace. Before this update, when a request was sent to an EventListener route URL with a Content-Type header, this header was not passed to the interceptor. With this update, the header is passed to the interceptor. With this update, several potential causes for Tekton Results becoming unresponsive, crashing, or consuming a large amount of memory were removed. Before this update, in the Pipeline details page of the web console, if a when expression using CEL was configured for a task, information was not displayed correctly. With this update, the information is displayed correctly. Before this update, in the Pipeline details page of the web console, the menu was not visible when you enabled dark mode in the web console. With this update, the menu is visible. Before this update, in the Pipelines page of the web console, information about running statistics of pipelines did not include the information saved in Tekton Results. With this update, the page includes all available running statistics information for every pipeline. Before this update, when you viewed a list of tasks for a namespace in the web console, a task from another namespace was sometimes displayed in the list. With this update, the web console correctly lists tasks for each namespace. Before this update, when you viewed the list of task runs in the web console, the status for each task run was not displayed. With this update, the list of task runs in the web console includes the status for each task run. Before this update, if you disabled cluster tasks in your OpenShift Pipelines deployment, the Pipeline Builder in the web console did not work. 
With this update, if you disable cluster tasks, the Pipeline Builder in the web console works correctly. Before this update, the OpenShift Pipelines console plugin pod did not move to the node specified using the nodeSelector , tolerations , and priorityClassName settings. With this update, the OpenShift Pipelines plugin pod moves to the correct node. Before this update, the Pipelines as Code watcher sometimes crashed when processing a pipeline run for which a concurrency limit was not set. With this update, these pipeline runs are processed correctly. Before this update, in Pipelines as Code, a concurrency limit setting of 0 was not interpreted as disabling the concurrency limit. With this update, a concurrency limit setting of 0 disables the concurrency limit. Before this update, when you defined annotations and labels for a task in Pipelines as Code, the annotations and labels were not set on the pod that is running the task. With this update, Pipelines as Code correctly sets the configured annotations and labels on the pod that is running the task. Before this update, Pipelines as Code sometimes caused a load on the Kubernetes service by re-reading an internal configuration ConfigMap resource frequently. With this update, Pipelines as Code no longer causes this load, because it reloads the ConfigMap resource only after the ConfigMap resource is modified. Before this update, when using Pipelines as Code, when you deleted a comment on a pull request such as /test or /retest , Pipelines as Code executed the command in the comment again. With this update, Pipelines as Code executes a command only when you add the comment. Before this update, when using Pipelines as Code, if some pipeline runs for a pull request failed and then re-ran successfully after a /test or /retest command without pushing a new commit, the user interface of the Git provider, such as GitHub, displayed the failure result along with the new result. With this update, the up-to-date status is displayed. Before this update, when you used the tkn pr logs -f command to view the logs for a running pipeline, the command line utility stopped responding, even if the pipeline run completed successfully. With this update, the tkn pr logs -f command properly displays the log information and exits. 1.3.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.15.1 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.15.1 is available on OpenShift Container Platform 4.14 and later versions. 1.3.5.1. New features Before this update, in the TektonConfig CR, the chain.artifacts.pipelinerun.enable-deep-inspection spec supported only the bool value type. With this update, the chain.artifacts.pipelinerun.enable-deep-inspection spec supports both the bool and string value types; if using the string value type, the valid values for this spec are "true" and "false" . 1.3.5.2. Fixed issues Before this update, when you used the git-clone task, which is available from the openshift-pipelines namespace, this task did not return the COMMIT result. With this update, the task returns the correct value in the COMMIT result. Before this update, when you used a resolver to include a StepAction resource in a pipeline or task, the pipeline or task failed and an extra params passed by Step to StepAction error message was logged. With this update, the pipeline or task completes correctly. 
Before this update, when you enabled the OpenShift Pipelines plugin, viewed the details page for a pipeline in the web console, and selected Edit Pipeline from the menu, the console displayed the YAML specification of the pipeline. With this update, the console displays the Pipeline Builder page. Before this update, in OpenShift Pipelines version 1.15.0, when you added a comment on a pull request, Pipelines as Code set an event type depending on the comment content, for example, retest-comment or on-comment . With this update, the event type after a pull request comment is always pull_request , similar to OpenShift Pipelines version 1.14 and earlier. 1.3.5.3. Breaking changes Before this update, when using Pipelines as Code, the pipeline run executed correctly if you specified the podTemplate parameters for a pipeline run using one of the following wrong ways for the API version: For the v1beta1 API, in the taskRunTemplate.podTemplate spec For the v1 API, in the podTemplate spec With this update, when a pipeline run includes either of the incorrect specifications, the podTemplate parameters are disregarded. To avoid this problem, define the pod template correctly for the API version that you are using, as in one of the following examples: Example of specifying a pod template in the v1 API apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pr-v1 spec: pipelineSpec: tasks: - name: noop-task taskSpec: steps: - name: noop-task image: registry.access.redhat.com/ubi9/ubi-micro script: | exit 0 taskRunTemplate: podTemplate: securityContext: runAsNonRoot: true runAsUser: 1001 Example of specifying a pod template in the v1beta1 API apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: pr-v1beta1 spec: pipelineSpec: tasks: - name: noop-task taskSpec: steps: - name: noop-task image: registry.access.redhat.com/ubi9/ubi-micro script: | exit 0 podTemplate: securityContext: runAsNonRoot: true runAsUser: 1001 1.3.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.15.2 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.15.2 is available on OpenShift Container Platform 4.14 and later versions. 1.3.6.1. Fixed issues Before this update, when you passed a parameter value to a pipeline or task and the value included more than one variable with both full and short reference formats, for example, USD(tasks.task-name.results.variable1) + USD(variable2) , OpenShift Pipelines did not interpret the value correctly. The pipeline run or task run could stop execution and the Pipelines controller could crash. With this update, OpenShift Pipelines interprets the value correctly and the pipeline run or task run completes. Before this update, in the web console, sorting of pipeline runs and task runs by creation time did not work. Sorting of pipelines by last run time also did not work. With this update, the sorting works correctly. Before this update, in the web console, if you enabled the OpenShift Pipelines console plugin, when starting a pipeline you could not select the storage class for the volume claim template, because the StorageClass list was not present in the VolumeClaimTemplate options. With this update, you can select the storage class for the volume claim template. Before this update, if you used Pipelines as Code, the list of pipeline runs did not display correctly in the Repository details page of the web console. With this update, the list of pipeline runs displays correctly. 1.4. 
Release notes for Red Hat OpenShift Pipelines General Availability 1.14 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.14 is available on OpenShift Container Platform 4.12 and later versions. 1.4.1. New features In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.14: 1.4.1.1. Pipelines With this update, you can use a parameter of a task or pipeline or a result of a task to specify the name of a resource to bind to a workspace, for example, name: USD(params.name)-configmap . With this update, OpenShift Pipelines supports using your existing entitlements for Red Hat Enterprise Linux in build processes within your pipelines. The built-in buildah cluster task can now use these entitlements. With this update, if a pipeline run or task run uses the pipeline service account, you can use CSI volume types in the pipeline or task. With this update, you can use a StepAction custom resource (CR) to define a reusable scripted action that you can invoke from any number of tasks. To use this feature, you must set the pipeline.options.configMaps.feature-flags.data.enable-step-actions spec in the TektonConfig CR to true . With this update, object parameters and array results are enabled by default. You do not need to set any flags to use them. With this update, you can use the HTTP resolver to fetch a pipeline or task from an HTTP URL, as shown in the following examples: Example usage for a task apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: remote-task-reference spec: taskRef: resolver: http params: - name: url value: https://raw.githubusercontent.com/tektoncd-catalog/git-clone/main/task/git-clone/git-clone.yaml Example usage for a pipeline apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: http-demo spec: pipelineRef: resolver: http params: - name: url value: https://raw.githubusercontent.com/tektoncd/catalog/main/pipeline/build-push-gke-deploy/0.1/build-push-gke-deploy.yaml With this update, you can use an enum declaration to limit the values that you can supply for a parameter of a pipeline or task, as shown in the following example. To use this feature, you must set the pipeline.options.configMaps.feature-flags.data.enable-param-enum spec in the TektonConfig CR to true . Example usage apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: pipeline-param-enum spec: params: - name: message enum: ["v1", "v2"] default: "v1" # ... With this update, when using the Git resolver with the authenticated source control management (SCM) API, you can override the default token, SCM type, and server URL that you configured. See the following example: Example usage apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: git-api-demo-tr spec: taskRef: resolver: git params: - name: org value: tektoncd - name: repo value: catalog - name: revision value: main - name: pathInRepo value: task/git-clone/0.6/git-clone.yaml # create the my-secret-token secret in the namespace where the # pipelinerun is created. The secret must contain a GitHub personal access # token in the token key of the secret. - name: token value: my-secret-token - name: tokenKey value: token - name: scmType value: github - name: serverURL value: https://ghe.mycompany.com With this update, you can define default resource requirements for containers and init-containers in pods that OpenShift Pipelines creates when executing task runs. 
Use the pipeline.options.configMaps.config-defaults.default-container-resource-requirements spec in the TektonConfig CR to set these requirements. You can set the default values for all containers and also for particular containers by name or by prefix, such as sidecar-* . 1.4.1.2. Operator With this update, OpenShift Pipelines supports horizontal pod autoscaling for the Operator proxy webhook. If the pod that runs the Operator proxy webhook reaches 85% CPU utilization, the autoscaler creates another replica of the pod. If you want to use more than one replica for the Operator proxy webhook at startup, you must configure this number in the options.horizontalPodAutoscalers spec of the TektonConfig CR. With this update, the internal leader election for several components of OpenShift Pipelines was improved. The Operator controller, Operator webhook, proxy webhook, Pipelines as Code watcher, Pipelines as Code webhook, and the Tekton Chains controller now use separate leader election ConfigMaps. The leader election affects which replica of a component processes a request. Before this update, when you scaled up the number of replicas of the OpenShift Pipelines controller, manual intervention was required to enable the use of the new replicas; namely, you needed to delete leases in the leader election. With this update, when you scale up the number of replicas of the OpenShift Pipelines controller, the leader election includes the new replicas automatically, so the new replicas can process information. With this update, you can optionally set the following flags in the spec.pipeline spec of the TektonConfig CR: coschedule enable-cel-in-whenexpression enable-param-enum enable-step-actions enforce-nonfalsifiability keep-pod-on-cancel max-result-size metrics.count.enable-reason results-from set-security-context default-resolver-type 1.4.1.3. Triggers With this update, when specifying CEL expressions for the Triggers interceptor, you can use the first and last functions to access values in a JSON array. With this update, when specifying CEL expressions for the Triggers interceptor, you can use the translate function that facilitates the utilization of regular expressions to replace characters with specified strings, as in the following example: Sample use of the translate function Sample input string Sample result string 1.4.1.4. Web console With this update, you can enable the web console plugin for OpenShift Pipelines. If you enable the plugin, you can view pipeline and task execution statistics in the Pipelines overview page and in the page of a pipeline. You must install Tekton Results to view this information. Note To use the web console plugin for OpenShift Pipelines, you must use, at a minimum, the following OpenShift Container Platform releases: For OpenShift Container Platform version 4.12: 4.12.51 For OpenShift Container Platform version 4.13: 4.13.34 For OpenShift Container Platform version 4.14: 4.14.13 For OpenShift Container Platform version 4.15: any release With this update, if you are using OpenShift Container Platform 4.15 and you enabled the console plugin, you can view archive information about past pipeline runs and task runs. Tekton Results provides this information. With this update, the PipelineRun details page, accessible from both the Developer or Administrator perspective of the web console, introduces a Vulnerabilities row. This new row offers a visual representation of identified vulnerabilities, categorized by severity (critical, high, medium, and low). 
To enable this feature, update your tasks and associated pipelines to the specified format. Additionally, once enabled, you can also access the information about identified vulnerabilities through the Vulnerabilities column in the pipeline run list view page. With this update, the PipelineRun details page, accessible from both the Developer or Administrator perspective of the web console, provides an option to download or view Software Bill of Materials (SBOMs) for enhanced transparency and control. To enable this feature, update your tasks and associated pipelines to the specified format. 1.4.1.5. CLI With this update, the tkn version command displays the version of the Tekton Hub component if this component is installed. With this update, you can use the tkn customrun list command to list custom runs. With this update, when using the tkn task start command, you can specify a URL for an OCI image in the -i or --image argument. The command pulls the image and runs the specified task from this image. With this update, the opc version command displays the version of the Tekton Results CLI component, which is a part of the opc utility. 1.4.1.6. Pipelines as Code With this update, when using Pipelines as Code, you can specify the pipelinesascode.tekton.dev/pipeline annotation on a pipeline run to fetch the pipeline from a Tekton Hub instance. The value of this annotation must refer to a single pipeline on Tekton Hub. With this update, you can deploy an additional Pipelines as Code controller with different configuration settings and different secrets. You can use multiple Pipelines as Code controllers to interact with multiple GitHub instances. With this update, Pipelines as Code includes metrics publication for the GitLab and BitBucket providers. You can access the metrics using the /metrics path on the Pipelines as Code controller and watcher service, port 9090. With this update, when specifying the conditions for executing a pipeline run using a CEL expression with pipelinesascode.tekton.dev/on-cel-expression , you can check for existence of files in the Git repository: files.all.exists(x, x.matches('<path_or_regular_expression>')) for all files files.added.exists(x, x.matches('<path_or_regular_expression>')) for files that were added since the last run of this pipeline files.modified.exists(x, x.matches('<path_or_regular_expression>')) for files that were modified since the last run of this pipeline files.deleted.exists(x, x.matches('<path_or_regular_expression>')) for files that were deleted since the last run of this pipeline files.renamed.exists(x, x.matches('<path_or_regular_expression>')) for files that were renamed since the last run of this pipeline; this expression checks the new names of the renamed files 1.4.1.7. Tekton Chains With this update, Tekton Chains supports the v1 value of the API version. With this update, you can set the artifacts.pipelinerun.enable-deep-inspection parameter in the TektonConfig CR. When this parameter is true , Tekton Chains records the results of the child task runs of a pipeline run. When this parameter is false , Tekton Chains records the results of the pipeline run but not of its child task runs. With this update, you can set the builddefinition.buildtype parameter in the TektonConfig CR to set the build type for in-toto attestation. When this parameter is https://tekton.dev/chains/v2/slsa , Tekton Chains records in-toto attestations in strict conformance with the SLSA v1.0 specification. 
When this parameter is https://tekton.dev/chains/v2/slsa-tekton , Tekton Chains records in-toto attestations with additional information such as the labels and annotations in each task run and pipeline run, and also adds each task in a pipeline run under resolvedDependencies . Before this update, when Tekton Chains was configured to use gcs storage, Tekton Chains did not record pipeline run information. With this update, Tekton Chains records pipeline run information with this storage. With this update, performance metrics are available for Tekton Chains. To access the metrics, expose the tekton-chains-metrics service and then use the /metrics path on this service, port 9090. These metrics are also available in the OpenShift Container Platform Monitoring stack. With this update, Tekton Chains uses the new v2alpha3 record format version when recording pipeline runs and task runs that use the v1 version value. With this update, Tekton Chains uses the v1 version of pipeline run and task run formats internally. 1.4.1.8. Tekton Results With this update, if Tekton Results is installed, Tekton Results records the summary and record data for pipeline runs started using Pipelines as Code. With this update, Tekton Results provides up to 100 megabytes of logging information for a pipeline or task. With this update, any authenticated user can view the tekton-results-api-service route in the openshift-pipelines namespace to interact with Tekton Results using a REST API. With this update, the Tekton Results API includes a new endpoint for fetching summary and aggregation for a list of records. With this update, the GetLog endpoint of the Tekton Results API returns raw bytes with the text/plain content type. With this update, you can optionally specify a custom CA certificate in the options.configMaps.tekton-results-api-config.data.config.DB_SSLROOTCERT spec in the TektonResult CR. In this case, Tekton Results requires an SSL connection to the database server and uses this certificate for the connection. If you want to use this setting, when configuring Tekton Results you must also use alternate specs for several other configuration parameters, as listed in the following table. Both the regular and the alternate parameter specs are in the TektonResult CR. Table 1.3. Alternate configuration parameters for Tekton Results Regular parameter spec Alternate parameter spec logs_api options.configMaps.tekton-results-api-config.data.config.LOGS_API log_level options.configMaps.tekton-results-api-config.data.config.LOG_LEVEL db_port options.configMaps.tekton-results-api-config.data.config.DB_PORT db_host options.configMaps.tekton-results-api-config.data.config.DB_HOST logs_path options.configMaps.tekton-results-api-config.data.config.LOGS_PATH logs_type options.configMaps.tekton-results-api-config.data.config.LOGS_TYPE logs_buffer_size options.configMaps.tekton-results-api-config.data.config.LOGS_BUFFER_SIZE auth_disable options.configMaps.tekton-results-api-config.data.config.AUTH_DISABLE db_enable_auto_migration options.configMaps.tekton-results-api-config.data.config.DB_ENABLE_AUTO_MIGRATION server_port options.configMaps.tekton-results-api-config.data.config.SERVER_PORT prometheus_port options.configMaps.tekton-results-api-config.data.config.PROMETHEUS_PORT gcs_bucket_name options.configMaps.tekton-results-api-config.data.config.GCS_BUCKET_NAME For the configuration parameters not listed in this table, use the regular specs as described in the documentation. 
Important Use the alternate parameter specs only if you need to use the DB_SSLROOTCERT setting.
1.4.2. Breaking changes
With this update, when using the Bundles resolver, you can no longer specify the serviceAccount parameter. Instead, you can specify the secret parameter to provide the name of a secret containing authentication information for the registry. You must update any tasks or pipelines that use the serviceAccount parameter of the Bundles resolver to use the secret parameter instead. The pipeline.bundles-resolver-config.default-service-account spec in the TektonConfig CR is no longer supported.
1.4.3. Known issues
The tkn pipeline logs -f command does not display the logs of tasks that were defined in a pipeline with the retries: X parameter while this pipeline is in progress.
1.4.4. Fixed issues
Before this update, when using GitHub Enterprise, an incoming webhook did not work. With this update, you can use incoming webhooks with GitHub Enterprise.
Before this update, if a task run or pipeline run disabled timeouts, OpenShift Pipelines would run a series of rapid reconciliations on the task run or pipeline run, degrading the performance of the controller. With this update, the controller reconciles task runs and pipeline runs with disabled timeouts normally.
Before this update, if you used a custom namespace to install Tekton Hub, the installation deleted the openshift-pipelines namespace, removing the OpenShift Pipelines installation. With this update, you can use a custom namespace to install Tekton Hub and your OpenShift Pipelines installation is unaffected.
Before this update, when using Pipelines as Code with GitLab, if the user triggered a pipeline run by using a comment in a merge request such as /test , Pipelines as Code did not report the status of the pipeline run on the merge request. With this update, Pipelines as Code correctly reports the status of the pipeline run on the merge request.
Before this update, when using CEL filtering in Tekton Results with subgroups, as shown in the following example, the subgroups did not work correctly. With this update, subgroups work correctly. Example CEL filter with a subgroup
Before this update, when a pipeline run was cancelled, Tekton Results did not record the logs of this pipeline run. With this update, Tekton Results records the logs of a cancelled pipeline run.
1.4.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.14.1
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.14.1 is available on OpenShift Container Platform 4.12 and later versions.
1.4.5.1. Fixed issues
Before this update, when using multiple Pipelines as Code controllers configured with different GitHub apps, the Pipelines as Code watcher component crashed with a nil error message. With this update, Pipelines as Code functions normally with multiple controllers configured with different GitHub apps.
1.4.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.14.2
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.14.2 is available on OpenShift Container Platform 4.12 and later versions.
1.4.6.1. Fixed issues
Before this update, when you started a pipeline run using Pipelines as Code, Tekton Results did not store information about this pipeline run. Because of this issue, the web console plugin did not include the pipeline run in the execution statistics display.
With this update, Tekton Results stores information about Pipelines as Code pipeline runs and these pipeline runs are included in the execution statistics display.
Before this update, when you started many pipeline runs using Pipelines as Code at the same time and these pipeline runs included a max-keep-run annotation, the Pipelines as Code watcher component failed to process some of the pending pipeline runs and they remained in a pending state. With this update, Pipelines as Code pipeline runs are processed correctly.
1.4.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.14.3
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.14.3 is available on OpenShift Container Platform 4.12 and later versions.
1.4.7.1. Fixed issues
Before this update, when you started many pipeline runs using Pipelines as Code at the same time and these pipeline runs included a max-keep-run annotation, the Pipelines as Code watcher was unable to reconcile the pipeline runs because of a race condition between deletion of existing pipeline runs and processing new pipeline runs. Because of this issue, some pipeline runs could not be processed. With this update, the Pipelines as Code watcher processes these pipeline runs correctly.
Before this update, when you used the tkn pr logs -f command to view the logs for a running pipeline, the command line utility stopped responding, even if the pipeline run completed successfully. With this update, the tkn pr logs -f command properly displays the log information and exits.
1.4.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.14.4
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.14.4 is available on OpenShift Container Platform 4.12 and later versions.
1.4.8.1. Fixed issues
Before this update, a large number of error messages referencing tekton-pipelines-webhook.ConversionWebhook could be logged. With this update, unneeded conversion webhook configuration for the ClusterTask and StepAction Custom Resource Definitions (CRDs) was removed, and such error messages are no longer logged.
Before this update, if a user or an OpenShift Pipelines controller used the OpenShift Pipelines API to modify a pipeline run that was in the process of being started by Pipelines as Code, Pipelines as Code could stop and the log contained "panic" messages. With this update, a pipeline run that is being started by Pipelines as Code can be modified concurrently without causing a crash. Before this update, in Pipelines as Code, a concurrency limit setting of 0 was not interpreted as disabling the concurrency limit. With this update, a concurrency limit setting of 0 disables the concurrency limit. 1.4.9. Release notes for Red Hat OpenShift Pipelines General Availability 1.14.5 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.14.5 is available on OpenShift Container Platform 4.12 and later versions. 1.4.9.1. Fixed issues Before this update, when you used the web console and clicked a pipeline in the overview page, the pipeline details page did not contain information about tasks in the pipeline. With this update, when you click a pipeline in the overview page, the pipeline details page contains the required information. Before this update, when you configured Tekton Chains to disable storing OCI artifacts by setting an empty artifacts.oci.storage value in the TektonConfig CR, the configuration did not work: Tekton Chains attempted to store the artifacts and logged a failure in the chains.tekton.dev/signed annotation. With this update, when you set an empty artifacts.oci.storage value in the TektonConfig CR, Tekton Chains does not attempt to store OCI artifacts. 1.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.13 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.13 is available on OpenShift Container Platform 4.12 and later versions. 1.5.1. New features In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.13: Note Before upgrading to the Red Hat OpenShift Pipelines Operator 1.13, ensure that you have installed at least OpenShift Container Platform 4.12.19 or 4.13.1 on your cluster. 1.5.1.1. Pipelines Before this update, the Source-to-Image (S2I) cluster tasks used a base S2I container image that was in Technology Preview. With this update, the S2I cluster tasks use a base S2I container image that is released and fully supported. With this update, you can optionally enable a setting so that, when a task run is cancelled, OpenShift Pipelines stops the pod for the task run but does not delete the pod. To enable this setting, in the TektonConfig custom resource (CR), set the pipeline.options.configMaps.feature-flags.data.keep-pod-on-cancel spec to true and the pipeline.enable-api-fields spec to alpha. Before this update, you had to enable alpha features in order to set compute resource limits at the task level. With this update, you can use the computeResources spec for a TaskRun CR to set the resource limits for a task. With this update, when specifying a task and using the displayName parameter, you can include the values of parameters, results, or context variables in the display name, for example, $(params.application), $(tasks.scan.results.report), $(context.pipeline.name). With this update, when specifying a remote pipeline or task using the hub resolver, you can use inequality (version range) constraints in the version parameter, such as >=0.2.0, <1.0.0.
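To illustrate the version range support described above, the following sketch resolves the git-clone task from the hub with an inequality constraint. The task run name and the constraint string are only examples, not values taken from the release notes.

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: hub-resolver-version-range    # placeholder name
spec:
  taskRef:
    resolver: hub
    params:
      - name: kind
        value: task
      - name: name
        value: git-clone
      - name: version
        value: ">=0.2.0, <1.0.0"      # any matching version in this range can be resolved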
With this update, when specifying a task, you can use a Common Expression Language (CEL) expression in the when expression. To use this feature, you must set the pipeline.options.configMaps.feature-flags.data.enable-cel-in-whenexpression spec to true in the TektonConfig CR. With this update, when specifying a pipeline in a PipelineRun CR spec, you can reference the results produced by an inline task in a subsequent inline task. Example usage

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: uid-task
spec:
  results:
    - name: uid
  steps:
    - name: uid
      image: alpine
      command: ["/bin/sh", "-c"]
      args:
        - echo "1001" | tee $(results.uid.path)
---
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: uid-pipeline-run
spec:
  pipelineSpec:
    tasks:
      - name: add-uid
        taskRef:
          name: uid-task
      - name: show-uid
        taskSpec:
          steps:
            - name: show-uid
              image: alpine
              command: ["/bin/sh", "-c"]
              args:
                - echo $(tasks.add-uid.results.uid)

With this update, when configuring the cluster resolver, you can set the value of the blocked-namespaces parameter to *. With this setting, only the namespaces listed in the allowed-namespaces parameter are allowed and all other namespaces are blocked. 1.5.1.2. Operator With this update, the disable-affinity-assistant feature flag has been deprecated and might be removed in a future release. Instead, in the TektonConfig CR, you can set the pipeline.options.configMaps.feature-flags.data.coschedule spec to one of the following values:
workspaces: OpenShift Pipelines schedules all task runs that share the same workspace to the same node if the workspace allocates a persistent volume claim. This is the default setting.
pipelineruns: OpenShift Pipelines schedules all task runs in a pipeline run to the same node.
isolate-pipelinerun: OpenShift Pipelines schedules all task runs in a pipeline run to the same node and allows only one pipeline run to run on a node at the same time. This setting might delay pipeline runs if all nodes are used for other pipeline runs.
disabled: OpenShift Pipelines does not apply any specific policy about allocating task runs to nodes.
1.5.1.3. Triggers Before this update, the core interceptor always created TLS secrets when starting. With this update, the core interceptor creates TLS secrets only if a TLS secret is not present on the cluster or when a certificate in the existing secret has expired. 1.5.1.4. CLI With this update, when you use the tkn bundle push command, the bundle is created with the creation time set to 1970-01-01T00:00:00Z (Unix epoch time). This change ensures that bundle images created from the same source are always identical. You can use the --ctime parameter to set the creation time in the RFC3339 format. You can also use the SOURCE_DATE_EPOCH environment variable to set the creation time. 1.5.1.5. Pipelines as Code With this update, in Pipelines as Code, when using a CEL expression for advanced event matching (pipelinesascode.tekton.dev/on-cel-expression), you can use the header and body fields to access the full payload that is passed by the Git repository provider. You can use this feature to filter events by any information that the Git repository sends. Important Using the header and body of the payload in CEL expressions for event matching is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. With this update, when a Pipelines as Code pipeline run is triggered by a push event, you can use /test, /test branch:<branchname>, /retest, /retest branch:<branchname>, /cancel, and /cancel branch:<branchname> commands on the corresponding commit comment to re-run or cancel the pipeline run. With this update, when using Pipelines as Code, you can use remote tasks on remote pipelines. Therefore, you can reuse a complete remote pipeline across multiple repositories. You can override tasks from the remote pipeline by adding a task with the same name. Important Using remote tasks on remote pipelines and overriding tasks is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. With this update, when using Pipelines as Code, you can view improved information about CI pipeline runs in the Git repository provider. This information includes the namespace and the associated pipeline run. 1.5.2. Breaking changes Before this update, in Pipelines as Code, when using a policy group, users who were not a part of policy groups and were not explicitly allowed, but were allowed to run the CI (via org ownership or otherwise), could sometimes execute pipeline runs by creating events such as pull requests or by entering commands such as ok_to_test. With this update, if policy groups are configured, only users that are added to the required policy groups can execute pipeline runs, and users that are a part of the owner organization but not configured in policy groups cannot execute pipeline runs. 1.5.3. Known issues To enable keeping pods when a task run is cancelled, along with setting the pipeline.options.configMaps.feature-flags.data.keep-pod-on-cancel spec to true in the TektonConfig CR, you also need to set the pipeline.enable-api-fields spec to alpha in the TektonConfig CR. If you enable keeping pods when a task run is cancelled, OpenShift Pipelines still deletes the pod when the task run is cancelled because of a default timeout or because you set a timeout in the pipeline specification. 1.5.4. Fixed issues Before this update, a secret that a pipeline run uses for Git authentication could be deleted from the cluster during a cleanup. With this update, a secret is deleted only when all pipeline runs that use it are deleted. Before this update, in cases where multiple secrets shared the same prefix and were logged using the git interface, sometimes the concealing process started with a shorter secret, and a part of a longer secret could be displayed in the logs. With this update, when concealing secrets in logs, the process now starts from the longest secret, ensuring that no part of any secret is displayed in the logs. Before this update, if you specified a results spec for a pipeline, the pipeline run could wrongly fail with a mismatched types error.
With this update, if you specify a results spec for a pipeline, the results provided by the pipeline are correctly processed. Before this update, when Tekton Chains was configured with HashiCorp Vault as the KMS provider, the pod started crashing if there was an underlying error while connecting to Vault. With this update, this issue is fixed and the error is recorded in the Tekton Chains controller log. Before this update, when using Tekton Chains, if you configured the storage.oci.repository parameter, errors were reported in the Tekton Chains controller log. With this update, the storage.oci.repository parameter is processed correctly. Before this update, when Tekton Chains was configured with the HashiCorp Vault KMS and there was an issue with the connection to Vault, the Tekton Chains controller pod could crash. With this update, the errors are processed and logged in the Tekton Chains controller log. 1.5.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.13.1 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.13.1 is available on OpenShift Container Platform 4.12 and later versions. 1.5.5.1. Fixed issues Before this update, a task run sometimes failed with a cannot stop the sidecar error message. With this update, the race condition between controllers that caused this failure is fixed. Before this update, to enable keeping pods when a task run is cancelled, along with setting the pipeline.options.configMaps.feature-flags.data.keep-pod-on-cancel spec to true in the TektonConfig CR, you also needed to set the pipeline.enable-api-fields spec to alpha in the TektonConfig CR. With this update, setting the pipeline.options.configMaps.feature-flags.data.keep-pod-on-cancel spec to true in the TektonConfig CR enables keeping pods when a task run is cancelled, and no additional setting is necessary. Before this update, if you defined a sidecar for a task, OpenShift Pipelines did not validate the container image in the definition when creating the Task and TaskRun custom resources (CRs). At run time, a sidecar with an invalid container image caused the task run to fail. With this update, OpenShift Pipelines validates the container image in the sidecar definition when creating the Task and TaskRun CRs. Before this update, the OpenShift Pipelines controller sometimes crashed when the task was evaluating parameters. With this update, the controller no longer crashes. Before this update, if the final task in a pipeline run failed or was skipped, OpenShift Pipelines sometimes reported a validation error for the pipeline run. With this update, OpenShift Pipelines reports the status of the pipeline run correctly. 1.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.12 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.12 is available on OpenShift Container Platform 4.12 and later versions. 1.6.1. New features In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.12: Note Before upgrading to the Red Hat OpenShift Pipelines Operator 1.12, ensure that you have installed at least OpenShift Container Platform 4.12.19 or 4.13.1 on your cluster. 1.6.1.1. Pipelines With this update, the web console includes a new gauge metric for pipeline runs and task runs.
This metric indicates whether the underlying pods are being throttled by OpenShift Container Platform either because of resource quota policies defined in the namespace or because of resource constraints on the underlying node. With this update, the new set-security-context feature flag is set to true by default to enable task runs and pipeline runs to run in namespaces with restricted pod security admission policies. With this update, the enable-api-fields flag is set to beta by default. You can use all features that are marked as beta in the code without further changes. With this update, the results.tekton.dev/* and chains.tekton.dev/* reserved annotations are not passed from the pipeline run to the task runs that it creates. Before this update, CSI volumes and projected volumes were not enabled by default. With this update, you can use CSI volumes and projected volumes in your pipelines without changing any configuration fields. With this update, the isolated workspaces feature is enabled by default. You can use this feature to share a workspace with specified steps and sidecars without sharing it with the entire task run. 1.6.1.2. Operator With this update, you can configure the default security context constraint (SCC) for the pods that OpenShift Pipelines creates for pipeline runs and task runs. You can set the SCC separately for different namespaces and also configure the maximum (least restrictive) SCC that can be set for any namespace. With this update, a new options: heading is available under each component in the TektonConfig spec. You can use parameters under this heading to control settings for different components. In particular, you can use parameters under the platforms.openshift.pipelinesAsCode.options.configMaps.pac-config-logging.data spec to set logging levels for components of Pipelines as Code. With this update, you can use the new spec.pipeline.performance.replicas parameter to set the number of replicas that are created for the OpenShift Pipelines controller pod. If you previously set the replica counts in your deployment manually, you must now use this setting to control the replica counts. With this update, the Operator ensures that the stored API version remains the same throughout your deployment of OpenShift Pipelines. The stored API version in OpenShift Pipelines 1.12 is v1. With this update, you can use a secret to configure S3 bucket storage to store Tekton Results logging information. When configuring S3 bucket storage, you must provide the secret with the S3 storage credentials by using the new secret_name spec in the TektonResult custom resource (CR). 1.6.1.3. Tekton Results Important Tekton Results is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. With this update, you can configure Tekton Results to store data in an external PostgreSQL server. With this update, you can use Google Cloud Storage (GCS) to store Tekton Results logging information.
You can provide the secret with the GCS storage credentials and then provide the secret name, secret key, and bucket name in properties under the TektonResult spec. You can also use Workload Identity Federation for authentication. With this update, any service account authenticated with OpenShift Pipelines can access the TektonResult CR. With this update, Tekton Results includes cluster role aggregation for service accounts with admin, edit, and view roles. Cluster role binding is no longer required for these service accounts to access results and records using the Tekton Results API. With this update, you can configure pruning for each PipelineRun or TaskRun resource by setting a prune-per-resource boolean field in the TektonConfig CR. You can also configure pruning for each PipelineRun or TaskRun resource in a namespace by adding the operator.tekton.dev/prune.prune-per-resource=true annotation to that namespace. With this update, if there are any changes in the OpenShift Container Platform cluster-wide proxy, Operator Lifecycle Manager (OLM) recreates the Red Hat OpenShift Pipelines Operator. With this update, you can disable the pruner feature by setting the value of the config.pruner.disabled field to true in the TektonConfig CR. 1.6.1.4. Triggers With this update, you can configure readiness and liveness probes on Trigger CRs. You can also set the value of the failure threshold for the probes; the default value is 3. With this update, OpenShift Pipelines triggers add Type and Subject values when creating a response to a CloudEvents request. 1.6.1.5. CLI With this update, the tkn pipeline logs command displays the logs of a pipeline or task that is referenced using a resolver. With this update, when entering the tkn bundle push command, you can use the --annotate flag to provide additional annotations. 1.6.1.6. Pipelines as Code With this update, a Pipelines as Code pipeline run can include remote tasks fetched from multiple Artifact Hub or Tekton Hub instances and from different catalogs in the same hub instance. With this update, you can use parameters under the platforms.openshift.pipelinesAsCode.options.configMaps.pac-config-logging.data spec in the TektonConfig CR to set logging levels for Pipelines as Code components. With this update, you can set policies that allow certain actions only to members of a team and reject the actions when other users request them. Currently, the pull_request and ok_to_test actions support setting such policies. With this update, you can pass arbitrary parameters in the incoming webhook as a JSON payload. OpenShift Pipelines passes these parameters to the pipeline run. To provide an additional security layer, you must explicitly define the permitted parameters in the Repository CR (see the sketch after the example at the end of this section). With this update, matching a large set of pipeline runs with a large number of remote annotations in Pipelines as Code is optimized. Pipelines as Code fetches the remote tasks only for the pipeline run that matched. With this update, you can use the source_url variable in a pipeline run template to retrieve information about the forked repository from where the event, such as a pull or push request, is triggered. Example usage

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
# ...
spec:
  params:
    - name: source_url
      value: "{{ source_url }}"
  pipelineSpec:
    params:
      - name: source_url
# ...
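The sketch referenced above shows how the permitted incoming webhook parameters might be declared in the Repository CR. This is an assumption-level illustration: the repository URL, secret name, target branch, and parameter name are all placeholders, and the exact incoming spec fields should be checked against your Pipelines as Code version.

apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: example-repository           # placeholder
  namespace: example-namespace       # placeholder
spec:
  url: "https://github.com/example-org/example-repo"   # placeholder
  incoming:
    - type: webhook-url
      secret:
        name: repo-incoming-secret   # placeholder secret that holds the incoming webhook token
      targets:
        - main
      params:
        - environment                # only parameters listed here are passed to the pipeline run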
With this update, if an authorized user provides an ok-to-test comment to trigger a pipeline run on the pull request from an unauthorized user and then the author makes further changes to the branch, Pipelines as Code triggers the pipelines. To disable triggering the pipeline until an authorized user provides a new ok-to-test comment, set the pipelinesAsCode.settings.remember-ok-to-test spec in the TektonConfig CR to false. With this update, on the GitHub status check page, the table that shows the status of all tasks includes the display name of every task. With this update, you can configure the tags push event in a pipeline run on GitLab. With this update, you can use the target_url and source_url fields in Pipelines as Code Common Expression Language (CEL) expression filtering annotations to filter the request for a specific target or source. With this update, when you configure fetching a remote GitHub URL using a token, you can include a branch name that contains a slash. You must encode the slash within the branch name as %2F to ensure proper parsing by Pipelines as Code, as in the following example URL: https://github.com/organization/repository/blob/feature%2Fmainbranch/path/file. In this example URL, the branch name is feature/mainbranch and the name of the file to fetch is /path/file. With this update, you can use the --v1beta1 flag in the tkn pac resolve command. Use this flag if the pipeline run is generated with the v1beta1 API version schema. 1.6.2. Breaking changes With this update, you cannot use the openshift-operators namespace as the target namespace for installing OpenShift Pipelines. If you used the openshift-operators namespace as the target namespace, change the target namespace before upgrading to Red Hat OpenShift Pipelines Operator version 1.12. Otherwise, after the upgrade, you will not be able to change any configuration settings in the TektonConfig CR except the targetNamespace setting. With this update, the new spec.pipeline.performance.replicas parameter controls the number of replicas that are created for every pod for a pipeline run or task run. If you previously set the replica counts in your deployment manually, after upgrading to OpenShift Pipelines version 1.12 you must use this parameter to control the replica counts. With this update, the following parameters are no longer supported in the TektonResult CR:
db_user
db_password
s3_bucket_name
s3_endpoint
s3_hostname_immutable
s3_region
s3_access_key_id
s3_secret_access_key
s3_multi_part_size
You must provide these parameters using secrets. After upgrading to OpenShift Pipelines version 1.12, you must delete and re-create the TektonResult CR to provide these parameters. With this update, the tkn pac bootstrap command supports the --github-hostname flag. The --github-api-url flag is deprecated. 1.6.3. Known issues If limit ranges are configured for a namespace, but pod ephemeral storage is not configured in the limit ranges, pods can go into an error state with the message Pod ephemeral local storage usage exceeds the total limit of containers 0. If you want to make changes to the configuration in the TektonResult CR, you must delete the existing TektonResult CR and then create a new one. If you change an existing TektonResult CR, the changes are not applied to the existing deployment of Tekton Results. For example, if you change the connection from an internal database server to an external one or vice versa, the API remains connected to the old database. 1.6.4.
Fixed issues Before this update, Pipelines as Code ran pipeline runs based only on branch base names, and could incorrectly trigger pipeline runs with the same base name but a different branch name. With this update, Pipelines as Code checks both the base name and the exact branch name of a pipeline run. Before this update, an incoming webhook event could trigger multiple pipeline runs configured for other events. With this update, an incoming webhook event triggers only a pipeline run configured for the webhook event. With this update, the pac-gitauth secrets are now explicitly deleted when cleaning up a pipeline run, in case the ownerRef on the pipeline run gets removed. Before this update, when a task in a pipeline run failed with a reason message, the entire pipeline run failed with a PipelineValidationFailed reason. With this update, the pipeline run fails with the same reason message as the task that failed. Before this update, the disable-ha flag value was not correctly passed to the Pipelines controller, and the high availability (HA) functionality was never enabled. With this update, you can enable the HA functionality by setting the value of the disable-ha flag in the TektonConfig CR to false. Before this update, the skopeo-copy cluster task would fail when attempting to copy images mentioned in config map data. With this update, the skopeo-copy cluster task completes properly. With this update, a pipeline run automatically generated by the tkn pac generate --language=java command has correct annotations and parameter names. Before this update, only a user with administrative permissions could successfully run the tkn pac create repository command. With this update, any authorized user can run the tkn pac create repository command. Before this update, if there were multiple pipeline runs in the .tekton folder with the generateName field and not the name field, the pipeline runs failed. This update fixes the issue. Before this update, in Pipelines as Code when using GitLab, a pipeline run was triggered by any event in a merge request, including adding labels and setting status. With this update, the pipeline run is triggered only when there is an open, reopen, or push event. A comment containing the status of the checks is now posted on the merge request. Before this update, while a pipeline run was waiting for approval, the status of the check could be displayed as skipped in the checks section of GitHub and Gitea pull requests. With this update, the correct pending approval status is displayed. Before this update, the bundles resolver sometimes set the type to Task when attempting to retrieve a pipeline, leading to errors in retrieval. With this update, the resolver uses the correct type to retrieve a pipeline. This update fixes an error in processing the Common Expression Language (CEL) NOT operator when querying Tekton Results. This update fixes a 404 error response that was produced in the Tekton Results API when a LIST operation for records was requested and the specified result was -. Before this update, in an EventListener object, the status.address.url field was always set to the default port. With this update, the status.address.url field is set to match the port specified in the spec.resources.kubernetesResource.servicePort parameter.
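To illustrate the servicePort behavior described in the last item, the following is a minimal, hypothetical EventListener sketch; the listener and trigger names are placeholders, and only the kubernetesResource fields relevant to the fix are shown.

apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: example-listener             # placeholder
spec:
  serviceAccountName: pipeline
  triggers:
    - triggerRef: example-trigger    # placeholder
  resources:
    kubernetesResource:
      serviceType: ClusterIP
      servicePort: 8080              # status.address.url now reflects this port instead of the default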
Before this update, if the GitHub API provided a paginated response, Pipelines as Code used only the first page of the response. With this update, all paginated responses are processed fully. Before this update, the Tekton Chains controller crashed when the host address of the KMS HashiCorp Vault was configured incorrectly or when Tekton Chains was unable to connect to the KMS HashiCorp Vault. With this update, Tekton Chains logs the connection error and does not crash. 1.6.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.12.1 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.12.1 is available on OpenShift Container Platform 4.12 and later versions. 1.6.5.1. Fixed issues Before this update, if you configured Pipelines as Code with the custom console driver to output to a custom console, the Pipelines as Code controller crashed in certain cases. After you pushed changes to a pull request, the CI status check for this pull request could remain as waiting for status to be reported and the associated pipeline run did not complete. With this update, the Pipelines as Code controller operates normally. After you push changes to a pull request, the associated pipeline run completes normally and the CI status check for the pull request is updated. Before this update, when using Pipelines as Code, if you created an access policy on the Repository custom resource (CR) that did not include a particular user and then added the user to the OWNER file in the Git repository, the user would have no rights for the Pipelines as Code CI process. For example, if the user created a pull request into the Git repository, the CI process would not run on this pull request automatically. With this update, a user who is not included in the access policy on the Repository CR but is included in the OWNER file is allowed to run the CI process for the repository. With this update, the HTTP/2.0 protocol is not supported for webhooks. All webhook calls to Red Hat OpenShift Pipelines must use the HTTP/1.1 protocol. 1.6.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.12.2 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.12.2 is available on OpenShift Container Platform 4.12 and later versions. 1.6.6.1. Fixed issues Before this update, the generated Git secret for the latest pipeline run was deleted when the max-keep-runs parameter was exceeded. With this update, the Git secret is no longer deleted on the latest pipeline run. With this update, the S2I cluster task uses a General Availability container image. 1.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.11 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.11 is available on OpenShift Container Platform 4.12 and later versions. 1.7.1. New features In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.11: Note Before upgrading to the Red Hat OpenShift Pipelines Operator 1.11, ensure that you have installed at least OpenShift Container Platform 4.12.19 or 4.13.1 on your cluster. 1.7.1.1. Pipelines With this update, you can use Red Hat OpenShift Pipelines on an OpenShift Container Platform cluster that runs on ARM hardware. Support is available for ClusterTask resources, where images are available, and for the Tekton CLI tool on ARM hardware.
This update adds support for results, object parameters, array results, and indexing into an array when you set the enable-api-fields feature flag to beta value in the TektonConfig CR. With this update, propagated parameters are now part of a stable feature. This feature enables interpolating parameters in embedded specifications to reduce verbosity in Tekton resources. With this update, propagated workspaces are now part of a stable feature. You can enable the propagated workspaces feature by setting the enable-api-fields feature flag to alpha or beta value. With this update, the TaskRun object fetches and displays the init container failure message to users when a pod fails to run. With this update, you can replace parameters, results, and the context of a pipeline task while configuring a matrix as per the following guidelines: Replace an array with an array parameter or a string with a string , array , or object parameter in the matrix.params configuration. Replace a string with a string , array , or object parameter in the matrix.include configuration. Replace the context of a pipeline task with another context in the matrix.include configuration. With this update, the TaskRun resource validation process also validates the matrix.include parameters. The validation checks whether all parameters have values and match the specified type, and object parameters have all the keys required. This update adds a new default-resolver-type field in the default-configs config map. You can set the value of this field to configure a default resolver. With this update, you can define and use a PipelineRun context variable in the pipelineRun.workspaces.subPath configuration. With this update, the ClusterResolver , BundleResolver , HubResolver , and GitResolver features are now available by default. 1.7.1.2. Triggers With this update, Tekton Triggers support the Affinity and TopologySpreadConstraints values in the EventListener specification. You can use these values to configure Kubernetes and custom resources for an EventListener object. This update adds a Slack interceptor that allows you to extract fields by using a slash command in Slack. The extracted fields are sent in the form data section of an HTTP request. 1.7.1.3. Operator With this update, you can configure pruning for each PipelineRun or TaskRun resource by setting a prune-per-resource boolean field in the TektonConfig CR. You can also configure pruning for each PipelineRun or TaskRun resource in a namespace by adding the operator.tekton.dev/prune.prune-per-resource=true annotation to that namespace. With this update, if there are any changes in the OpenShift Container Platform cluster-wide proxy, Operator Lifecycle Manager (OLM) recreates the Red Hat OpenShift Pipelines Operator. With this update, you can disable the pruner feature by setting the value of the config.pruner.disabled field to true in the TektonConfig CR. 1.7.1.4. Tekton Chains With this update, Tekton Chains is now generally available for use. With this update, you can use the skopeo tool with Tekton Chains to generate keys, which are used in the cosign signing scheme. When you upgrade to the Red Hat OpenShift Pipelines Operator 1.11, the Tekton Chains configuration will be overwritten and you must set it again in the TektonConfig CR. 1.7.1.5. Tekton Hub Important Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. 
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This update adds a new resource/<catalog_name>/<kind>/<resource_name>/raw endpoint and a new resourceURLPath field in the resource API response. This update helps you to obtain the latest raw YAML file of the resource. 1.7.1.6. Tekton Results Important Tekton Results is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This update adds Tekton Results to the Tekton Operator as an optional component. 1.7.1.7. Pipelines as Code With this update, Pipelines as Code allows you to expand a custom parameter within your PipelineRun resource by using the params field. You can specify a value for the custom parameter inside the template of the Repository CR. The specified value replaces the custom parameter in your pipeline run. Also, you can define a custom parameter and use its expansion only when specified conditions are compatible with a Common Expression Language (CEL) filter. With this update, you can either rerun a specific pipeline or all pipelines by clicking the Re-run all checks button in the Checks tab of the GitHub interface. This update adds a new tkn pac info command to the Pipelines as Code CLI. As an administrator, you can use the tkn pac info command to obtain the following details about the Pipelines as Code installation: The location where Pipelines as Code is installed. The version number of Pipelines as Code. An overview of the Repository CR created on the cluster and the URL associated with the repository. Details of any installed GitHub applications. With this command, you can also specify a custom GitHub API URL by using the --github-api-url argument. This update enables error detection for all PipelineRun resources by default. Pipelines as Code detects if a PipelineRun resource execution has failed and shows a snippet of the last few lines of the error. For a GitHub application, Pipelines as Code detects error messages in the container logs and exposes them as annotations on a pull request. With this update, you can fetch tasks from a private Tekton Hub instance attached to a private Git repository. To enable this update, Pipelines as Code uses the internal raw URL of the private Tekton Hub instance instead of using the GitHub raw URL. Before this update, Pipelines as Code provided logs that would not include the namespace detail. With this update, Pipelines as Code adds the namespace information to the pipeline logs so that you can filter them based on a namespace and debug easily. With this update, you can define the provenance source from where the PipelineRun resource definition is to be fetched. By default, Pipelines as Code fetches the PipelineRun resource definition from the branch where the event has been triggered. 
Now, you can configure the value of the pipelinerun_provenance setting to default_branch so that the PipelineRun resource definition is fetched from the default branch of the repository as configured on GitHub. With this update, you can extend the scope of the GitHub token at the following levels: Repository-level: Use this level to extend the scope to the repositories that exist in the same namespace in which the original repository exists. Global-level: Use this level to extend the scope to the repositories that exist in a different namespace. With this update, Pipelines as Code triggers a CI pipeline for a pull request created by a user who is not an owner, collaborator, or public member or is not listed in the owner file but has permission to push changes to the repository. With this update, the custom console setting allows you to use custom parameters from a Repository CR. With this update, Pipelines as Code changes all PipelineRun labels to PipelineRun annotations. You can use a PipelineRun annotation to mark a Tekton resource, instead of using a PipelineRun label. With this update, you can use the pac-config-logging config map for watcher and webhook resources, but not for the Pipelines as Code controller. 1.7.2. Breaking changes This update replaces the resource-verification-mode feature flag with a new trusted-resources-verification-no-match-policy flag in the pipeline specification. With this update, you cannot edit the Tekton Chains CR. Instead, edit the TektonConfig CR to configure Tekton Chains. 1.7.3. Deprecated and removed features This update removes support for the PipelineResource commands and references from Tekton CLI: Removal of pipeline resources from cluster tasks Removal of pipeline resources from tasks Removal of pipeline resources from pipelines Removal of resource commands Removal of input and output resources from the clustertask describe command This update removes support for the full embedded status from Tekton CLI. The taskref.bundle and pipelineref.bundle bundles are deprecated and will be removed in a future release. In Red Hat OpenShift Pipelines 1.11, support for the PipelineResource CR has been removed, use the Task CR instead. In Red Hat OpenShift Pipelines 1.11, support for the v1alpha1.Run objects has been removed. You must upgrade the objects from v1alpha1.Run to v1beta1.CustomRun before upgrading to this release. In Red Hat OpenShift Pipelines 1.11, the custom-task-version feature flag has been removed. In Red Hat OpenShift Pipelines 1.11, the pipelinerun.status.taskRuns and pipelinerun.status.runs fields have been removed along with the embedded-status feature flag. Use the pipelinerun.status.childReferences field instead. 1.7.4. Known issues Setting the prune-per-resource boolean field does not delete PipelineRun or TaskRun resources if they were not part of any pipeline or task. Tekton CLI does not show logs of the PipelineRun resources that are created by using resolvers. When you filter your pipeline results based on the order_by=created_time+desc&page_size=1 query, you get zero records without any nextPageToken value in the output. When you set the value of the loglevel.pipelinesascode field to debug , no debugging logs are generated in the Pipelines as Code controller pod. As a workaround, restart the Pipelines as Code controller pod. 1.7.5. Fixed issues Before this update, Pipelines as Code failed to create a PipelineRun resource while detecting the generateName field in the PipelineRun CR. 
With this update, Pipelines as Code supports providing the generateName field in the PipelineRun CR. Before this update, when you created a PipelineRun resource from the web console, all annotations would be copied from the pipeline, causing issues for the running nodes. This update now resolves the issue. This update fixes the tkn pr delete command for the keep flag. Now, if the value of the keep flag is equal to the number of the associated task runs or pipeline runs, then the command returns the exit code 0 along with a message. Before this update, the Tekton Operator did not expose the performance configuration fields for any customizations. With this update, as a cluster administrator, you can customize the following performance configuration fields in the TektonConfig CR based on your needs: disable-ha buckets kube-api-qps kube-api-burst threads-per-controller This update fixes the remote bundle resolver to perform a case-insensitive comparison of the kind field with the dev.tekton.image.kind annotation value in the bundle. Before this update, pods for remote resolvers were terminated because of insufficient memory when you would clone a large Git repository. This update fixes the issue and increases the memory limit for deploying remote resolvers. With this update, task and pipeline resources of v1 type are supported in remote resolution. This update reverts the removal of embedded TaskRun status from the API. The embedded TaskRun status is now available as a deprecated feature to support compatibility with older versions of the client-server. Before this update, all annotations were merged into PipelineRun and TaskRun resources even if they were not required for the execution. With this update, when you merge annotations into PipelineRun and TaskRun resources, the last-applied-configuration annotation is skipped. This update fixes a regression issue and prevents the validation of a skipped task result in pipeline results. For example, if the pipeline result references a skipped PipelineTask resource, then the pipeline result is not emitted and the PipelineRun execution does not fail due to a missing result. This update uses the pod status message to determine the cause of a pod termination. Before this update, the default resolver was not set for the execution of the finally tasks. This update sets the default resolver for the finally tasks. With this update, Red Hat OpenShift Pipelines avoids occasional failures of the TaskRun or PipelineRun execution when you use remote resolution. Before this update, a long pipeline run would be stuck in the running state on the cluster, even after the timeout. This update fixes the issue. This update fixes the tkn pr delete command for correctly using the keep flag. With this update, if the value of the keep flag equals the number of associated task runs or pipeline runs, the tkn pr delete command returns exit code 0 along with a message. 1.7.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.11.1 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.11.1 is available on OpenShift Container Platform 4.12 and later versions. 1.7.6.1. Fixed issues Before this update, a task run could fail with a mount path error message, when a running or pending pod was preempted. With this update, a task run does not fail when the cluster causes a pod to be deleted and re-created. Before this update, a shell script in a task had to be run as root. 
With this update, the shell script image has the non-root user ID set so that you can run a task that includes a shell script, such as the git-clone task, as a non-root user within the pod. Before this update, in Red Hat OpenShift Pipelines 1.11.0, when a pipeline run was defined using Pipelines as Code and the definition in the Git repository referenced the tekton.dev/v1beta1 API version and included a spec.pipelineRef.bundle entry, the kind parameter for the bundle reference was wrongly set to Task. The issue did not exist in earlier versions of Red Hat OpenShift Pipelines. With this update, the kind parameter is set correctly. Before this update, the disable-ha flag was not correctly passed to the tekton-pipelines controller, so the High Availability (HA) feature of Red Hat OpenShift Pipelines could not be enabled. With this update, the disable-ha flag is correctly passed and you can enable the HA feature as required. Before this update, you could not set the URL for Tekton Hub and Artifact Hub for the hub resolver, so you could use only the preset addresses of Tekton Hub and Artifact Hub. With this update, you can configure the URL for Tekton Hub and Artifact Hub for the hub resolver, for example, to use a custom Tekton Hub instance that you installed. With this update, the SHA digest of the git-init image corresponds to version 1.10.5, which is the current released version of the image. Before this update, the tekton-pipelines-controller component used a config map named config-leader-election. This name is the default value for Knative controllers, so the configuration process for OpenShift Pipelines could affect other controllers and vice versa. With this update, the component uses a unique config name, so the configuration process for OpenShift Pipelines does not affect other controllers and is not affected by other controllers. Before this update, when a user without write access to a GitHub repository opened a pull request, Pipelines as Code CI/CD actions would show as skipped in GitHub. With this update, Pipelines as Code CI/CD actions are shown as Pending approval in GitHub. Before this update, Pipelines as Code ran CI/CD actions for every pull request into a branch that matched a configured branch name. With this update, Pipelines as Code runs CI/CD actions only when the source branch of the pull request matches the exact configured branch name. Before this update, metrics for the Pipelines as Code controller were not visible in the OpenShift Container Platform developer console. With this update, metrics for the Pipelines as Code controller are displayed in the developer console. Before this update, in Red Hat OpenShift Pipelines 1.11.0, the Operator always installed Tekton Chains and you could not disable installation of the Tekton Chains component. With this update, you can set the value of the disabled parameter to true in the TektonConfig CR to disable installation of Tekton Chains. Before this update, if you configured Tekton Chains on an older version of OpenShift Pipelines using the TektonChain CR and then upgraded to OpenShift Pipelines version 1.11.0, the configuration information was overwritten. With this update, if you upgrade from an older version of OpenShift Pipelines and Tekton Chains was configured in the same namespace where the TektonConfig is installed (openshift-pipelines), the Tekton Chains configuration information is preserved. 1.7.7.
Release notes for Red Hat OpenShift Pipelines General Availability 1.11.2 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.11.2 is available on OpenShift Container Platform 4.12 and later versions. This update includes an updated version of the tkn command line tool. You can download the updated version of this tool at the following locations:
Linux (x86_64, amd64)
Linux on IBM zSystems and IBM(R) LinuxONE (s390x)
Linux on IBM Power (ppc64le)
Linux on ARM (aarch64, arm64)
Windows
macOS
macOS on ARM
If you installed the tkn command line tool using RPM on Red Hat Enterprise Linux (RHEL), use the yum update command to install the updated version. 1.7.7.1. Fixed issues Before this update, the tkn pac resolve -f command did not detect the existing secret for authentication with the Git repository. With this update, this command successfully detects the secret. With this update, you can use the --v1beta1 flag in the tkn pac resolve command. Use this flag if you want to generate the pipeline run with the v1beta1 API version schema. Before this update, the tkn pr logs command failed to display the logs for a pipeline run if this pipeline run referenced a resolver. With this update, the command displays the logs. With this update, the SHA digest of the git-init image corresponds to version 1.12.1, which is the current released version of the image. With this update, the HTTP/2.0 protocol is not supported for webhooks. All webhook calls to Red Hat OpenShift Pipelines must use the HTTP/1.1 protocol. 1.7.8. Known issues If you use the Bundles resolver to define a pipeline run and then use the tkn pac resolve --v1beta1 command for this pipeline run, the command generates incorrect YAML output. The kind parameter for the bundle is set to Task in the YAML output. As a workaround, you can set the correct value in the YAML data manually. Alternatively, you can use the opc pac resolve --v1beta1 command or use the version of the tkn tool included with OpenShift Pipelines version 1.12.0 or later. 1.7.9. Release notes for Red Hat OpenShift Pipelines General Availability 1.11.3 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.11.3 is available on OpenShift Container Platform 4.11 in addition to 4.12 and later versions. 1.7.9.1. Fixed issues Before this update, if the final task of a pipeline had failed or was skipped, OpenShift Pipelines reported validation errors. With this update, a pipeline can succeed even if its final task fails or is skipped. 1.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.10 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10 is available on OpenShift Container Platform 4.11, 4.12, and 4.13. 1.8.1. New features In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.10. 1.8.1.1. Pipelines With this update, you can specify environment variables in a PipelineRun or TaskRun pod template to override or append the variables that are configured in a task or step. Also, you can specify environment variables in a default pod template to use those variables globally for all PipelineRuns and TaskRuns. This update also adds a new default configuration named forbidden-envs to filter environment variables while propagating from pod templates. With this update, custom tasks in pipelines are enabled by default. Note To disable this feature, set the enable-custom-tasks flag to false in the feature-flags config custom resource.
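As a sketch of the note above, and assuming that you manage feature flags through the TektonConfig CR rather than editing the feature-flags configuration directly, disabling custom tasks might look like the following.

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    enable-custom-tasks: false    # custom tasks are enabled (true) by default in this release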
This update supports the v1beta1.CustomRun API version for custom tasks. This update adds support for the PipelineRun reconciler to create a custom run. For example, custom TaskRuns created from PipelineRuns can now use the v1beta1.CustomRun API version instead of v1alpha1.Run , if the custom-task-version feature flag is set to v1beta1 , instead of the default value v1alpha1 . Note You need to update the custom task controller to listen for the *v1beta1.CustomRun API version instead of *v1alpha1.Run in order to respond to v1beta1.CustomRun requests. This update adds a new retries field to the v1beta1.TaskRun and v1.TaskRun specifications. 1.8.1.2. Triggers With this update, triggers support the creation of Pipelines , Tasks , PipelineRuns , and TaskRuns objects of the v1 API version along with CustomRun objects of the v1beta1 API version. With this update, GitHub Interceptor blocks a pull request trigger from being executed unless invoked by an owner or with a configurable comment by an owner. Note To enable or disable this update, set the value of the githubOwners parameter to true or false in the GitHub Interceptor configuration file. With this update, GitHub Interceptor has the ability to add a comma delimited list of all files that have changed for the push and pull request events. The list of changed files is added to the changed_files property of the event payload in the top-level extensions field. This update changes the MinVersion of TLS to tls.VersionTLS12 so that triggers run on OpenShift Container Platform when the Federal Information Processing Standards (FIPS) mode is enabled. 1.8.1.3. CLI This update adds support to pass a Container Storage Interface (CSI) file as a workspace at the time of starting a Task , ClusterTask or Pipeline . This update adds v1 API support to all CLI commands associated with task, pipeline, pipeline run, and task run resources. Tekton CLI works with both v1beta1 and v1 APIs for these resources. This update adds support for an object type parameter in the start and describe commands. 1.8.1.4. Operator This update adds a default-forbidden-env parameter in optional pipeline properties. The parameter includes forbidden environment variables that should not be propagated if provided through pod templates. This update adds support for custom logos in Tekton Hub UI. To add a custom logo, set the value of the customLogo parameter to base64 encoded URI of logo in the Tekton Hub CR. This update increments the version number of the git-clone task to 0.9. 1.8.1.5. Tekton Chains Important Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This update adds annotations and labels to the PipelineRun and TaskRun attestations. This update adds a new format named slsa/v1 , which generates the same provenance as the one generated when requesting in the in-toto format. With this update, Sigstore features are moved out from the experimental features. With this update, the predicate.materials function includes image URI and digest information from all steps and sidecars for a TaskRun object. 
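For the slsa/v1 format mentioned above, a minimal sketch of a TektonChain CR follows. The property key mirrors the Tekton Chains config map naming; whether you set it in the TektonChain CR or through another resource depends on your Operator version, so treat this as an assumption rather than the definitive configuration.

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonChain
metadata:
  name: chain
spec:
  targetNamespace: openshift-pipelines
  artifacts.taskrun.format: slsa/v1    # generates the same provenance as the in-toto format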
1.8.1.6. Tekton Hub Important Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. This update supports installing, upgrading, or downgrading Tekton resources of the v1 API version on the cluster. This update supports adding a custom logo in place of the Tekton Hub logo in the UI. This update extends the tkn hub install command functionality by adding a --type artifact flag, which fetches resources from the Artifact Hub and installs them on your cluster. This update adds the support tier, catalog, and org information as labels to the resources that are installed from Artifact Hub to your cluster. 1.8.1.7. Pipelines as Code This update enhances incoming webhook support. For a GitHub application installed on the OpenShift Container Platform cluster, you do not need to provide the git_provider specification for an incoming webhook. Instead, Pipelines as Code detects the secret and uses it for the incoming webhook. With this update, you can use the same token to fetch remote tasks from the same host on GitHub with a non-default branch. With this update, Pipelines as Code supports Tekton v1 templates. You can have v1 and v1beta1 templates, which Pipelines as Code reads for pipeline run generation. The pipeline run is created as v1 on the cluster. Before this update, the OpenShift console UI would use a hardcoded pipeline run template as a fallback template when a runtime template was not found in the OpenShift namespace. This update in the pipelines-as-code config map provides a new default pipeline run template named pipelines-as-code-template-default for the console to use. With this update, Pipelines as Code supports Tekton Pipelines 0.44.0 minimal status. With this update, Pipelines as Code supports the Tekton v1 API, which means Pipelines as Code is now compatible with Tekton v0.44 and later. With this update, you can configure custom console dashboards in addition to configuring a console for OpenShift and Tekton dashboards for k8s. With this update, Pipelines as Code detects the installation of a GitHub application initiated using the tkn pac create repo command and does not require a GitHub webhook if it was installed globally. Before this update, if there was an error on a PipelineRun execution and not on the tasks attached to the PipelineRun, Pipelines as Code would not report the failure properly. With this update, Pipelines as Code reports the error properly on the GitHub checks when a PipelineRun could not be created. With this update, Pipelines as Code includes a target_namespace variable, which expands to the currently running namespace where the PipelineRun is executed. With this update, Pipelines as Code lets you bypass GitHub enterprise questions in the CLI bootstrap GitHub application. With this update, Pipelines as Code does not report errors when the Repository CR is not found. With this update, Pipelines as Code reports an error if multiple pipeline runs with the same name are found. 1.8.2. Breaking changes With this update, the prior version of the tkn command is not compatible with Red Hat OpenShift Pipelines 1.10.
This update removes support for Cluster and CloudEvent pipeline resources from Tekton CLI. You cannot create pipeline resources by using the tkn pipelineresource create command. Also, pipeline resources are no longer supported in the start command of a task, cluster task, or pipeline. This update removes tekton as a provenance format from Tekton Chains. 1.8.3. Deprecated and removed features In Red Hat OpenShift Pipelines 1.10, the ClusterTask commands are now deprecated and are planned to be removed in a future release. The tkn task create command is also deprecated with this update. In Red Hat OpenShift Pipelines 1.10, the flags -i and -o that were used with the tkn task start command are now deprecated because the v1 API does not support pipeline resources. In Red Hat OpenShift Pipelines 1.10, the flag -r that was used with the tkn pipeline start command is deprecated because the v1 API does not support pipeline resources. The Red Hat OpenShift Pipelines 1.10 update sets the openshiftDefaultEmbeddedStatus parameter to both with full and minimal embedded status. The flag to change the default embedded status is also deprecated and will be removed. In addition, the pipeline default embedded status will be changed to minimal in a future release. 1.8.4. Known issues This update includes the following backward incompatible changes: Removal of the PipelineResources cluster Removal of the PipelineResources cloud event If the pipelines metrics feature does not work after a cluster upgrade, run the following command as a workaround: USD oc get tektoninstallersets.operator.tekton.dev | awk '/pipeline-main-static/ {print USD1}' | xargs oc delete tektoninstallersets With this update, usage of external databases, such as the Crunchy PostgreSQL is not supported on IBM Power, IBM zSystems, and IBM(R) LinuxONE. Instead, use the default Tekton Hub database. 1.8.5. Fixed issues Before this update, the opc pac command generated a runtime error instead of showing any help. This update fixes the opc pac command to show the help message. Before this update, running the tkn pac create repo command needed the webhook details for creating a repository. With this update, the tkn-pac create repo command does not configure a webhook when your GitHub application is installed. Before this update, Pipelines as Code would not report a pipeline run creation error when Tekton Pipelines had issues creating the PipelineRun resource. For example, a non-existing task in a pipeline run would show no status. With this update, Pipelines as Code shows the proper error message coming from Tekton Pipelines along with the task that is missing. This update fixes UI page redirection after a successful authentication. Now, you are redirected to the same page where you had attempted to log in to Tekton Hub. This update fixes the list command with these flags, --all-namespaces and --output=yaml , for a cluster task, an individual task, and a pipeline. This update removes the forward slash in the end of the repo.spec.url URL so that it matches the URL coming from GitHub. Before this update, the marshalJSON function would not marshal a list of objects. With this update, the marshalJSON function marshals the list of objects. With this update, Pipelines as Code lets you bypass GitHub enterprise questions in the CLI bootstrap GitHub application. This update fixes the GitHub collaborator check when your repository has more than 100 users. 
With this update, the sign and verify commands for a task or pipeline now work without a Kubernetes configuration file. With this update, the Tekton Operator cleans up leftover pruner cron jobs if the pruner has been skipped on a namespace. Before this update, the API ConfigMap object would not be updated with a user-configured value for the catalog refresh interval. This update fixes the CATALOG_REFRESH_INTERVAL API in the Tekton Hub CR. This update fixes reconciling of PipelineRunStatus when changing the EmbeddedStatus feature flag. This update resets the following parameters: the status.runs and status.taskruns parameters to nil with minimal EmbeddedStatus, and the status.childReferences parameter to nil with full EmbeddedStatus. This update adds a conversion configuration to the ResolutionRequest CRD. This update properly configures conversion from the v1alpha1.ResolutionRequest request to the v1beta1.ResolutionRequest request. This update checks for duplicate workspaces associated with a pipeline task. This update fixes the default value for enabling resolvers in the code. This update fixes TaskRef and PipelineRef name conversion when using a resolver. 1.8.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.1 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.1 is available on OpenShift Container Platform 4.11, 4.12, and 4.13. 1.8.6.1. Fixed issues for Pipelines as Code Before this update, if the source branch information coming from the payload included refs/heads/ but the user-configured target branch only included the branch name, main, in a CEL expression, the push request would fail. With this update, Pipelines as Code passes the push request and triggers a pipeline if either the base branch or the target branch has refs/heads/ in the payload. Before this update, when a PipelineRun object could not be created, the error received from the Tekton controller was not reported to the user. With this update, Pipelines as Code reports the error messages to the GitHub interface so that users can troubleshoot the errors. Pipelines as Code also reports the errors that occurred during pipeline execution. With this update, Pipelines as Code does not echo a secret to the GitHub checks interface when it fails to create the secret on the OpenShift Container Platform cluster because of an infrastructure issue. This update removes the deprecated APIs that are no longer in use from Red Hat OpenShift Pipelines. 1.8.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.2 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.2 is available on OpenShift Container Platform 4.11, 4.12, and 4.13. 1.8.7.1. Fixed issues Before this update, an issue in the Tekton Operator prevented the user from setting the value of the enable-api-fields flag to beta. This update fixes the issue. Now, you can set the value of the enable-api-fields flag to beta in the TektonConfig CR. 1.8.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.3 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.3 is available on OpenShift Container Platform 4.11, 4.12, and 4.13. 1.8.8.1. Fixed issues Before this update, the Tekton Operator did not expose the performance configuration fields for any customizations.
With this update, as a cluster administrator, you can customize the following performance configuration fields in the TektonConfig CR based on your needs: disable-ha buckets kube-api-qps kube-api-burst threads-per-controller 1.8.9. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.4 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.4 is available on OpenShift Container Platform 4.11, 4.12, and 4.13. 1.8.9.1. Fixed issues This update fixes the bundle resolver conversion issue for the PipelineRef field in a pipeline run. Now, the conversion feature sets the value of the kind field to Pipeline after conversion. Before this update, the pipelinerun.timeouts field was reset to the timeouts.pipeline value, ignoring the timeouts.tasks and timeouts.finally values. This update fixes the issue and sets the correct default timeout value for a PipelineRun resource. Before this update, the controller logs contained unnecessary data. This update fixes the issue. 1.8.10. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.5 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.5 is available on OpenShift Container Platform 4.10 in addition to 4.11, 4.12, and 4.13. Important Red Hat OpenShift Pipelines 1.10.5 is only available in the pipelines-1.10 channel on OpenShift Container Platform 4.10, 4.11, 4.12, and 4.13. It is not available in the latest channel for any OpenShift Container Platform version. 1.8.10.1. Fixed issues Before this update, huge pipeline runs were not getting listed or deleted using the oc and tkn commands. This update mitigates this issue by compressing the huge annotations that were causing this problem. Remember that if the pipeline runs are still too huge after compression, then the same error still recurs. Before this update, only the pod template specified in the pipelineRun.spec.taskRunSpecs[].podTemplate object would be considered for a pipeline run. With this update, the pod template specified in the pipelineRun.spec.podTemplate object is also considered and merged with the template specified in the pipelineRun.spec.taskRunSpecs[].podTemplate object. 1.8.11. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.6 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.6 is available on OpenShift Container Platform 4.10, 4.11, 4.12, and 4.13. This update includes an updated version of the tkn command line tool. You can download the updated version of this tool at the following locations: Linux (x86_64, amd64) Linux on IBM zSystems and IBM(R) LinuxONE (s390x) Linux on IBM Power (ppc64le) Linux on ARM (aarch64, arm64) Windows macOS macOS on ARM If you installed the tkn command line tool using RPM on Red Hat Enterprise Linux (RHEL), use the yum update command to install the updated version. 1.8.11.1. Known issues If you enter the tkn task start or tkn clustertask start command, the tkn command line utility displays an error message. As a workaround, to start tasks or cluster tasks using the command line, use the version of the tkn utility shipped with OpenShift Pipelines 1.11 or a later version. 1.8.11.2. Fixed issues With this update, the S2I cluster task uses a General Availability container image. With this update, the HTTP/2.0 protocol is not supported for webhooks. All webhook calls to Red Hat OpenShift Pipelines must use the HTTP/1.1 protocol. 1.9. 
Release notes for Red Hat OpenShift Pipelines General Availability 1.9 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9 is available on OpenShift Container Platform 4.11, 4.12, and 4.13. 1.9.1. New features In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.9. 1.9.1.1. Pipelines With this update, you can specify pipeline parameters and results in arrays and object dictionary forms. This update provides support for Container Storage Interface (CSI) and projected volumes for your workspace. With this update, you can specify the stdoutConfig and stderrConfig parameters when defining pipeline steps. Defining these parameters helps to capture standard output and standard error, associated with steps, to local files. With this update, you can add variables in the steps[].onError event handler, for example, USD(params.CONTINUE) . With this update, you can use the output from the finally task in the PipelineResults definition. For example, USD(finally.<pipelinetask-name>.result.<result-name>) , where <pipelinetask-name> denotes the pipeline task name and <result-name> denotes the result name. This update supports task-level resource requirements for a task run. With this update, you do not need to recreate parameters that are shared, based on their names, between a pipeline and the defined tasks. This update is part of a developer preview feature. This update adds support for remote resolution, such as built-in git, cluster, bundle, and hub resolvers. 1.9.1.2. Triggers This update adds the Interceptor CRD to define NamespacedInterceptor . You can use NamespacedInterceptor in the kind section of interceptors reference in triggers or in the EventListener specification. This update enables CloudEvents . With this update, you can configure the webhook port number when defining a trigger. This update supports using trigger eventID as input to TriggerBinding . This update supports validation and rotation of certificates for the ClusterInterceptor server. Triggers perform certificate validation for core interceptors and rotate a new certificate to ClusterInterceptor when its certificate expires. 1.9.1.3. CLI This update supports showing annotations in the describe command. This update supports showing pipeline, tasks, and timeout in the pr describe command. This update adds flags to provide pipeline, tasks, and timeout in the pipeline start command. This update supports showing the presence of workspace, optional or mandatory, in the describe command of a task and pipeline. This update adds the timestamps flag to show logs with a timestamp. This update adds a new flag --ignore-running-pipelinerun , which ignores the deletion of TaskRun associated with PipelineRun . This update adds support for experimental commands. This update also adds experimental subcommands, sign and verify to the tkn CLI tool. This update makes the Z shell (Zsh) completion feature usable without generating any files. This update introduces a new CLI tool called opc . It is anticipated that an upcoming release will replace the tkn CLI tool with opc . Important The new CLI tool opc is a Technology Preview feature. opc will be a replacement for tkn with additional Red Hat OpenShift Pipelines specific features, which do not necessarily fit in tkn . 1.9.1.4. Operator With this update, Pipelines as Code is installed by default. 
You can disable Pipelines as Code by using the -p flag:

$ oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": false}}}}}'

With this update, you can also modify Pipelines as Code configurations in the TektonConfig CRD. With this update, if you disable the developer perspective, the Operator does not install developer console-related custom resources. This update includes ClusterTriggerBinding support for Bitbucket Server and Bitbucket Cloud and helps you to reuse a TriggerBinding across your entire cluster. 1.9.1.5. Resolvers Important Resolvers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. With this update, you can configure pipeline resolvers in the TektonConfig CRD. You can enable or disable these pipeline resolvers: enable-bundles-resolver, enable-cluster-resolver, enable-git-resolver, and enable-hub-resolver.

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    enable-bundles-resolver: true
    enable-cluster-resolver: true
    enable-git-resolver: true
    enable-hub-resolver: true
...

You can also provide resolver-specific configurations in TektonConfig. For example, you can define the following fields in the map[string]string format to set configurations for individual resolvers:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    bundles-resolver-config:
      default-service-account: pipelines
    cluster-resolver-config:
      default-namespace: test
    git-resolver-config:
      server-url: localhost.com
    hub-resolver-config:
      default-tekton-hub-catalog: tekton
...

1.9.1.6. Tekton Chains Important Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. Before this update, only Open Container Initiative (OCI) images were supported as outputs of TaskRun in the in-toto provenance agent. This update adds in-toto provenance metadata as outputs with the suffixes ARTIFACT_URI and ARTIFACT_DIGEST. Before this update, only TaskRun attestations were supported. This update adds support for PipelineRun attestations as well. This update adds support for Tekton Chains to get the imgPullSecret parameter from the pod template. This update helps you to configure repository authentication based on each pipeline run or task run without modifying the service account. 1.9.1.7. Tekton Hub Important Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . With this update, as an administrator, you can use an external database, such as Crunchy PostgreSQL with Tekton Hub, instead of using the default Tekton Hub database. This update helps you to perform the following actions: Specify the coordinates of an external database to be used with Tekton Hub Disable the default Tekton Hub database deployed by the Operator This update removes the dependency of config.yaml from external Git repositories and moves the complete configuration data into the API ConfigMap . This update helps an administrator to perform the following actions: Add the configuration data, such as categories, catalogs, scopes, and defaultScopes in the Tekton Hub custom resource. Modify Tekton Hub configuration data on the cluster. All modifications are preserved upon Operator upgrades. Update the list of catalogs for Tekton Hub Change the categories for Tekton Hub Note If you do not add any configuration data, you can use the default data in the API ConfigMap for Tekton Hub configurations. 1.9.1.8. Pipelines as Code This update adds support for concurrency limit in the Repository CRD to define the maximum number of PipelineRuns running for a repository at a time. The PipelineRuns from a pull request or a push event are queued in alphabetical order. This update adds a new command tkn pac logs for showing the logs of the latest pipeline run for a repository. This update supports advanced event matching on file path for push and pull requests to GitHub and GitLab. For example, you can use the Common Expression Language (CEL) to run a pipeline only if a path has changed for any markdown file in the docs directory. ... annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && "docs/*.md".pathChanged() With this update, you can reference a remote pipeline in the pipelineRef: object using annotations. With this update, you can auto-configure new GitHub repositories with Pipelines as Code, which sets up a namespace and creates a Repository CRD for your GitHub repository. With this update, Pipelines as Code generates metrics for PipelineRuns with provider information. This update provides the following enhancements for the tkn-pac plugin: Detects running pipelines correctly Fixes showing duration when there is no failure completion time Shows an error snippet and highlights the error regular expression pattern in the tkn-pac describe command Adds the use-real-time switch to the tkn-pac ls and tkn-pac describe commands Imports the tkn-pac logs documentation Shows pipelineruntimeout as a failure in the tkn-pac ls and tkn-pac describe commands. Show a specific pipeline run failure with the --target-pipelinerun option. With this update, you can view the errors for your pipeline run in the form of a version control system (VCS) comment or a small snippet in the GitHub checks. With this update, Pipelines as Code optionally can detect errors inside the tasks if they are of a simple format and add those tasks as annotations in GitHub. This update is part of a developer preview feature. 
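The concurrency limit described above is set on the Repository custom resource. The following is a minimal sketch; the concurrency_limit field name is an assumption based on the upstream Pipelines as Code project, so verify it against the Repository CRD shipped with your version.

apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: example-repo
  namespace: example-pipelines
spec:
  url: "https://github.com/example-org/example-repo"
  # Assumption: at most two PipelineRuns for this repository run at a time;
  # additional runs are queued in alphabetical order.
  concurrency_limit: 2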
This update adds the following new commands: tkn-pac webhook add : Adds a webhook to project repository settings and updates the webhook.secret key in the existing k8s Secret object without updating the repository. tkn-pac webhook update-token : Updates provider token for an existing k8s Secret object without updating the repository. This update enhances functionality of the tkn-pac create repo command, which creates and configures webhooks for GitHub, GitLab, and BitbucketCloud along with creating repositories. With this update, the tkn-pac describe command shows the latest fifty events in a sorted order. This update adds the --last option to the tkn-pac logs command. With this update, the tkn-pac resolve command prompts for a token on detecting a git_auth_secret in the file template. With this update, Pipelines as Code hides secrets from log snippets to avoid exposing secrets in the GitHub interface. With this update, the secrets automatically generated for git_auth_secret are an owner reference with PipelineRun . The secrets get cleaned with the PipelineRun , not after the pipeline run execution. This update adds support to cancel a pipeline run with the /cancel comment. Before this update, the GitHub apps token scoping was not defined and tokens would be used on every repository installation. With this update, you can scope the GitHub apps token to the target repository using the following parameters: secret-github-app-token-scoped : Scopes the app token to the target repository, not to every repository the app installation has access to. secret-github-app-scope-extra-repos : Customizes the scoping of the app token with an additional owner or repository. With this update, you can use Pipelines as Code with your own Git repositories that are hosted on GitLab. With this update, you can access pipeline execution details in the form of kubernetes events in your namespace. These details help you to troubleshoot pipeline errors without needing access to admin namespaces. This update supports authentication of URLs in the Pipelines as Code resolver with the Git provider. With this update, you can set the name of the hub catalog by using a setting in the pipelines-as-code config map. With this update, you can set the maximum and default limits for the max-keep-run parameter. This update adds documents on how to inject custom Secure Sockets Layer (SSL) certificates in Pipelines as Code to let you connect to provider instance with custom certificates. With this update, the PipelineRun resource definition has the log URL included as an annotation. For example, the tkn-pac describe command shows the log link when describing a PipelineRun . With this update, tkn-pac logs show repository name, instead of PipelineRun name. 1.9.2. Breaking changes With this update, the Conditions custom resource definition (CRD) type has been removed. As an alternative, use the WhenExpressions instead. With this update, support for tekton.dev/v1alpha1 API pipeline resources, such as Pipeline, PipelineRun, Task, Clustertask, and TaskRun has been removed. With this update, the tkn-pac setup command has been removed. Instead, use the tkn-pac webhook add command to re-add a webhook to an existing Git repository. And use the tkn-pac webhook update-token command to update the personal provider access token for an existing Secret object in the Git repository. With this update, a namespace that runs a pipeline with default settings does not apply the pod-security.kubernetes.io/enforce:privileged label to a workload. 1.9.3. 
Deprecated and removed features In the Red Hat OpenShift Pipelines 1.9.0 release, ClusterTasks are deprecated and planned to be removed in a future release. As an alternative, you can use Cluster Resolver. In the Red Hat OpenShift Pipelines 1.9.0 release, the use of the triggers and the namespaceSelector fields in a single EventListener specification is deprecated and planned to be removed in a future release. You can use these fields in different EventListener specifications successfully. In the Red Hat OpenShift Pipelines 1.9.0 release, the tkn pipelinerun describe command does not display timeouts for the PipelineRun resource. In the Red Hat OpenShift Pipelines 1.9.0 release, the PipelineResource custom resource (CR) is deprecated. The PipelineResource CR was a Tech Preview feature and part of the tekton.dev/v1alpha1 API. In the Red Hat OpenShift Pipelines 1.9.0 release, custom image parameters from cluster tasks are deprecated. As an alternative, you can copy a cluster task and use your custom image in it. 1.9.4. Known issues The chains-secret and chains-config config maps are removed after you uninstall the Red Hat OpenShift Pipelines Operator. As they contain user data, they should be preserved and not deleted. When running the tkn pac set of commands on Windows, you may receive the following error message: Command finished with error: not supported by Windows. Workaround: Set the NO_COLOR environment variable to true. Running the tkn pac resolve -f <filename> | oc create -f command may not provide expected results if the tkn pac resolve command uses a templated parameter value to function. Workaround: To mitigate this issue, save the output of tkn pac resolve in a temporary file by running the tkn pac resolve -f <filename> -o tempfile.yaml command and then run the oc create -f tempfile.yaml command. For example, tkn pac resolve -f <filename> -o /tmp/pull-request-resolved.yaml && oc create -f /tmp/pull-request-resolved.yaml. 1.9.5. Fixed issues Before this update, after replacing an empty array, the original array returned an empty string, rendering the parameters inside it invalid. With this update, this issue is resolved and the original array returns as empty. Before this update, if duplicate secrets were present in a service account for a pipeline run, task pod creation failed. With this update, this issue is resolved and the task pod is created successfully even if duplicate secrets are present in the service account. Before this update, by looking at the TaskRun's spec.StatusMessage field, users could not distinguish whether the TaskRun had been cancelled by the user or by a PipelineRun that it was part of. With this update, this issue is resolved and users can distinguish the status of the TaskRun by looking at the TaskRun's spec.StatusMessage field. Before this update, webhook validation was removed on deletion of old versions of invalid objects. With this update, this issue is resolved. Before this update, if you set the timeouts.pipeline parameter to 0, you could not set the timeouts.tasks parameter or the timeouts.finally parameter. This update resolves the issue. Now, when you set the timeouts.pipeline parameter value, you can set the value of either the timeouts.tasks parameter or the timeouts.finally parameter. For example:

kind: PipelineRun
spec:
  timeouts:
    pipeline: "0"  # No timeout
    tasks: "0h3m0s"

Before this update, a race condition could occur if another tool updated labels or annotations on a PipelineRun or TaskRun.
With this update, this issue is resolved and you can merge labels or annotations. Before this update, log keys did not have the same keys as in pipelines controllers. With this update, this issue has been resolved and the log keys have been updated to match the log stream of pipeline controllers. The keys in logs have been changed from "ts" to "timestamp", from "level" to "severity", and from "message" to "msg". Before this update, if a PipelineRun was deleted with an unknown status, an error message was not generated. With this update, this issue is resolved and an error message is generated. Before this update, to access bundle commands like list and push , it was required to use the kubeconfig file . With this update, this issue has been resolved and the kubeconfig file is not required to access bundle commands. Before this update, if the parent PipelineRun was running while deleting TaskRuns, then TaskRuns would be deleted. With this update, this issue is resolved and TaskRuns are not getting deleted if the parent PipelineRun is running. Before this update, if the user attempted to build a bundle with more objects than the pipeline controller permitted, the Tekton CLI did not display an error message. With this update, this issue is resolved and the Tekton CLI displays an error message if the user attempts to build a bundle with more objects than the limit permitted in the pipeline controller. Before this update, if namespaces were removed from the cluster, then the operator did not remove namespaces from the ClusterInterceptor ClusterRoleBinding subjects. With this update, this issue has been resolved, and the operator removes the namespaces from the ClusterInterceptor ClusterRoleBinding subjects. Before this update, the default installation of the Red Hat OpenShift Pipelines Operator resulted in the pipelines-scc-rolebinding security context constraint (SCC) role binding resource remaining in the cluster. With this update, the default installation of the Red Hat OpenShift Pipelines Operator results in the pipelines-scc-rolebinding security context constraint (SCC) role binding resource resource being removed from the cluster. Before this update, Pipelines as Code did not get updated values from the Pipelines as Code ConfigMap object. With this update, this issue is fixed and the Pipelines as Code ConfigMap object looks for any new changes. Before this update, Pipelines as Code controller did not wait for the tekton.dev/pipeline label to be updated and added the checkrun id label, which would cause race conditions. With this update, the Pipelines as Code controller waits for the tekton.dev/pipeline label to be updated and then adds the checkrun id label, which helps to avoid race conditions. Before this update, the tkn-pac create repo command did not override a PipelineRun if it already existed in the git repository. With this update, tkn-pac create command is fixed to override a PipelineRun if it exists in the git repository and this resolves the issue successfully. Before this update, the tkn pac describe command did not display reasons for every message. With this update, this issue is fixed and the tkn pac describe command displays reasons for every message. Before this update, a pull request failed if the user in the annotation provided values by using a regex form, for example, refs/head/rel-* . The pull request failed because it was missing refs/heads in its base branch. With this update, the prefix is added and checked that it matches. 
This resolves the issue and the pull request is successful. 1.9.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.9.1 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.1 is available on OpenShift Container Platform 4.11, 4.12, and 4.13. 1.9.7. Fixed issues Before this update, the tkn pac repo list command did not run on Microsoft Windows. This update fixes the issue, and now you can run the tkn pac repo list command on Microsoft Windows. Before this update, the Pipelines as Code watcher did not receive all the configuration change events. With this update, the Pipelines as Code watcher is updated, and it no longer misses configuration change events. Before this update, the pods created by Pipelines as Code, such as TaskRuns or PipelineRuns, could not access custom certificates exposed by the user in the cluster. This update fixes the issue, and you can now access custom certificates from the TaskRun or PipelineRun pods in the cluster. Before this update, on a cluster enabled with FIPS, the tekton-triggers-core-interceptors core interceptor used in the Trigger resource did not function after the Pipelines Operator was upgraded to version 1.9. This update resolves the issue. Now, OpenShift uses MinTLS 1.2 for all its components. As a result, the tekton-triggers-core-interceptors core interceptor updates to TLS version 1.2 and its functionality runs accurately. Before this update, when using a pipeline run with an internal OpenShift image registry, the URL to the image had to be hardcoded in the pipeline run definition. For example:

...
  - name: IMAGE_NAME
    value: 'image-registry.openshift-image-registry.svc:5000/<test_namespace>/<test_pipelinerun>'
...

When using a pipeline run in the context of Pipelines as Code, such hardcoded values prevented the pipeline run definitions from being used in different clusters and namespaces. With this update, you can use dynamic template variables instead of hardcoding the values for namespaces and pipeline run names to generalize pipeline run definitions. For example:

...
  - name: IMAGE_NAME
    value: 'image-registry.openshift-image-registry.svc:5000/{{ target_namespace }}/$(context.pipelineRun.name)'
...

Before this update, Pipelines as Code used the same GitHub token to fetch a remote task available on the same host only on the default GitHub branch. This update resolves the issue. Now Pipelines as Code uses the same GitHub token to fetch a remote task from any GitHub branch. 1.9.8. Known issues The value for CATALOG_REFRESH_INTERVAL, a field in the Hub API ConfigMap object used in the Tekton Hub CR, is not getting updated with a custom value provided by the user. Workaround: None. You can track the issue SRVKP-2854. 1.9.9. Breaking changes With this update, an OLM misconfiguration issue has been introduced, which prevents the upgrade of OpenShift Container Platform. This issue will be fixed in a future release. 1.9.10. Release notes for Red Hat OpenShift Pipelines General Availability 1.9.2 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.2 is available on OpenShift Container Platform 4.11, 4.12, and 4.13. 1.9.11. Fixed issues Before this update, an OLM misconfiguration issue had been introduced in the previous release, which prevented the upgrade of OpenShift Container Platform. With this update, this misconfiguration issue has been fixed. 1.9.12.
Release notes for Red Hat OpenShift Pipelines General Availability 1.9.3 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.3 is available on OpenShift Container Platform 4.10 in addition to 4.11, 4.12, and 4.13. 1.9.13. Fixed issues This update fixes the performance issues for huge pipelines. Now, the CPU usage is reduced by 61% and the memory usage is reduced by 44%. Before this update, a pipeline run would fail if a task did not run because of its when expression. This update fixes the issue by preventing the validation of a skipped task result in pipeline results. Now, the pipeline result is not emitted and the pipeline run does not fail because of a missing result. This update fixes the pipelineref.bundle conversion to the bundle resolver for the v1beta1 API. Now, the conversion feature sets the value of the kind field to Pipeline after conversion. Before this update, an issue in the OpenShift Pipelines Operator prevented the user from setting the value of the spec.pipeline.enable-api-fields field to beta . This update fixes the issue. Now, you can set the value to beta along with alpha and stable in the TektonConfig custom resource. Before this update, when Pipelines as Code could not create a secret due to a cluster error, it would show the temporary token on the GitHub check run, which is public. This update fixes the issue. Now, the token is no longer displayed on the GitHub checks interface when the creation of the secret fails. 1.9.14. Known issues There is currently a known issue with the stop option for pipeline runs in the OpenShift Container Platform web console. The stop option in the Actions drop-down list is not working as expected and does not cancel the pipeline run. There is currently a known issue with upgrading to OpenShift Pipelines version 1.9.x due to a failing custom resource definition conversion. Workaround: Before upgrading to OpenShift Pipelines version 1.9.x, perform the step mentioned in the solution on the Red Hat Customer Portal. 1.10. Release notes for Red Hat OpenShift Pipelines General Availability 1.8 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8 is available on OpenShift Container Platform 4.10, 4.11, and 4.12. 1.10.1. New features In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.8. 1.10.1.1. Pipelines With this update, you can run Red Hat OpenShift Pipelines GA 1.8 and later on an OpenShift Container Platform cluster that is running on ARM hardware. This includes support for ClusterTask resources and the tkn CLI tool. Important Running Red Hat OpenShift Pipelines on ARM hardware is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This update implements Step and Sidecar overrides for TaskRun resources. This update adds minimal TaskRun and Run statuses within PipelineRun statuses. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha . 
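For reference, enabling the alpha API fields mentioned above is done in the pipeline section of the TektonConfig custom resource. The following is a minimal example:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    # Opt in to alpha features such as minimal TaskRun and Run statuses.
    enable-api-fields: alpha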
With this update, the graceful termination of pipeline runs feature is promoted from an alpha feature to a stable feature. As a result, the previously deprecated PipelineRunCancelled status remains deprecated and is planned to be removed in a future release. Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition. With this update, you can specify the workspace for a pipeline task by using the name of the workspace. This change makes it easier to specify a shared workspace for a pair of Pipeline and PipelineTask resources. You can also continue to map workspaces explicitly. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha . With this update, parameters in embedded specifications are propagated without mutations. With this update, you can specify the required metadata of a Task resource referenced by a PipelineRun resource by using annotations and labels. This way, Task metadata that depends on the execution context is available during the pipeline run. This update adds support for object or dictionary types in params and results values. This change affects backward compatibility and sometimes breaks forward compatibility, such as using an earlier client with a later Red Hat OpenShift Pipelines version. This update changes the ArrayOrStruct structure, which affects projects that use the Go language API as a library. This update adds a SkippingReason value to the SkippedTasks field of the PipelineRun status fields so that users know why a given PipelineTask was skipped. This update supports an alpha feature in which you can use an array type for emitting results from a Task object. The result type is changed from string to ArrayOrString . For example, a task can specify a type to produce an array result: kind: Task apiVersion: tekton.dev/v1beta1 metadata: name: write-array annotations: description: | A simple task that writes array spec: results: - name: array-results type: array description: The array results ... Additionally, you can run a task script to populate the results with an array: USD echo -n "[\"hello\",\"world\"]" | tee USD(results.array-results.path) To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha . This feature is in progress and is part of TEP-0076. 1.10.1.2. Triggers This update transitions the TriggerGroups field in the EventListener specification from an alpha feature to a stable feature. Using this field, you can specify a set of interceptors before selecting and running a group of triggers. Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition. With this update, the Trigger resource supports end-to-end secure connections by running the ClusterInterceptor server using HTTPS. 1.10.1.3. CLI With this update, you can use the tkn taskrun export command to export a live task run from a cluster to a YAML file, which you can use to import the task run to another cluster. With this update, you can add the -o name flag to the tkn pipeline start command to print the name of the pipeline run right after it starts. This update adds a list of available plugins to the output of the tkn --help command. 
With this update, while deleting a pipeline run or task run, you can use both the --keep and --keep-since flags together. With this update, you can use Cancelled as the value of the spec.status field rather than the deprecated PipelineRunCancelled value. 1.10.1.4. Operator With this update, as an administrator, you can configure your local Tekton Hub instance to use a custom database rather than the default database. With this update, as a cluster administrator, if you enable your local Tekton Hub instance, it periodically refreshes the database so that changes in the catalog appear in the Tekton Hub web console. You can adjust the period between refreshes. Previously, to add the tasks and pipelines in the catalog to the database, you performed that task manually or set up a cron job to do it for you. With this update, you can install and run a Tekton Hub instance with minimal configuration. This way, you can start working with your teams to decide which additional customizations they might want. This update adds GIT_SSL_CAINFO to the git-clone task so you can clone secured repositories. 1.10.1.5. Tekton Chains Important Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . With this update, you can log in to a vault by using OIDC rather than a static token. This change means that Spire can generate the OIDC credential so that only trusted workloads are allowed to log in to the vault. Additionally, you can pass the vault address as a configuration value rather than inject it as an environment variable. The chains-config config map for Tekton Chains in the openshift-pipelines namespace is automatically reset to default after upgrading the Red Hat OpenShift Pipelines Operator because directly updating the config map is not supported when installed by using the Red Hat OpenShift Pipelines Operator. However, with this update, you can configure Tekton Chains by using the TektonChain custom resource. This feature enables your configuration to persist after upgrading, unlike the chains-config config map, which gets overwritten during upgrades. 1.10.1.6. Tekton Hub Important Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . With this update, if you install a fresh instance of Tekton Hub by using the Operator, the Tekton Hub login is disabled by default. To enable the login and rating features, you must create the Hub API secret while installing Tekton Hub. Note Because Tekton Hub login was enabled by default in Red Hat OpenShift Pipelines 1.7, if you upgrade the Operator, the login is enabled by default in Red Hat OpenShift Pipelines 1.8. 
To disable this login, see Disabling Tekton Hub login after upgrading from OpenShift Pipelines 1.7.x --> 1.8.x With this update, as an administrator, you can configure your local Tekton Hub instance to use a custom PostgreSQL 13 database rather than the default database. To do so, create a Secret resource named tekton-hub-db . For example: apiVersion: v1 kind: Secret metadata: name: tekton-hub-db labels: app: tekton-hub-db type: Opaque stringData: POSTGRES_HOST: <hostname> POSTGRES_DB: <database_name> POSTGRES_USER: <username> POSTGRES_PASSWORD: <password> POSTGRES_PORT: <listening_port_number> With this update, you no longer need to log in to the Tekton Hub web console to add resources from the catalog to the database. Now, these resources are automatically added when the Tekton Hub API starts running for the first time. This update automatically refreshes the catalog every 30 minutes by calling the catalog refresh API job. This interval is user-configurable. 1.10.1.7. Pipelines as Code Important Pipelines as Code (PAC) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . With this update, as a developer, you get a notification from the tkn-pac CLI tool if you try to add a duplicate repository to a Pipelines as Code run. When you enter tkn pac create repository , each repository must have a unique URL. This notification also helps prevent hijacking exploits. With this update, as a developer, you can use the new tkn-pac setup cli command to add a Git repository to Pipelines as Code by using the webhook mechanism. This way, you can use Pipelines as Code even when using GitHub Apps is not feasible. This capability includes support for repositories on GitHub, GitLab, and BitBucket. With this update, Pipelines as Code supports GitLab integration with features such as the following: ACL (Access Control List) on project or group /ok-to-test support from allowed users /retest support. With this update, you can perform advanced pipeline filtering with Common Expression Language (CEL). With CEL, you can match pipeline runs with different Git provider events by using annotations in the PipelineRun resource. For example: ... annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && target_branch == "main" && source_branch == "wip" Previously, as a developer, you could have only one pipeline run in your .tekton directory for each Git event, such as a pull request. With this update, you can have multiple pipeline runs in your .tekton directory. The web console displays the status and reports of the runs. The pipeline runs operate in parallel and report back to the Git provider interface. With this update, you can test or retest a pipeline run by commenting /test or /retest on a pull request. You can also specify the pipeline run by name. For example, you can enter /test <pipelinerun_name> or /retest <pipelinerun-name> . With this update, you can delete a repository custom resource and its associated secrets by using the new tkn-pac delete repository command. 1.10.2. 
Breaking changes This update changes the default metrics level of TaskRun and PipelineRun resources to the following values: apiVersion: v1 kind: ConfigMap metadata: name: config-observability namespace: tekton-pipelines labels: app.kubernetes.io/instance: default app.kubernetes.io/part-of: tekton-pipelines data: _example: | ... metrics.taskrun.level: "task" metrics.taskrun.duration-type: "histogram" metrics.pipelinerun.level: "pipeline" metrics.pipelinerun.duration-type: "histogram" With this update, if an annotation or label is present in both Pipeline and PipelineRun resources, the value in the Run type takes precedence. The same is true if an annotation or label is present in Task and TaskRun resources. In Red Hat OpenShift Pipelines 1.8, the previously deprecated PipelineRun.Spec.ServiceAccountNames field has been removed. Use the PipelineRun.Spec.TaskRunSpecs field instead. In Red Hat OpenShift Pipelines 1.8, the previously deprecated TaskRun.Status.ResourceResults.ResourceRef field has been removed. Use the TaskRun.Status.ResourceResults.ResourceName field instead. In Red Hat OpenShift Pipelines 1.8, the previously deprecated Conditions resource type has been removed. Remove the Conditions resource from Pipeline resource definitions that include it. Use when expressions in PipelineRun definitions instead. For Tekton Chains, the tekton-provenance format has been removed in this release. Use the in-toto format by setting "artifacts.taskrun.format": "in-toto" in the TektonChain custom resource instead. Red Hat OpenShift Pipelines 1.7.x shipped with Pipelines as Code 0.5.x. The current update ships with Pipelines as Code 0.10.x. This change creates a new route in the openshift-pipelines namespace for the new controller. You must update this route in GitHub Apps or webhooks that use Pipelines as Code. To fetch the route, use the following command: USD oc get route -n openshift-pipelines pipelines-as-code-controller \ --template='https://{{ .spec.host }}' With this update, Pipelines as Code renames the default secret keys for the Repository custom resource definition (CRD). In your CRD, replace token with provider.token , and replace secret with webhook.secret . With this update, Pipelines as Code replaces a special template variable with one that supports multiple pipeline runs for private repositories. In your pipeline runs, replace secret: pac-git-basic-auth-{{repo_owner}}-{{repo_name}} with secret: {{ git_auth_secret }} . With this update, Pipelines as Code updates the following commands in the tkn-pac CLI tool: Replace tkn pac repository create with tkn pac create repository . Replace tkn pac repository delete with tkn pac delete repository . Replace tkn pac repository list with tkn pac list . 1.10.3. Deprecated and removed features Starting with OpenShift Container Platform 4.11, the preview and stable channels for installing and upgrading the Red Hat OpenShift Pipelines Operator are removed. To install and upgrade the Operator, use the appropriate pipelines-<version> channel, or the latest channel for the most recent stable version. For example, to install the OpenShift Pipelines Operator version 1.8.x , use the pipelines-1.8 channel. Note In OpenShift Container Platform 4.10 and earlier versions, you can use the preview and stable channels for installing and upgrading the Operator. Support for the tekton.dev/v1alpha1 API version, which was deprecated in Red Hat OpenShift Pipelines GA 1.6, is planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release. 
This change affects the pipeline component, which includes the TaskRun , PipelineRun , Task , Pipeline , and similar tekton.dev/v1alpha1 resources. As an alternative, update existing resources to use apiVersion: tekton.dev/v1beta1 as described in Migrating From Tekton v1alpha1 to Tekton v1beta1 . Bug fixes and support for the tekton.dev/v1alpha1 API version are provided only through the end of the current GA 1.8 lifecycle. Important For the Tekton Operator , the operator.tekton.dev/v1alpha1 API version is not deprecated. You do not need to make changes to this value. In Red Hat OpenShift Pipelines 1.8, the PipelineResource custom resource (CR) is available but no longer supported. The PipelineResource CR was a Tech Preview feature and part of the tekton.dev/v1alpha1 API, which had been deprecated and planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release. In Red Hat OpenShift Pipelines 1.8, the Condition custom resource (CR) is removed. The Condition CR was part of the tekton.dev/v1alpha1 API, which has been deprecated and is planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release. In Red Hat OpenShift Pipelines 1.8, the gcr.io image for gsutil has been removed. This removal might break clusters with Pipeline resources that depend on this image. Bug fixes and support are provided only through the end of the Red Hat OpenShift Pipelines 1.7 lifecycle. In Red Hat OpenShift Pipelines 1.8, the PipelineRun.Status.TaskRuns and PipelineRun.Status.Runs fields are deprecated and are planned to be removed in a future release. See TEP-0100: Embedded TaskRuns and Runs Status in PipelineRuns . In Red Hat OpenShift Pipelines 1.8, the pipelineRunCancelled state is deprecated and planned to be removed in a future release. Graceful termination of PipelineRun objects is now promoted from an alpha feature to a stable feature. (See TEP-0058: Graceful Pipeline Run Termination .) As an alternative, you can use the Cancelled state, which replaces the pipelineRunCancelled state. You do not need to make changes to your Pipeline and Task resources. If you have tools that cancel pipeline runs, you must update tools in the release. This change also affects tools such as the CLI, IDE extensions, and so on, so that they support the new PipelineRun statuses. Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition. In Red Hat OpenShift Pipelines 1.8, the timeout field in PipelineRun has been deprecated. Instead, use the PipelineRun.Timeouts field, which is now promoted from an alpha feature to a stable feature. Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition. In Red Hat OpenShift Pipelines 1.8, init containers are omitted from the LimitRange object's default request calculations. 1.10.4. Known issues The s2i-nodejs pipeline cannot use the nodejs:14-ubi8-minimal image stream to perform source-to-image (S2I) builds. Using that image stream produces an error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127 message. Workaround: Use nodejs:14-ubi8 rather than the nodejs:14-ubi8-minimal image stream. When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. 
Workaround: Specify a custom image by setting the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11 . Tip Before you install tasks that are based on the Tekton Catalog on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using tkn hub , verify if the task can be executed on these platforms. To check if ppc64le and s390x are listed in the "Platforms" section of the task information, you can run the following command: tkn hub info task <name> On ARM, IBM Power Systems, IBM Z, and LinuxONE, the s2i-dotnet cluster task is unsupported. Implicit parameter mapping incorrectly passes parameters from the top-level Pipeline or PipelineRun definitions to the taskRef tasks. Mapping should only occur from a top-level resource to tasks with in-line taskSpec specifications. This issue only affects clusters where this feature was enabled by setting the enable-api-fields field to alpha in the pipeline section of the TektonConfig custom resource definition. 1.10.5. Fixed issues Before this update, the metrics for pipeline runs in the Developer view of the web console were incomplete and outdated. With this update, the issue has been fixed so that the metrics are correct. Before this update, if a pipeline had two parallel tasks that failed and one of them had retries=2 , the final tasks never ran, and the pipeline timed out and failed to run. For example, the pipelines-operator-subscription task failed intermittently with the following error message: Unable to connect to the server: EOF . With this update, the issue has been fixed so that the final tasks always run. Before this update, if a pipeline run stopped because a task run failed, other task runs might not complete their retries. As a result, no finally tasks were scheduled, which caused the pipeline to hang. This update resolves the issue. TaskRuns and Run objects can retry when a pipeline run has stopped, even by graceful stopping, so that pipeline runs can complete. This update changes how resource requirements are calculated when one or more LimitRange objects are present in the namespace where a TaskRun object exists. The scheduler now considers step containers and excludes all other app containers, such as sidecar containers, when factoring requests from LimitRange objects. Before this update, under specific conditions, the flag package might incorrectly parse a subcommand immediately following a double dash flag terminator, -- . In that case, it ran the entrypoint subcommand rather than the actual command. This update fixes this flag-parsing issue so that the entrypoint runs the correct command. Before this update, the controller might generate multiple panics if pulling an image failed, or its pull status was incomplete. This update fixes the issue by checking the step.ImageID value rather than the status.TaskSpec value. Before this update, canceling a pipeline run that contained an unscheduled custom task produced a PipelineRunCouldntCancel error. This update fixes the issue. You can cancel a pipeline run that contains an unscheduled custom task without producing that error. Before this update, if the <NAME> in USDparams["<NAME>"] or USDparams['<NAME>'] contained a dot character ( . ), any part of the name to the right of the dot was not extracted. For example, from USDparams["org.ipsum.lorem"] , only org was extracted. This update fixes the issue so that USDparams fetches the complete value. 
For example, USDparams["org.ipsum.lorem"] and USDparams['org.ipsum.lorem'] are valid and the entire value of <NAME> , org.ipsum.lorem , is extracted. It also throws an error if <NAME> is not enclosed in single or double quotes. For example, USDparams.org.ipsum.lorem is not valid and generates a validation error. With this update, Trigger resources support custom interceptors and ensure that the port of the custom interceptor service is the same as the port in the ClusterInterceptor definition file. Before this update, the tkn version command for Tekton Chains and Operator components did not work correctly. This update fixes the issue so that the command works correctly and returns version information for those components. Before this update, if you ran a tkn pr delete --ignore-running command and a pipeline run did not have a status.condition value, the tkn CLI tool produced a null-pointer error (NPE). This update fixes the issue so that the CLI tool now generates an error and correctly ignores pipeline runs that are still running. Before this update, if you used the tkn pr delete --keep <value> or tkn tr delete --keep <value> commands, and the number of pipeline runs or task runs was less than the value, the command did not return an error as expected. This update fixes the issue so that the command correctly returns an error under those conditions. Before this update, if you used the tkn pr delete or tkn tr delete commands with the -p or -t flags together with the --ignore-running flag, the commands incorrectly deleted running or pending resources. This update fixes the issue so that these commands correctly ignore running or pending resources. With this update, you can configure Tekton Chains by using the TektonChain custom resource. This feature enables your configuration to persist after upgrading, unlike the chains-config config map, which gets overwritten during upgrades. With this update, ClusterTask resources no longer run as root by default, except for the buildah and s2i cluster tasks. Before this update, tasks on Red Hat OpenShift Pipelines 1.7.1 failed when using init as a first argument followed by two or more arguments. With this update, the flags are parsed correctly, and the task runs are successful. Before this update, installation of the Red Hat OpenShift Pipelines Operator on OpenShift Container Platform 4.9 and 4.10 failed due to an invalid role binding, with the following error message: error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io "openshift-operators-prometheus-k8s-read-binding" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"openshift-operator-read"}: cannot change roleRef This update fixes the issue so that the failure no longer occurs. Previously, upgrading the Red Hat OpenShift Pipelines Operator caused the pipeline service account to be recreated, which meant that the secrets linked to the service account were lost. This update fixes the issue. During upgrades, the Operator no longer recreates the pipeline service account. As a result, secrets attached to the pipeline service account persist after upgrades, and the resources (tasks and pipelines) continue to work correctly. With this update, Pipelines as Code pods run on infrastructure nodes if infrastructure node settings are configured in the TektonConfig custom resource (CR). 
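The following is a minimal sketch of what such infrastructure node settings might look like in the TektonConfig CR. The spec.config section, the infrastructure node label, and the taint shown here are assumptions based on common OpenShift Container Platform defaults; adjust them to match your cluster and Operator version:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: all
  targetNamespace: openshift-pipelines
  config:
    # assumed default infrastructure node label
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    # assumed default infrastructure node taint
    tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule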
Previously, with the resource pruner, the Operator created a command for each namespace that ran in a separate container. This design consumed too many resources in clusters with a high number of namespaces. For example, to run a single command, a cluster with 1000 namespaces produced 1000 containers in a pod. This update fixes the issue. It passes the namespace-based configuration to the job so that all the commands run in one container in a loop.

In Tekton Chains, you must define a secret called signing-secrets to hold the key used for signing tasks and images. However, before this update, updating the Red Hat OpenShift Pipelines Operator reset or overwrote this secret, and the key was lost. This update fixes the issue. Now, if the secret is configured after installing Tekton Chains through the Operator, the secret persists, and it is not overwritten by upgrades.

Before this update, all S2I build tasks failed with an error similar to the following message:

Error: error writing "0 0 4294967295\n" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted
time="2022-03-04T09:47:57Z" level=error msg="error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted"
time="2022-03-04T09:47:57Z" level=error msg="(unable to determine exit status)"

With this update, the pipelines-scc security context constraint (SCC) is compatible with the SETFCAP capability necessary for Buildah and S2I cluster tasks. As a result, the Buildah and S2I build tasks can run successfully. To successfully run the Buildah cluster task and S2I build tasks for applications written in various languages and frameworks, add the following snippet for appropriate steps objects such as build and push:

securityContext:
  capabilities:
    add: ["SETFCAP"]

Before this update, installing the Red Hat OpenShift Pipelines Operator took longer than expected. This update optimizes some settings to speed up the installation process.

With this update, Buildah and S2I cluster tasks have fewer steps than in previous versions. Some steps have been combined into a single step so that they work better with ResourceQuota and LimitRange objects and do not require more resources than necessary.

This update upgrades the Buildah, tkn CLI tool, and skopeo CLI tool versions in cluster tasks.

Before this update, the Operator failed when creating RBAC resources if any namespace was in a Terminating state. With this update, the Operator ignores namespaces in a Terminating state and creates the RBAC resources.

Before this update, pods for the prune cronjobs were not scheduled on infrastructure nodes, as expected. Instead, they were scheduled on worker nodes or not scheduled at all. With this update, these types of pods can now be scheduled on infrastructure nodes if configured in the TektonConfig custom resource (CR).

1.10.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.8.1

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8.1 is available on OpenShift Container Platform 4.10, 4.11, and 4.12.

1.10.6.1. Known issues

By default, the containers have restricted permissions for enhanced security. The restricted permissions apply to all controller pods in the Red Hat OpenShift Pipelines Operator, and to some cluster tasks. Due to restricted permissions, the git-clone cluster task fails under certain configurations. Workaround: None. You can track the issue SRVKP-2634.
When installer sets are in a failed state, the status of the TektonConfig custom resource is incorrectly displayed as True instead of False.

Example: Failed installer sets

$ oc get tektoninstallerset
NAME                                     READY   REASON
addon-clustertasks-nx5xz                 False   Error
addon-communityclustertasks-cfb2p        True
addon-consolecli-ftrb8                   True
addon-openshift-67dj2                    True
addon-pac-cf7pz                          True
addon-pipelines-fvllm                    True
addon-triggers-b2wtt                     True
addon-versioned-clustertasks-1-8-hqhnw   False   Error
pipeline-w75ww                           True
postpipeline-lrs22                       True
prepipeline-ldlhw                        True
rhosp-rbac-4dmgb                         True
trigger-hfg64                            True
validating-mutating-webhoook-28rf7       True

Example: Incorrect TektonConfig status

$ oc get tektonconfig config
NAME     VERSION   READY   REASON
config   1.8.1     True

1.10.6.2. Fixed issues

Before this update, the pruner deleted task runs of running pipelines and displayed the following warning: some tasks were indicated completed without ancestors being done. With this update, the pruner retains the task runs that are part of running pipelines.

Before this update, pipeline-1.8 was the default channel for installing the Red Hat OpenShift Pipelines Operator 1.8.x. With this update, latest is the default channel.

Before this update, the Pipelines as Code controller pods did not have access to certificates exposed by the user. With this update, Pipelines as Code can now access routes and Git repositories guarded by a self-signed or a custom certificate.

Before this update, tasks failed with RBAC errors after upgrading from Red Hat OpenShift Pipelines 1.7.2 to 1.8.0. With this update, the tasks run successfully without any RBAC errors.

Before this update, using the tkn CLI tool, you could not remove task runs and pipeline runs that contained a result object whose type was array. With this update, you can use the tkn CLI tool to remove task runs and pipeline runs that contain a result object whose type is array.

Before this update, if a pipeline specification contained a task with an ENV_VARS parameter of array type, the pipeline run failed with the following error: invalid input params for task func-buildpacks: param types don't match the user-specified type: [ENV_VARS]. With this update, pipeline runs with such pipeline and task specifications do not fail.

Before this update, cluster administrators could not provide a config.json file to the Buildah cluster task for accessing a container registry. With this update, cluster administrators can provide the Buildah cluster task with a config.json file by using the dockerconfig workspace.

1.10.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.8.2

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8.2 is available on OpenShift Container Platform 4.10, 4.11, and 4.12.

1.10.7.1. Fixed issues

Before this update, the git-clone task failed when cloning a repository using SSH keys. With this update, the role of the non-root user in the git-init task is removed, and the SSH program looks in the $HOME/.ssh/ directory for the correct keys.

1.11. Release notes for Red Hat OpenShift Pipelines General Availability 1.7

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7 is available on OpenShift Container Platform 4.9, 4.10, and 4.11.

1.11.1. New features

In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.7.

1.11.1.1. Pipelines

With this update, pipelines-<version> is the default channel to install the Red Hat OpenShift Pipelines Operator.
For example, the default channel to install the OpenShift Pipelines Operator version 1.7 is pipelines-1.7 . Cluster administrators can also use the latest channel to install the most recent stable version of the Operator. Note The preview and stable channels will be deprecated and removed in a future release. When you run a command in a user namespace, your container runs as root (user id 0 ) but has user privileges on the host. With this update, to run pods in the user namespace, you must pass the annotations that CRI-O expects. To add these annotations for all users, run the oc edit clustertask buildah command and edit the buildah cluster task. To add the annotations to a specific namespace, export the cluster task as a task to that namespace. Before this update, if certain conditions were not met, the when expression skipped a Task object and its dependent tasks. With this update, you can scope the when expression to guard the Task object only, not its dependent tasks. To enable this update, set the scope-when-expressions-to-task flag to true in the TektonConfig CRD. Note The scope-when-expressions-to-task flag is deprecated and will be removed in a future release. As a best practice for OpenShift Pipelines, use when expressions scoped to the guarded Task only. With this update, you can use variable substitution in the subPath field of a workspace within a task. With this update, you can reference parameters and results by using a bracket notation with single or double quotes. Prior to this update, you could only use the dot notation. For example, the following are now equivalent: USD(param.myparam) , USD(param['myparam']) , and USD(param["myparam"]) . You can use single or double quotes to enclose parameter names that contain problematic characters, such as "." . For example, USD(param['my.param']) and USD(param["my.param"]) . With this update, you can include the onError parameter of a step in the task definition without enabling the enable-api-fields flag. 1.11.1.2. Triggers With this update, the feature-flag-triggers config map has a new field labels-exclusion-pattern . You can set the value of this field to a regular expression (regex) pattern. The controller filters out labels that match the regex pattern from propagating from the event listener to the resources created for the event listener. With this update, the TriggerGroups field is added to the EventListener specification. Using this field, you can specify a set of interceptors to run before selecting and running a group of triggers. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha . With this update, Trigger resources support custom runs defined by a TriggerTemplate template. With this update, Triggers support emitting Kubernetes events from an EventListener pod. With this update, count metrics are available for the following objects: ClusterInteceptor , EventListener , TriggerTemplate , ClusterTriggerBinding , and TriggerBinding . This update adds the ServicePort specification to Kubernetes resource. You can use this specification to modify which port exposes the event listener service. The default port is 8080 . With this update, you can use the targetURI field in the EventListener specification to send cloud events during trigger processing. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha . 
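For reference, the following is a minimal sketch of that setting in the TektonConfig CR, following the pipeline section layout used elsewhere in these release notes; the same change gates the other alpha features mentioned in this section:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    # "stable" is the default; "alpha" enables the gated features described above
    enable-api-fields: alpha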
With this update, the tekton-triggers-eventlistener-roles object now has a patch verb, in addition to the create verb that already exists.

With this update, the securityContext.runAsUser parameter is removed from the event listener deployment.

1.11.1.3. CLI

With this update, the tkn [pipeline | pipelinerun] export command exports a pipeline or pipeline run as a YAML file. For example:

Export a pipeline named test_pipeline in the openshift-pipelines namespace:

$ tkn pipeline export test_pipeline -n openshift-pipelines

Export a pipeline run named test_pipeline_run in the openshift-pipelines namespace:

$ tkn pipelinerun export test_pipeline_run -n openshift-pipelines

With this update, the --grace option is added to the tkn pipelinerun cancel command. Use the --grace option to terminate a pipeline run gracefully instead of forcing the termination. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha.

This update adds the Operator and Chains versions to the output of the tkn version command. Important Tekton Chains is a Technology Preview feature.

With this update, the tkn pipelinerun describe command displays all canceled task runs when you cancel a pipeline run. Before this fix, only one task run was displayed.

With this update, you can skip supplying the specifications for an optional workspace when you run the tkn [t | p | ct] start command with the --skip-optional-workspace flag. You can also skip it when running in interactive mode.

With this update, you can use the tkn chains command to manage Tekton Chains. You can also use the --chains-namespace option to specify the namespace where you want to install Tekton Chains. Important Tekton Chains is a Technology Preview feature.

1.11.1.4. Operator

With this update, you can use the Red Hat OpenShift Pipelines Operator to install and deploy Tekton Hub and Tekton Chains. Important Tekton Chains and deployment of Tekton Hub on a cluster are Technology Preview features.

With this update, you can find and use Pipelines as Code (PAC) as an add-on option. Important Pipelines as Code is a Technology Preview feature.

With this update, you can now disable the installation of community cluster tasks by setting the communityClusterTasks parameter to false. For example:

...
spec:
  profile: all
  targetNamespace: openshift-pipelines
  addon:
    params:
    - name: clusterTasks
      value: "true"
    - name: pipelineTemplates
      value: "true"
    - name: communityClusterTasks
      value: "false"
...

With this update, you can disable the integration of Tekton Hub with the Developer perspective by setting the enable-devconsole-integration flag in the TektonConfig custom resource to false. For example:

...
hub:
  params:
  - name: enable-devconsole-integration
    value: "true"
...

With this update, the operator-config.yaml config map enables the output of the tkn version command to display the Operator version.

With this update, the version of the argocd-task-sync-and-wait tasks is modified to v0.2.

With this update to the TektonConfig CRD, the oc get tektonconfig command displays the Operator version.

With this update, a service monitor is added to the Triggers metrics.

1.11.1.5. Hub

Important Deploying Tekton Hub on a cluster is a Technology Preview feature.

Tekton Hub helps you discover, search, and share reusable tasks and pipelines for your CI/CD workflows. A public instance of Tekton Hub is available at hub.tekton.dev.
Starting with Red Hat OpenShift Pipelines 1.7, cluster administrators can also install and deploy a custom instance of Tekton Hub on enterprise clusters. You can curate a catalog with reusable tasks and pipelines specific to your organization.

1.11.1.6. Chains

Important Tekton Chains is a Technology Preview feature.

Tekton Chains is a Kubernetes Custom Resource Definition (CRD) controller. You can use it to manage the supply chain security of the tasks and pipelines created using Red Hat OpenShift Pipelines. By default, Tekton Chains monitors the task runs in your OpenShift Container Platform cluster. Chains takes snapshots of completed task runs, converts them to one or more standard payload formats, and signs and stores all artifacts.

Tekton Chains supports the following features:

You can sign task runs, task run results, and OCI registry images with cryptographic key types and services such as cosign.
You can use attestation formats such as in-toto.
You can securely store signatures and signed artifacts by using an OCI repository as a storage backend.

1.11.1.7. Pipelines as Code (PAC)

Important Pipelines as Code is a Technology Preview feature.

With Pipelines as Code, cluster administrators and users with the required privileges can define pipeline templates as part of source code Git repositories. When triggered by a source code push or a pull request for the configured Git repository, the feature runs the pipeline and reports status.

Pipelines as Code supports the following features:

Pull request status. When iterating over a pull request, the status and control of the pull request is exercised on the platform hosting the Git repository.
Use of the GitHub Checks API to set the status of a pipeline run, including rechecks.
GitHub pull request and commit events.
Pull request actions in comments, such as /retest.
Git event filtering, and a separate pipeline for each event.
Automatic task resolution in OpenShift Pipelines for local tasks, Tekton Hub, and remote URLs.
Use of the GitHub blobs and objects API for retrieving configurations.
Access Control List (ACL) over a GitHub organization, or using a Prow-style OWNER file.
The tkn pac plugin for the tkn CLI tool, which you can use to manage Pipelines as Code repositories and bootstrapping.
Support for GitHub Application, GitHub Webhook, Bitbucket Server, and Bitbucket Cloud.

1.11.2. Deprecated features

Breaking change: This update removes the disable-working-directory-overwrite and disable-home-env-overwrite fields from the TektonConfig custom resource (CR). As a result, the TektonConfig CR no longer automatically sets the $HOME environment variable and workingDir parameter. You can still set the $HOME environment variable and workingDir parameter by using the env and workingDir fields in the Task custom resource definition (CRD).

The Conditions custom resource definition (CRD) type is deprecated and planned to be removed in a future release. Instead, use the recommended When expression.

Breaking change: The Triggers resource validates the templates and generates an error if you do not specify the EventListener and TriggerBinding values.

1.11.3. Known issues

When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. As a workaround, you can specify a custom image by setting the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11, as shown in the sketch that follows.
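A minimal sketch of that workaround inside a pipeline definition follows. The task name is a placeholder, and the maven cluster task's other parameters and required workspaces are omitted for brevity:

tasks:
- name: build   # placeholder task name
  taskRef:
    kind: ClusterTask
    name: maven
  params:
  # override the default image so the task can run on ppc64le, s390x, or ARM nodes
  - name: MAVEN_IMAGE
    value: maven:3.6.3-adoptopenjdk-11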
Tip Before you install tasks that are based on the Tekton Catalog on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using tkn hub , verify if the task can be executed on these platforms. To check if ppc64le and s390x are listed in the "Platforms" section of the task information, you can run the following command: tkn hub info task <name> On IBM Power Systems, IBM Z, and LinuxONE, the s2i-dotnet cluster task is unsupported. You cannot use the nodejs:14-ubi8-minimal image stream because doing so generates the following errors: STEP 7: RUN /usr/libexec/s2i/assemble /bin/sh: /usr/libexec/s2i/assemble: No such file or directory subprocess exited with status 127 subprocess exited with status 127 error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127 time="2021-11-04T13:05:26Z" level=error msg="exit status 127" Implicit parameter mapping incorrectly passes parameters from the top-level Pipeline or PipelineRun definitions to the taskRef tasks. Mapping should only occur from a top-level resource to tasks with in-line taskSpec specifications. This issue only affects clusters where this feature was enabled by setting the enable-api-fields field to alpha in the pipeline section of the TektonConfig custom resource definition. 1.11.4. Fixed issues With this update, if metadata such as labels and annotations are present in both Pipeline and PipelineRun object definitions, the values in the PipelineRun type takes precedence. You can observe similar behavior for Task and TaskRun objects. With this update, if the timeouts.tasks field or the timeouts.finally field is set to 0 , then the timeouts.pipeline is also set to 0 . With this update, the -x set flag is removed from scripts that do not use a shebang. The fix reduces potential data leak from script execution. With this update, any backslash character present in the usernames in Git credentials is escaped with an additional backslash in the .gitconfig file. With this update, the finalizer property of the EventListener object is not necessary for cleaning up logging and config maps. With this update, the default HTTP client associated with the event listener server is removed, and a custom HTTP client added. As a result, the timeouts have improved. With this update, the Triggers cluster role now works with owner references. With this update, the race condition in the event listener does not happen when multiple interceptors return extensions. With this update, the tkn pr delete command does not delete the pipeline runs with the ignore-running flag. With this update, the Operator pods do not continue restarting when you modify any add-on parameters. With this update, the tkn serve CLI pod is scheduled on infra nodes, if not configured in the subscription and config custom resources. With this update, cluster tasks with specified versions are not deleted during upgrade. 1.11.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.1 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.1 is available on OpenShift Container Platform 4.9, 4.10, and 4.11. 1.11.5.1. Fixed issues Before this update, upgrading the Red Hat OpenShift Pipelines Operator deleted the data in the database associated with Tekton Hub and installed a new database. With this update, an Operator upgrade preserves the data. Before this update, only cluster administrators could access pipeline metrics in the OpenShift Container Platform console. 
With this update, users with other cluster roles can also access the pipeline metrics.

Before this update, pipeline runs failed for pipelines containing tasks that emit large termination messages. The pipeline runs failed because the total size of termination messages of all containers in a pod cannot exceed 12 KB. With this update, the place-tools and step-init initialization containers that use the same image are merged to reduce the number of containers running in each task's pod. The solution reduces the chance of failed pipeline runs by minimizing the number of containers running in a task's pod. However, it does not remove the limitation of the maximum allowed size of a termination message.

Before this update, attempts to access resource URLs directly from the Tekton Hub web console resulted in an Nginx 404 error. With this update, the Tekton Hub web console image is fixed to allow accessing resource URLs directly from the Tekton Hub web console.

Before this update, for each namespace the resource pruner job created a separate container to prune resources. With this update, the resource pruner job runs commands for all namespaces as a loop in one container.

1.11.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.2

With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.2 is available on OpenShift Container Platform 4.9, 4.10, and the upcoming version.

1.11.6.1. Known issues

The chains-config config map for Tekton Chains in the openshift-pipelines namespace is automatically reset to default after upgrading the Red Hat OpenShift Pipelines Operator. Currently, there is no workaround for this issue.

1.11.6.2. Fixed issues

Before this update, tasks on OpenShift Pipelines 1.7.1 failed when using init as the first argument, followed by two or more arguments. With this update, the flags are parsed correctly, and the task runs are successful.

Before this update, installation of the Red Hat OpenShift Pipelines Operator on OpenShift Container Platform 4.9 and 4.10 failed due to an invalid role binding, with the following error message:

error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io "openshift-operators-prometheus-k8s-read-binding" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"openshift-operator-read"}: cannot change roleRef

With this update, the Red Hat OpenShift Pipelines Operator installs with distinct role binding namespaces to avoid conflict with the installation of other Operators.

Before this update, upgrading the Operator triggered a reset of the signing-secrets secret key for Tekton Chains to its default value. With this update, the custom secret key persists after you upgrade the Operator. Note Upgrading to Red Hat OpenShift Pipelines 1.7.2 resets the key. However, when you upgrade to future releases, the key is expected to persist.

Before this update, all S2I build tasks failed with an error similar to the following message:

Error: error writing "0 0 4294967295\n" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted
time="2022-03-04T09:47:57Z" level=error msg="error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted"
time="2022-03-04T09:47:57Z" level=error msg="(unable to determine exit status)"

With this update, the pipelines-scc security context constraint (SCC) is compatible with the SETFCAP capability necessary for Buildah and S2I cluster tasks.
As a result, the Buildah and S2I build tasks can run successfully. To successfully run the Buildah cluster task and S2I build tasks for applications written in various languages and frameworks, add the following snippet for appropriate steps objects such as build and push : securityContext: capabilities: add: ["SETFCAP"] 1.11.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.3 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.3 is available on OpenShift Container Platform 4.9, 4.10, and 4.11. 1.11.7.1. Fixed issues Before this update, the Operator failed when creating RBAC resources if any namespace was in a Terminating state. With this update, the Operator ignores namespaces in a Terminating state and creates the RBAC resources. Previously, upgrading the Red Hat OpenShift Pipelines Operator caused the pipeline service account to be recreated, which meant that the secrets linked to the service account were lost. This update fixes the issue. During upgrades, the Operator no longer recreates the pipeline service account. As a result, secrets attached to the pipeline service account persist after upgrades, and the resources (tasks and pipelines) continue to work correctly. 1.12. Release notes for Red Hat OpenShift Pipelines General Availability 1.6 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.6 is available on OpenShift Container Platform 4.9. 1.12.1. New features In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.6. With this update, you can configure a pipeline or task start command to return a YAML or JSON-formatted string by using the --output <string> , where <string> is yaml or json . Otherwise, without the --output option, the start command returns a human-friendly message that is hard for other programs to parse. Returning a YAML or JSON-formatted string is useful for continuous integration (CI) environments. For example, after a resource is created, you can use yq or jq to parse the YAML or JSON-formatted message about the resource and wait until that resource is terminated without using the showlog option. With this update, you can authenticate to a registry using the auth.json authentication file of Podman. For example, you can use tkn bundle push to push to a remote registry using Podman instead of Docker CLI. With this update, if you use the tkn [taskrun | pipelinerun] delete --all command, you can preserve runs that are younger than a specified number of minutes by using the new --keep-since <minutes> option. For example, to keep runs that are less than five minutes old, you enter tkn [taskrun | pipelinerun] delete -all --keep-since 5 . With this update, when you delete task runs or pipeline runs, you can use the --parent-resource and --keep-since options together. For example, the tkn pipelinerun delete --pipeline pipelinename --keep-since 5 command preserves pipeline runs whose parent resource is named pipelinename and whose age is five minutes or less. The tkn tr delete -t <taskname> --keep-since 5 and tkn tr delete --clustertask <taskname> --keep-since 5 commands work similarly for task runs. This update adds support for the triggers resources to work with v1beta1 resources. This update adds an ignore-running option to the tkn pipelinerun delete and tkn taskrun delete commands. This update adds a create subcommand to the tkn task and tkn clustertask commands. 
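Building on the delete options described earlier in this list, the following is a hedged example of combining them; the namespace name is a placeholder, and you should confirm the flag combination with tkn pipelinerun delete --help on your installed version:

# Delete all pipeline runs in my-namespace that are older than five minutes,
# skipping any runs that are still in progress and suppressing the confirmation prompt
$ tkn pipelinerun delete --all --keep-since 5 --ignore-running -n my-namespace -f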
With this update, when you use the tkn pipelinerun delete --all command, you can use the new --label <string> option to filter the pipeline runs by label. Optionally, you can use the --label option with = and == as equality operators, or != as an inequality operator. For example, the tkn pipelinerun delete --all --label asdf and tkn pipelinerun delete --all --label==asdf commands both delete all the pipeline runs that have the asdf label.

With this update, you can fetch the version of installed Tekton components from the config map or, if the config map is not present, from the deployment controller.

With this update, triggers support the feature-flags and config-defaults config maps to configure feature flags and to set default values, respectively.

This update adds a new metric, eventlistener_event_count, that you can use to count events received by the EventListener resource.

This update adds v1beta1 Go API types. With this update, triggers now support the v1beta1 API version. With the current release, the v1alpha1 features are now deprecated and will be removed in a future release. Begin using the v1beta1 features instead.

In the current release, auto-pruning of resources is enabled by default. In addition, you can configure auto-pruning of task runs and pipeline runs for each namespace separately, by using the following new annotations (an example follows later in this section):

operator.tekton.dev/prune.schedule: If the value of this annotation is different from the value specified in the TektonConfig custom resource definition, a new cron job in that namespace is created.
operator.tekton.dev/prune.skip: When set to true, the namespace for which it is configured is not pruned.
operator.tekton.dev/prune.resources: This annotation accepts a comma-separated list of resources. To prune a single resource such as a pipeline run, set this annotation to "pipelinerun". To prune multiple resources, such as task runs and pipeline runs, set this annotation to "taskrun, pipelinerun".
operator.tekton.dev/prune.keep: Use this annotation to retain a resource without pruning.
operator.tekton.dev/prune.keep-since: Use this annotation to retain resources based on their age. The value for this annotation must be equal to the age of the resource in minutes. For example, to retain resources which were created not more than five days ago, set keep-since to 7200. Note The keep and keep-since annotations are mutually exclusive. For any resource, you must configure only one of them.
operator.tekton.dev/prune.strategy: Set the value of this annotation to either keep or keep-since.

Administrators can disable the creation of the pipeline service account for the entire cluster, and prevent privilege escalation by misusing the associated SCC, which is very similar to anyuid.

You can now configure feature flags and components by using the TektonConfig custom resource (CR) and the CRs for individual components, such as TektonPipeline and TektonTriggers. This level of granularity helps customize and test alpha features such as the Tekton OCI bundle for individual components.

You can now configure an optional Timeouts field for the PipelineRun resource. For example, you can configure timeouts separately for a pipeline run, each task run, and the finally tasks.

The pods generated by the TaskRun resource now set the activeDeadlineSeconds field of the pods. This enables OpenShift to consider them as terminating, and allows you to use a specifically scoped ResourceQuota object for the pods.
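Returning to the per-namespace pruning annotations described above, the following sketch applies a typical combination with oc annotate; the namespace name is a placeholder:

# Retain task runs and pipeline runs created in the last five days (7200 minutes)
# and prune older ones on the schedule defined in the TektonConfig CR
$ oc annotate namespace my-namespace \
    operator.tekton.dev/prune.resources="taskrun, pipelinerun" \
    operator.tekton.dev/prune.keep-since=7200 \
    operator.tekton.dev/prune.strategy=keep-since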
You can use configmaps to eliminate metrics tags or labels type on a task run, pipeline run, task, and pipeline. In addition, you can configure different types of metrics for measuring duration, such as a histogram, gauge, or last value. You can define requests and limits on a pod coherently, as Tekton now fully supports the LimitRange object by considering the Min , Max , Default , and DefaultRequest fields. The following alpha features are introduced: A pipeline run can now stop after running the finally tasks, rather than the behavior of stopping the execution of all task run directly. This update adds the following spec.status values: StoppedRunFinally will stop the currently running tasks after they are completed, and then run the finally tasks. CancelledRunFinally will immediately cancel the running tasks, and then run the finally tasks. Cancelled will retain the behavior provided by the PipelineRunCancelled status. Note The Cancelled status replaces the deprecated PipelineRunCancelled status, which will be removed in the v1 version. You can now use the oc debug command to put a task run into debug mode, which pauses the execution and allows you to inspect specific steps in a pod. When you set the onError field of a step to continue , the exit code for the step is recorded and passed on to subsequent steps. However, the task run does not fail and the execution of the rest of the steps in the task continues. To retain the existing behavior, you can set the value of the onError field to stopAndFail . Tasks can now accept more parameters than are actually used. When the alpha feature flag is enabled, the parameters can implicitly propagate to inlined specs. For example, an inlined task can access parameters of its parent pipeline run, without explicitly defining each parameter for the task. If you enable the flag for the alpha features, the conditions under When expressions will only apply to the task with which it is directly associated, and not the dependents of the task. To apply the When expressions to the associated task and its dependents, you must associate the expression with each dependent task separately. Note that, going forward, this will be the default behavior of the When expressions in any new API versions of Tekton. The existing default behavior will be deprecated in favor of this update. The current release enables you to configure node selection by specifying the nodeSelector and tolerations values in the TektonConfig custom resource (CR). The Operator adds these values to all the deployments that it creates. To configure node selection for the Operator's controller and webhook deployment, you edit the config.nodeSelector and config.tolerations fields in the specification for the Subscription CR, after installing the Operator. To deploy the rest of the control plane pods of OpenShift Pipelines on an infrastructure node, update the TektonConfig CR with the nodeSelector and tolerations fields. The modifications are then applied to all the pods created by Operator. 1.12.2. Deprecated features In CLI 0.21.0, support for all v1alpha1 resources for clustertask , task , taskrun , pipeline , and pipelinerun commands are deprecated. These resources are now deprecated and will be removed in a future release. In Tekton Triggers v0.16.0, the redundant status label is removed from the metrics for the EventListener resource. Important Breaking change: The status label has been removed from the eventlistener_http_duration_seconds_* metric. 
Remove queries that are based on the status label. With the current release, the v1alpha1 features are now deprecated and will be removed in a future release. With this update, you can begin using the v1beta1 Go API types instead. Triggers now supports the v1beta1 API version. With the current release, the EventListener resource sends a response before the triggers finish processing. Important Breaking change: With this change, the EventListener resource stops responding with a 201 Created status code when it creates resources. Instead, it responds with a 202 Accepted response code. The current release removes the podTemplate field from the EventListener resource. Important Breaking change: The podTemplate field, which was deprecated as part of #1100 , has been removed. The current release removes the deprecated replicas field from the specification for the EventListener resource. Important Breaking change: The deprecated replicas field has been removed. In Red Hat OpenShift Pipelines 1.6, the values of HOME="/tekton/home" and workingDir="/workspace" are removed from the specification of the Step objects. Instead, Red Hat OpenShift Pipelines sets HOME and workingDir to the values defined by the containers running the Step objects. You can override these values in the specification of your Step objects. To use the older behavior, you can change the disable-working-directory-overwrite and disable-home-env-overwrite fields in the TektonConfig CR to false : apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: disable-working-directory-overwrite: false disable-home-env-overwrite: false ... Important The disable-working-directory-overwrite and disable-home-env-overwrite fields in the TektonConfig CR are now deprecated and will be removed in a future release. 1.12.3. Known issues When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. As a workaround, you can specify a custom image by setting the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11 . On IBM Power Systems, IBM Z, and LinuxONE, the s2i-dotnet cluster task is unsupported. Before you install tasks based on the Tekton Catalog on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using tkn hub , verify if the task can be executed on these platforms. To check if ppc64le and s390x are listed in the "Platforms" section of the task information, you can run the following command: tkn hub info task <name> You cannot use the nodejs:14-ubi8-minimal image stream because doing so generates the following errors: STEP 7: RUN /usr/libexec/s2i/assemble /bin/sh: /usr/libexec/s2i/assemble: No such file or directory subprocess exited with status 127 subprocess exited with status 127 error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127 time="2021-11-04T13:05:26Z" level=error msg="exit status 127" 1.12.4. Fixed issues The tkn hub command is now supported on IBM Power Systems, IBM Z, and LinuxONE. Before this update, the terminal was not available after the user ran a tkn command, and the pipeline run was done, even if retries were specified. Specifying a timeout in the task run or pipeline run had no effect. This update fixes the issue so that the terminal is available after running the command. Before this update, running tkn pipelinerun delete --all would delete all resources. 
This update prevents the resources in the running state from getting deleted. Before this update, using the tkn version --component=<component> command did not return the component version. This update fixes the issue so that this command returns the component version. Before this update, when you used the tkn pr logs command, it displayed the pipelines output logs in the wrong task order. This update resolves the issue so that logs of completed PipelineRuns are listed in the appropriate TaskRun execution order. Before this update, editing the specification of a running pipeline might prevent the pipeline run from stopping when it was complete. This update fixes the issue by fetching the definition only once and then using the specification stored in the status for verification. This change reduces the probability of a race condition when a PipelineRun or a TaskRun refers to a Pipeline or Task that changes while it is running. When expression values can now have array parameter references, such as: values: [USD(params.arrayParam[*])] . 1.12.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.1 1.12.5.1. Known issues After upgrading to Red Hat OpenShift Pipelines 1.6.1 from an older version, OpenShift Pipelines might enter an inconsistent state where you are unable to perform any operations (create/delete/apply) on Tekton resources (tasks and pipelines). For example, while deleting a resource, you might encounter the following error: Error from server (InternalError): Internal error occurred: failed calling webhook "validation.webhook.pipeline.tekton.dev": Post "https://tekton-pipelines-webhook.openshift-pipelines.svc:443/resource-validation?timeout=10s": service "tekton-pipelines-webhook" not found. 1.12.5.2. Fixed issues The SSL_CERT_DIR environment variable ( /tekton-custom-certs ) set by Red Hat OpenShift Pipelines will not override the following default system directories with certificate files: /etc/pki/tls/certs /etc/ssl/certs /system/etc/security/cacerts The Horizontal Pod Autoscaler can manage the replica count of deployments controlled by the Red Hat OpenShift Pipelines Operator. From this release onward, if the count is changed by an end user or an on-cluster agent, the Red Hat OpenShift Pipelines Operator will not reset the replica count of deployments managed by it. However, the replicas will be reset when you upgrade the Red Hat OpenShift Pipelines Operator. The pod serving the tkn CLI will now be scheduled on nodes, based on the node selector and toleration limits specified in the TektonConfig custom resource. 1.12.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.2 1.12.6.1. Known issues When you create a new project, the creation of the pipeline service account is delayed, and removal of existing cluster tasks and pipeline templates takes more than 10 minutes. 1.12.6.2. Fixed issues Before this update, multiple instances of Tekton installer sets were created for a pipeline after upgrading to Red Hat OpenShift Pipelines 1.6.1 from an older version. With this update, the Operator ensures that only one instance of each type of TektonInstallerSet exists after an upgrade. Before this update, all the reconcilers in the Operator used the component version to decide resource recreation during an upgrade to Red Hat OpenShift Pipelines 1.6.1 from an older version. As a result, those resources were not recreated whose component versions did not change in the upgrade. 
With this update, the Operator uses the Operator version instead of the component version to decide resource recreation during an upgrade. Before this update, the pipelines webhook service was missing in the cluster after an upgrade. This was due to an upgrade deadlock on the config maps. With this update, a mechanism is added to disable webhook validation if the config maps are absent in the cluster. As a result, the pipelines webhook service persists in the cluster after an upgrade. Before this update, cron jobs for auto-pruning got recreated after any configuration change to the namespace. With this update, cron jobs for auto-pruning get recreated only if there is a relevant annotation change in the namespace. The upstream version of Tekton Pipelines is revised to v0.28.3 , which has the following fixes: Fix PipelineRun or TaskRun objects to allow label or annotation propagation. For implicit params: Do not apply the PipelineSpec parameters to the TaskRefs object. Disable implicit param behavior for the Pipeline objects. 1.12.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.3 1.12.7.1. Fixed issues Before this update, the Red Hat OpenShift Pipelines Operator installed pod security policies from components such as Pipelines and Triggers. However, the pod security policies shipped as part of the components were deprecated in an earlier release. With this update, the Operator stops installing pod security policies from components. As a result, the following upgrade paths are affected: Upgrading from OpenShift Pipelines 1.6.1 or 1.6.2 to OpenShift Pipelines 1.6.3 deletes the pod security policies, including those from the Pipelines and Triggers components. Upgrading from OpenShift Pipelines 1.5.x to 1.6.3 retains the pod security policies installed from components. As a cluster administrator, you can delete them manually. Note When you upgrade to future releases, the Red Hat OpenShift Pipelines Operator will automatically delete all obsolete pod security policies. Before this update, only cluster administrators could access pipeline metrics in the OpenShift Container Platform console. With this update, users with other cluster roles also can access the pipeline metrics. Before this update, role-based access control (RBAC) issues with the OpenShift Pipelines Operator caused problems upgrading or installing components. This update improves the reliability and consistency of installing various Red Hat OpenShift Pipelines components. Before this update, setting the clusterTasks and pipelineTemplates fields to false in the TektonConfig CR slowed the removal of cluster tasks and pipeline templates. This update improves the speed of lifecycle management of Tekton resources such as cluster tasks and pipeline templates. 1.12.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.4 1.12.8.1. Known issues After upgrading from Red Hat OpenShift Pipelines 1.5.2 to 1.6.4, accessing the event listener routes returns a 503 error. Workaround: Modify the target port in the YAML file for the event listener's route. Extract the route name for the relevant namespace. USD oc get route -n <namespace> Edit the route to modify the value of the targetPort field. USD oc edit route -n <namespace> <el-route_name> Example: Existing event listener route ... spec: host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com port: targetPort: 8000 to: kind: Service name: el-event-listener-q8c3w5 weight: 100 wildcardPolicy: None ... 
Example: Modified event listener route ... spec: host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com port: targetPort: http-listener to: kind: Service name: el-event-listener-q8c3w5 weight: 100 wildcardPolicy: None ... 1.12.8.2. Fixed issues Before this update, the Operator failed when creating RBAC resources if any namespace was in a Terminating state. With this update, the Operator ignores namespaces in a Terminating state and creates the RBAC resources. Before this update, the task runs failed or restarted due to absence of annotation specifying the release version of the associated Tekton controller. With this update, the inclusion of the appropriate annotations are automated, and the tasks run without failure or restarts. 1.13. Release notes for Red Hat OpenShift Pipelines General Availability 1.5 Red Hat OpenShift Pipelines General Availability (GA) 1.5 is now available on OpenShift Container Platform 4.8. 1.13.1. Compatibility and support matrix Some features in this release are currently in Technology Preview . These experimental features are not intended for production use. In the table, features are marked with the following statuses: TP Technology Preview GA General Availability Note the following scope of support on the Red Hat Customer Portal for these features: Table 1.4. Compatibility and support matrix Feature Version Support Status Pipelines 0.24 GA CLI 0.19 GA Catalog 0.24 GA Triggers 0.14 TP Pipeline resources - TP For questions and feedback, you can send an email to the product team at [email protected] . 1.13.2. New features In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.5. Pipeline run and task runs will be automatically pruned by a cron job in the target namespace. The cron job uses the IMAGE_JOB_PRUNER_TKN environment variable to get the value of tkn image . With this enhancement, the following fields are introduced to the TektonConfig custom resource: ... pruner: resources: - pipelinerun - taskrun schedule: "*/5 * * * *" # cron schedule keep: 2 # delete all keeping n ... In OpenShift Container Platform, you can customize the installation of the Tekton Add-ons component by modifying the values of the new parameters clusterTasks and pipelinesTemplates in the TektonConfig custom resource: apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: profile: all targetNamespace: openshift-pipelines addon: params: - name: clusterTasks value: "true" - name: pipelineTemplates value: "true" ... The customization is allowed if you create the add-on using TektonConfig , or directly by using Tekton Add-ons. However, if the parameters are not passed, the controller adds parameters with default values. Note If add-on is created using the TektonConfig custom resource, and you change the parameter values later in the Addon custom resource, then the values in the TektonConfig custom resource overwrites the changes. You can set the value of the pipelinesTemplates parameter to true only when the value of the clusterTasks parameter is true . The enableMetrics parameter is added to the TektonConfig custom resource. You can use it to disable the service monitor, which is part of Tekton Pipelines for OpenShift Container Platform. apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: profile: all targetNamespace: openshift-pipelines pipeline: params: - name: enableMetrics value: "true" ... 
EventListener OpenCensus metrics, which capture metrics at the process level, are added.

Triggers now have a label selector; you can configure triggers for an event listener by using labels.

The ClusterInterceptor custom resource definition for registering interceptors is added, which allows you to register new Interceptor types that you can plug in. In addition, the following relevant changes are made:

In the trigger specifications, you can configure interceptors using a new API that includes a ref field to refer to a cluster interceptor. In addition, you can use the params field to add parameters that pass on to the interceptors for processing.
The bundled interceptors CEL, GitHub, GitLab, and BitBucket have been migrated. They are implemented using the new ClusterInterceptor custom resource definition.
Core interceptors are migrated to the new format, and any new triggers created using the old syntax automatically switch to the new ref or params based syntax.

To disable prefixing the name of the task or step while displaying logs, use the --prefix option for log commands.

To display the version of a specific component, use the new --component flag in the tkn version command.

The tkn hub check-upgrade command is added, and other commands are revised to be based on the pipeline version. In addition, catalog names are displayed in the search command output.

Support for optional workspaces is added to the start command.

If the plugins are not present in the plugins directory, they are searched for in the current path.

The tkn start [task | clustertask | pipeline] command starts interactively and asks for the params value, even when default parameters are specified. To stop the interactive prompts, pass the --use-param-defaults flag at the time of invoking the command. For example:

$ tkn pipeline start build-and-deploy \
    -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.15/01_pipeline/03_persistent_volume_claim.yaml \
    -p deployment-name=pipelines-vote-api \
    -p git-url=https://github.com/openshift/pipelines-vote-api.git \
    -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-api \
    --use-param-defaults

The version field is added in the tkn task describe command.

The option to automatically select resources such as TriggerTemplate, TriggerBinding, ClusterTriggerBinding, or Eventlistener is added in the describe command, if only one is present.

In the tkn pr describe command, a section for skipped tasks is added.

Support for the tkn clustertask logs command is added.

The YAML merge and variable from config.yaml is removed. In addition, the release.yaml file can now be more easily consumed by tools such as kustomize and ytt.

The support for resource names to contain the dot character (".") is added.

The hostAliases array in the PodTemplate specification is added for the pod-level override of hostname resolution. It is achieved by modifying the /etc/hosts file.

A variable $(tasks.status) is introduced to access the aggregate execution status of tasks.

An entry-point binary build for Windows is added.

1.13.3. Deprecated features

In the when expressions, support for fields written in PascalCase is removed. The when expressions only support fields written in lowercase; a brief example follows this item. Note If you had applied a pipeline with when expressions in Tekton Pipelines v0.16 (Operator v1.2.x), you have to reapply it.
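The following is a brief sketch of the lowercase form that remains supported; the parameter name and values are placeholders:

when:
- input: "$(params.deploy-environment)"   # placeholder parameter
  operator: in
  values: ["prod"]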
When you upgrade the Red Hat OpenShift Pipelines Operator to v1.5 , the openshift-client and the openshift-client-v-1-5-0 cluster tasks have the SCRIPT parameter. However, the ARGS parameter and the git resource are removed from the specification of the openshift-client cluster task. This is a breaking change, and only those cluster tasks that do not have a specific version in the name field of the ClusterTask resource upgrade seamlessly. To prevent the pipeline runs from breaking, use the SCRIPT parameter after the upgrade because it moves the values previously specified in the ARGS parameter into the SCRIPT parameter of the cluster task. For example: ... - name: deploy params: - name: SCRIPT value: oc rollout status <deployment-name> runAfter: - build taskRef: kind: ClusterTask name: openshift-client ... When you upgrade from Red Hat OpenShift Pipelines Operator v1.4 to v1.5 , the profile names in which the TektonConfig custom resource is installed now change. Table 1.5. Profiles for TektonConfig custom resource Profiles in Pipelines 1.5 Corresponding profile in Pipelines 1.4 Installed Tekton components All ( default profile ) All ( default profile ) Pipelines, Triggers, Add-ons Basic Default Pipelines, Triggers Lite Basic Pipelines Note If you used profile: all in the config instance of the TektonConfig custom resource, no change is necessary in the resource specification. However, if the installed Operator is either in the Default or the Basic profile before the upgrade, you must edit the config instance of the TektonConfig custom resource after the upgrade. For example, if the configuration was profile: basic before the upgrade, ensure that it is profile: lite after upgrading to Pipelines 1.5. The disable-home-env-overwrite and disable-working-dir-overwrite fields are now deprecated and will be removed in a future release. For this release, the default value of these flags is set to true for backward compatibility. Note In the release (Red Hat OpenShift Pipelines 1.6), the HOME environment variable will not be automatically set to /tekton/home , and the default working directory will not be set to /workspace for task runs. These defaults collide with any value set by image Dockerfile of the step. The ServiceType and podTemplate fields are removed from the EventListener spec. The controller service account no longer requests cluster-wide permission to list and watch namespaces. The status of the EventListener resource has a new condition called Ready . Note In the future, the other status conditions for the EventListener resource will be deprecated in favor of the Ready status condition. The eventListener and namespace fields in the EventListener response are deprecated. Use the eventListenerUID field instead. The replicas field is deprecated from the EventListener spec. Instead, the spec.replicas field is moved to spec.resources.kubernetesResource.replicas in the KubernetesResource spec. Note The replicas field will be removed in a future release. The old method of configuring the core interceptors is deprecated. However, it continues to work until it is removed in a future release. Instead, interceptors in a Trigger resource are now configured using a new ref and params based syntax. The resulting default webhook automatically switch the usages of the old syntax to the new syntax for new triggers. Use rbac.authorization.k8s.io/v1 instead of the deprecated rbac.authorization.k8s.io/v1beta1 for the ClusterRoleBinding resource. 
In cluster roles, the cluster-wide write access to resources such as serviceaccounts, secrets, configmaps, and limitranges is removed. In addition, cluster-wide access to resources such as deployments, statefulsets, and deployment/finalizers is removed.
The image custom resource definition in the caching.internal.knative.dev group is no longer used by Tekton, and is excluded in this release.

1.13.4. Known issues

The git-cli cluster task is built off the alpine/git base image, which expects /root as the user's home directory. However, this is not explicitly set in the git-cli cluster task. In Tekton, the default home directory is overwritten with /tekton/home for every step of a task, unless otherwise specified. This overwriting of the $HOME environment variable of the base image causes the git-cli cluster task to fail. This issue is expected to be fixed in upcoming releases. For Red Hat OpenShift Pipelines 1.5 and earlier versions, you can use any one of the following workarounds to avoid the failure of the git-cli cluster task:

Set the $HOME environment variable in the steps, so that it is not overwritten.
(Optional) If you installed Red Hat OpenShift Pipelines using the Operator, clone the git-cli cluster task into a separate task. This approach ensures that the Operator does not overwrite the changes made to the cluster task.
Execute the oc edit clustertasks git-cli command.
Add the expected HOME environment variable to the YAML of the step:

...
steps:
  - name: git
    env:
      - name: HOME
        value: /root
    image: $(params.BASE_IMAGE)
    workingDir: $(workspaces.source.path)
...

Warning: For Red Hat OpenShift Pipelines installed by the Operator, if you do not clone the git-cli cluster task into a separate task before changing the HOME environment variable, the changes are overwritten during Operator reconciliation.

Disable overwriting the HOME environment variable in the feature-flags config map.
Execute the oc edit -n openshift-pipelines configmap feature-flags command.
Set the value of the disable-home-env-overwrite flag to true.

Warning: If you installed Red Hat OpenShift Pipelines using the Operator, the changes are overwritten during Operator reconciliation. Modifying the default value of the disable-home-env-overwrite flag can break other tasks and cluster tasks, as it changes the default behavior for all tasks.

Use a different service account for the git-cli cluster task, because the overwriting of the HOME environment variable happens when the default service account for pipelines is used.
Create a new service account.
Link your Git secret to the service account you just created.
Use the service account while executing a task or a pipeline.

On IBM Power Systems, IBM Z, and LinuxONE, the s2i-dotnet cluster task and the tkn hub command are unsupported.
When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks fail on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. As a workaround, you can specify a custom image by setting the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11.

1.13.5. Fixed issues

The when expressions in dag tasks are not allowed to specify the context variable accessing the execution status ($(tasks.<pipelineTask>.status)) of any other task.
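By contrast, the execution status variable can be used to guard a finally task. The following is a minimal sketch based on the upstream Tekton convention; the task names are placeholders.

finally:
  - name: notify-on-failure
    when:
      - input: "$(tasks.build.status)"   # execution status of the 'build' pipeline task
        operator: in
        values: ["Failed"]
    taskRef:
      name: send-notification             # placeholder task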
Use Owner UIDs instead of Owner names, as it helps avoid race conditions created by deleting a volumeClaimTemplate PVC in situations where a PipelineRun resource is quickly deleted and then recreated.
A new Dockerfile is added for pullrequest-init for the build-base image triggered by non-root users.
When a pipeline or task is executed with the -f option and the param in its definition does not have a type defined, a validation error is generated instead of the pipeline or task run failing silently.
For the tkn start [task | pipeline | clustertask] commands, the description of the --workspace flag is now consistent.
While parsing the parameters, if an empty array is encountered, the corresponding interactive help is now displayed as an empty string.

1.14. Release notes for Red Hat OpenShift Pipelines General Availability 1.4

Red Hat OpenShift Pipelines General Availability (GA) 1.4 is now available on OpenShift Container Platform 4.7.

Note: In addition to the stable and preview Operator channels, the Red Hat OpenShift Pipelines Operator 1.4.0 comes with the ocp-4.6, ocp-4.5, and ocp-4.4 deprecated channels. These deprecated channels and support for them will be removed in the following release of Red Hat OpenShift Pipelines.

1.14.1. Compatibility and support matrix

Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. In the table, features are marked with the following statuses:
TP: Technology Preview
GA: General Availability
Note the following scope of support on the Red Hat Customer Portal for these features:

Table 1.6. Compatibility and support matrix
Feature            | Version | Support Status
Pipelines          | 0.22    | GA
CLI                | 0.17    | GA
Catalog            | 0.22    | GA
Triggers           | 0.12    | TP
Pipeline resources | -       | TP

For questions and feedback, you can send an email to the product team at [email protected].

1.14.2. New features

In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.4.
The custom tasks have the following enhancements:
Pipeline results can now refer to results produced by custom tasks.
Custom tasks can now use workspaces, service accounts, and pod templates to build more complex custom tasks.
The finally task has the following enhancements:
The when expressions are supported in finally tasks, which provides efficient guarded execution and improved reusability of tasks.
A finally task can be configured to consume the results of any task within the same pipeline.

Note: Support for when expressions and finally tasks is unavailable in the OpenShift Container Platform 4.7 web console.

Support for multiple secrets of the type dockercfg or dockerconfigjson is added for authentication at runtime.
Functionality to support sparse-checkout with the git-clone task is added. This enables you to clone only a subset of the repository as your local copy, and helps you to restrict the size of the cloned repositories.
You can create pipeline runs in a pending state without actually starting them. In clusters that are under heavy load, this allows Operators to have control over the start time of the pipeline runs.
Ensure that you set the SYSTEM_NAMESPACE environment variable manually for the controller; this was previously set by default.
A non-root user is now added to the build-base image of pipelines so that git-init can clone repositories as a non-root user.
Support to validate dependencies between resolved resources before a pipeline run starts is added.
All result variables in the pipeline must be valid, and optional workspaces from a pipeline can only be passed to tasks that expect them, for the pipeline to start running.
The controller and webhook run as a non-root group, and their superfluous capabilities have been removed to make them more secure.
You can use the tkn pr logs command to see the log streams for retried task runs.
You can use the --clustertask option in the tkn tr delete command to delete all the task runs associated with a particular cluster task.
Support for using Knative service with the EventListener resource is added by introducing a new customResource field.
An error message is displayed when an event payload does not use the JSON format.
The source control interceptors, such as GitLab, Bitbucket, and GitHub, now use the new InterceptorRequest or InterceptorResponse type interface.
A new CEL function marshalJSON is implemented so that you can encode a JSON object or an array to a string.
An HTTP handler for serving the CEL and the source control core interceptors is added. It packages four core interceptors into a single HTTP server that is deployed in the tekton-pipelines namespace. The EventListener object forwards events over the HTTP server to the interceptor. Each interceptor is available at a different path. For example, the CEL interceptor is available on the /cel path.
The pipelines-scc Security Context Constraint (SCC) is used with the default pipeline service account for pipelines. This new service account is similar to anyuid, but with a minor difference as defined in the YAML for the SCC of OpenShift Container Platform 4.7:

fsGroup:
  type: MustRunAs

1.14.3. Deprecated features

The build-gcs sub-type in the pipeline resource storage, and the gcs-fetcher image, are not supported.
In the taskRun field of cluster tasks, the label tekton.dev/task is removed.
For webhooks, the value v1beta1 corresponding to the field admissionReviewVersions is removed.
The creds-init helper image for building and deploying is removed.
In the triggers spec and binding, the deprecated field template.name is removed in favor of template.ref. You should update all eventListener definitions to use the ref field.

Note: Upgrading from OpenShift Pipelines 1.3.x and earlier versions to OpenShift Pipelines 1.4.0 breaks event listeners because of the unavailability of the template.name field. For such cases, use OpenShift Pipelines 1.4.1, which restores the template.name field.

For EventListener custom resources/objects, the fields PodTemplate and ServiceType are deprecated in favor of Resource.
The deprecated spec style embedded bindings is removed.
The spec field is removed from the triggerSpecBinding.
The event ID representation is changed from a five-character random string to a UUID.

1.14.4. Known issues

In the Developer perspective, the pipeline metrics and triggers features are available only on OpenShift Container Platform 4.7.6 or later versions.
On IBM Power Systems, IBM Z, and LinuxONE, the tkn hub command is not supported.
When you run Maven and Jib-Maven cluster tasks on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters, set the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11.
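For example, a task run that references the Maven cluster task might override the default image as follows. This is a sketch only: any additional parameters and workspaces that the cluster task requires must also be supplied, and the cluster task name should be verified in your cluster.

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: maven-run-
spec:
  taskRef:
    name: maven              # assumed name of the Maven cluster task
    kind: ClusterTask
  params:
    - name: MAVEN_IMAGE
      value: maven:3.6.3-adoptopenjdk-11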
Triggers throw an error resulting from incorrect handling of the JSON format if you have the following configuration in the trigger binding:

params:
  - name: github_json
    value: $(body)

To resolve the issue:
If you are using Triggers v0.11.0 and above, use the marshalJSON CEL function, which takes a JSON object or array and returns the JSON encoding of that object or array as a string.
If you are using an older Triggers version, add the following annotation in the trigger template:

annotations:
  triggers.tekton.dev/old-escape-quotes: "true"

When upgrading from OpenShift Pipelines 1.3.x to 1.4.x, you must recreate the routes.

1.14.5. Fixed issues

Previously, the tekton.dev/task label was removed from the task runs of cluster tasks, and the tekton.dev/clusterTask label was introduced. The problems resulting from that change are resolved by fixing the clustertask describe and delete commands. In addition, the lastrun function for tasks is modified, to fix the issue of the tekton.dev/task label being applied to the task runs of both tasks and cluster tasks in older versions of pipelines.
When doing an interactive tkn pipeline start pipelinename, a PipelineResource is created interactively. The tkn p start command prints the resource status if the resource status is not nil.
Previously, the tekton.dev/task=name label was removed from the task runs created from cluster tasks. This fix modifies the tkn clustertask start command with the --last flag to check for the tekton.dev/task=name label in the created task runs.
When a task uses an inline task specification, the corresponding task run now gets embedded in the pipeline when you run the tkn pipeline describe command, and the task name is returned as embedded.
The tkn version command is fixed to display the version of the installed Tekton CLI tool, without a configured kubeConfiguration namespace or access to a cluster.
If an argument is unexpected or more than one argument is used, the tkn completion command gives an error.
Previously, pipeline runs with the finally tasks nested in a pipeline specification would lose those finally tasks when converted to the v1alpha1 version and restored back to the v1beta1 version. This error occurring during conversion is fixed to avoid potential data loss. Pipeline runs with the finally tasks nested in a pipeline specification are now serialized and stored in the alpha version, only to be deserialized later.
Previously, there was an error in the pod generation when a service account had the secrets field as {}. The task runs failed with CouldntGetTask because the GET request with an empty secret name returned an error, indicating that the resource name may not be empty. This issue is fixed by avoiding an empty secret name in the kubeclient GET request.
Pipelines with the v1beta1 API version can now be requested along with the v1alpha1 version, without losing the finally tasks. Applying the returned v1alpha1 version stores the resource as v1beta1, with the finally section restored to its original state.
Previously, an unset selfLink field in the controller caused an error in Kubernetes v1.20 clusters. As a temporary fix, the CloudEvent source field is set to a value that matches the current source URI, without the value of the auto-populated selfLink field.
Previously, a secret name that contains dots, such as gcr.io, led to a task run creation failure. This happened because of the secret name being used internally as part of a volume mount name.
The volume mount name conforms to the RFC 1123 DNS label convention and disallows dots as part of the name. This issue is fixed by replacing the dot with a dash, which results in a readable name.
Context variables are now validated in the finally tasks.
Previously, when the task run reconciler was passed a task run that did not have a status update containing the name of the pod it created, the task run reconciler listed the pods associated with the task run. The task run reconciler used the labels of the task run, which were propagated to the pod, to find the pod. Changing these labels while the task run was running caused the code to not find the existing pod. As a result, duplicate pods were created. This issue is fixed by changing the task run reconciler to use only the tekton.dev/taskRun Tekton-controlled label when finding the pod.
Previously, when a pipeline accepted an optional workspace and passed it to a pipeline task, the pipeline run reconciler stopped with an error if the workspace was not provided, even though a missing workspace binding is a valid state for an optional workspace. This issue is fixed by ensuring that the pipeline run reconciler does not fail to create a task run, even if an optional workspace is not provided.
The sorted order of step statuses matches the order of step containers.
Previously, the task run status was set to unknown when a pod encountered the CreateContainerConfigError reason, which meant that the task and the pipeline ran until the pod timed out. This issue is fixed by setting the task run status to false, so that the task is set as failed when the pod encounters the CreateContainerConfigError reason.
Previously, pipeline results were resolved on the first reconciliation after a pipeline run was completed. This could cause the resolution to fail, resulting in the Succeeded condition of the pipeline run being overwritten. As a result, the final status information was lost, potentially confusing any services watching the pipeline run conditions. This issue is fixed by moving the resolution of pipeline results to the end of a reconciliation, when the pipeline run is put into a Succeeded or True condition.
The execution status variable is now validated. This avoids validating task results while validating context variables to access the execution status.
Previously, a pipeline result that contained an invalid variable would be added to the pipeline run with the literal expression of the variable intact. Therefore, it was difficult to assess whether the results were populated correctly. This issue is fixed by filtering out the pipeline run results that reference failed task runs. Now, a pipeline result that contains an invalid variable is not emitted by the pipeline run at all.
The tkn eventlistener describe command is fixed to avoid crashing without a template. It also displays the details about trigger references.
Upgrades from OpenShift Pipelines 1.3.x and earlier versions to OpenShift Pipelines 1.4.0 break event listeners because of the unavailability of template.name. In OpenShift Pipelines 1.4.1, the template.name field has been restored to avoid breaking event listeners in triggers.
In OpenShift Pipelines 1.4.1, the ConsoleQuickStart custom resource has been updated to align with OpenShift Container Platform 4.7 capabilities and behavior.

1.15. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.3

1.15.1. New features

Red Hat OpenShift Pipelines Technology Preview (TP) 1.3 is now available on OpenShift Container Platform 4.7.
Red Hat OpenShift Pipelines TP 1.3 is updated to support:
Tekton Pipelines 0.19.0
Tekton tkn CLI 0.15.0
Tekton Triggers 0.10.2
cluster tasks based on Tekton Catalog 0.19.0
IBM Power Systems on OpenShift Container Platform 4.7
IBM Z and LinuxONE on OpenShift Container Platform 4.7
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.3.

1.15.1.1. Pipelines

Tasks that build images, such as S2I and Buildah tasks, now emit a URL of the image built that includes the image SHA.
Conditions in pipeline tasks that reference custom tasks are disallowed because the Condition custom resource definition (CRD) has been deprecated.
Variable expansion is now added in the Task CRD for the following fields: spec.steps[].imagePullPolicy and spec.sidecar[].imagePullPolicy.
You can disable the built-in credential mechanism in Tekton by setting the disable-creds-init feature flag to true.
Resolved when expressions are now listed in the Skipped Tasks and the Task Runs sections in the Status field of the PipelineRun configuration.
The git init command can now clone recursive submodules.
A Task CR author can now specify a timeout for a step in the Task spec.
You can now base the entry point image on the distroless/static:nonroot image and give it a mode to copy itself to the destination, without relying on the cp command being present in the base image.
You can now use the configuration flag require-git-ssh-secret-known-hosts to disallow omitting known hosts in the Git SSH secret. When the flag value is set to true, you must include the known_hosts field in the Git SSH secret. The default value for the flag is false.
The concept of optional workspaces is now introduced. A task or pipeline might declare a workspace optional and conditionally change its behavior based on its presence. A task run or pipeline run might also omit that workspace, thereby modifying the task or pipeline behavior. The default task run workspaces are not added in place of an omitted optional workspace.
Credentials initialization in Tekton now detects an SSH credential that is used with a non-SSH URL, and vice versa, in Git pipeline resources, and logs a warning in the step containers.
The task run controller emits a warning event if the affinity specified by the pod template is overwritten by the affinity assistant.
The task run reconciler now records metrics for cloud events that are emitted once a task run is completed. This includes retries.

1.15.1.2. Pipelines CLI

Support for the --no-headers flag is now added to the following commands: tkn condition list, tkn triggerbinding list, tkn eventlistener list, tkn clustertask list, tkn clustertriggerbinding list.
When used together, the --last or --use options override the --prefix-name and --timeout options.
The tkn eventlistener logs command is now added to view the EventListener logs.
The tekton hub commands are now integrated into the tkn CLI.
The --nocolour option is now changed to --no-color.
The --all-namespaces flag is added to the following commands: tkn triggertemplate list, tkn condition list, tkn triggerbinding list, tkn eventlistener list.

1.15.1.3. Triggers

You can now specify your resource information in the EventListener template.
It is now mandatory for EventListener service accounts to have the list and watch verbs, in addition to the get verb, for all the triggers resources.
This enables you to use Listers to fetch data from EventListener, Trigger, TriggerBinding, TriggerTemplate, and ClusterTriggerBinding resources. You can use this feature to create a Sink object rather than specifying multiple informers, and directly make calls to the API server.
A new Interceptor interface is added to support immutable input event bodies. Interceptors can now add data or fields to a new extensions field, and cannot modify the input bodies, making them immutable. The CEL interceptor uses this new Interceptor interface.
A namespaceSelector field is added to the EventListener resource. Use it to specify the namespaces from where the EventListener resource can fetch the Trigger object for processing events. To use the namespaceSelector field, the service account for the EventListener resource must have a cluster role.
The triggers EventListener resource now supports end-to-end secure connection to the eventlistener pod.
The escaping parameters behavior in the TriggerTemplates resource by replacing " with \" is now removed.
A new resources field, supporting Kubernetes resources, is introduced as part of the EventListener spec.
New functionality for the CEL interceptor, with support for upper- and lower-casing of ASCII strings, is added.
You can embed TriggerBinding resources by using the name and value fields in a trigger, or an event listener.
The PodSecurityPolicy configuration is updated to run in restricted environments. It ensures that containers must run as non-root. In addition, the role-based access control for using the pod security policy is moved from cluster-scoped to namespace-scoped. This ensures that the triggers cannot use other pod security policies that are unrelated to a namespace.
Support for embedded trigger templates is now added. You can either use the name field to refer to an embedded template or embed the template inside the spec field.

1.15.2. Deprecated features

Pipeline templates that use PipelineResources CRDs are now deprecated and will be removed in a future release.
The template.name field is deprecated in favor of the template.ref field and will be removed in a future release.
The -c shorthand for the --check command has been removed. In addition, global tkn flags are added to the version command.

1.15.3. Known issues

CEL overlays add fields to a new top-level extensions function, instead of modifying the incoming event body. TriggerBinding resources can access values within this new extensions function using the $(extensions.<key>) syntax. Update your binding to use the $(extensions.<key>) syntax instead of the $(body.<overlay-key>) syntax, as shown in the example below.
The escaping parameters behavior by replacing " with \" is now removed. If you need to retain the old escaping parameters behavior, add the triggers.tekton.dev/old-escape-quotes: "true" annotation to your TriggerTemplate specification.
You can embed TriggerBinding resources by using the name and value fields inside a trigger or an event listener. However, you cannot specify both the name and ref fields for a single binding. Use the ref field to refer to a TriggerBinding resource and the name field for embedded bindings.
An interceptor cannot attempt to reference a secret outside the namespace of an EventListener resource. You must include secrets in the namespace of the EventListener resource.
In Triggers 0.9.0 and later, if a body or header based TriggerBinding parameter is missing or malformed in an event payload, the default values are used instead of displaying an error.
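For example, a binding that previously read an overlay value from the event body can be updated as follows. The key name is hypothetical; use the key that your CEL overlay writes.

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: example-binding
spec:
  params:
    - name: short-sha
      value: $(extensions.truncated_sha)   # previously $(body.truncated_sha) with the old overlay behavior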
Tasks and pipelines created with WhenExpression objects using Tekton Pipelines 0.16.x must be reapplied to fix their JSON annotations.
When a pipeline accepts an optional workspace and gives it to a task, the pipeline run stalls if the workspace is not provided.
To use the Buildah cluster task in a disconnected environment, ensure that the Dockerfile uses an internal image stream as the base image, and then use it in the same manner as any S2I cluster task.

1.15.4. Fixed issues

Extensions added by a CEL Interceptor are passed on to webhook interceptors by adding the Extensions field within the event body.
The activity timeout for log readers is now configurable using the LogOptions field. However, the default timeout behavior of 10 seconds is retained.
The log command ignores the --follow flag when a task run or pipeline run is complete, and reads available logs instead of live logs.
References to the following Tekton resources: EventListener, TriggerBinding, ClusterTriggerBinding, Condition, and TriggerTemplate are now standardized and made consistent across all user-facing messages in tkn commands.
Previously, if you started a canceled task run or pipeline run with the --use-taskrun <canceled-task-run-name>, --use-pipelinerun <canceled-pipeline-run-name>, or --last flags, the new run would be canceled. This bug is now fixed.
The tkn pr desc command is now enhanced to ensure that it does not fail in case of pipeline runs with conditions.
When you delete a task run using the tkn tr delete command with the --task option, and a cluster task exists with the same name, the task runs for the cluster task also get deleted. As a workaround, filter the task runs by using the TaskRefKind field.
The tkn triggertemplate describe command would display only part of the apiVersion value in the output. For example, only triggers.tekton.dev was displayed instead of triggers.tekton.dev/v1alpha1. This bug is now fixed.
The webhook, under certain conditions, would fail to acquire a lease and not function correctly. This bug is now fixed.
Pipelines with when expressions created in v0.16.3 can now be run in v0.17.1 and later. After an upgrade, you do not need to reapply pipeline definitions created in previous versions, because both the uppercase and lowercase first letters for the annotations are now supported.
By default, the leader-election-ha field is now enabled for high availability. When the disable-ha controller flag is set to true, high availability support is disabled.
Issues with duplicate cloud events are now fixed. Cloud events are now sent only when a condition changes the state, reason, or message.
When a service account name is missing from a PipelineRun or TaskRun spec, the controller uses the service account name from the config-defaults config map. If the service account name is also missing in the config-defaults config map, the controller now sets it to default in the spec.
Validation for compatibility with the affinity assistant is now supported when the same persistent volume claim is used for multiple workspaces, but with different subpaths.

1.16. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.2

1.16.1. New features

Red Hat OpenShift Pipelines Technology Preview (TP) 1.2 is now available on OpenShift Container Platform 4.6.
Red Hat OpenShift Pipelines TP 1.2 is updated to support: Tekton Pipelines 0.16.3 Tekton tkn CLI 0.13.1 Tekton Triggers 0.8.1 cluster tasks based on Tekton Catalog 0.16 IBM Power Systems on OpenShift Container Platform 4.6 IBM Z and LinuxONE on OpenShift Container Platform 4.6 In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.2. 1.16.1.1. Pipelines This release of Red Hat OpenShift Pipelines adds support for a disconnected installation. Note Installations in restricted environments are currently not supported on IBM Power Systems, IBM Z, and LinuxONE. You can now use the when field, instead of conditions resource, to run a task only when certain criteria are met. The key components of WhenExpression resources are Input , Operator , and Values . If all the when expressions evaluate to True , then the task is run. If any of the when expressions evaluate to False , the task is skipped. Step statuses are now updated if a task run is canceled or times out. Support for Git Large File Storage (LFS) is now available to build the base image used by git-init . You can now use the taskSpec field to specify metadata, such as labels and annotations, when a task is embedded in a pipeline. Cloud events are now supported by pipeline runs. Retries with backoff are now enabled for cloud events sent by the cloud event pipeline resource. You can now set a default Workspace configuration for any workspace that a Task resource declares, but that a TaskRun resource does not explicitly provide. Support is available for namespace variable interpolation for the PipelineRun namespace and TaskRun namespace. Validation for TaskRun objects is now added to check that not more than one persistent volume claim workspace is used when a TaskRun resource is associated with an Affinity Assistant. If more than one persistent volume claim workspace is used, the task run fails with a TaskRunValidationFailed condition. Note that by default, the Affinity Assistant is disabled in Red Hat OpenShift Pipelines, so you will need to enable the assistant to use it. 1.16.1.2. Pipelines CLI The tkn task describe , tkn taskrun describe , tkn clustertask describe , tkn pipeline describe , and tkn pipelinerun describe commands now: Automatically select the Task , TaskRun , ClusterTask , Pipeline and PipelineRun resource, respectively, if only one of them is present. Display the results of the Task , TaskRun , ClusterTask , Pipeline and PipelineRun resource in their outputs, respectively. Display workspaces declared in the Task , TaskRun , ClusterTask , Pipeline and PipelineRun resource in their outputs, respectively. You can now use the --prefix-name option with the tkn clustertask start command to specify a prefix for the name of a task run. Interactive mode support has now been provided to the tkn clustertask start command. You can now specify PodTemplate properties supported by pipelines using local or remote file definitions for TaskRun and PipelineRun objects. You can now use the --use-params-defaults option with the tkn clustertask start command to use the default values set in the ClusterTask configuration and create the task run. The --use-param-defaults flag for the tkn pipeline start command now prompts the interactive mode if the default values have not been specified for some of the parameters. 1.16.1.3. Triggers The Common Expression Language (CEL) function named parseYAML has been added to parse a YAML string into a map of strings. 
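For example, a CEL overlay might use parseYAML to turn a YAML string carried in the event body into a map. This is a sketch: the body field name and overlay key are hypothetical, and the surrounding trigger definition is omitted.

interceptors:
  - cel:
      overlays:
        - key: parsed_config
          expression: "body.config_yaml.parseYAML()"   # parses the YAML string in body.config_yaml into a map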
Error messages for parsing CEL expressions have been improved to make them more granular while evaluating expressions and when parsing the hook body for creating the evaluation environment.
Support is now available for marshaling boolean values and maps if they are used as the values of expressions in a CEL overlay mechanism.
The following fields have been added to the EventListener object:
The replicas field enables the event listener to run more than one pod by specifying the number of replicas in the YAML file.
The NodeSelector field enables the EventListener object to schedule the event listener pod to a specific node.
Webhook interceptors can now parse the EventListener-Request-URL header to extract parameters from the original request URL being handled by the event listener.
Annotations from the event listener can now be propagated to the deployment, services, and other pods. Note that custom annotations on services or deployments are overwritten, and hence, must be added to the event listener annotations so that they are propagated.
Proper validation for replicas in the EventListener specification is now available for cases when a user specifies the spec.replicas values as negative or zero.
You can now specify the TriggerCRD object inside the EventListener spec as a reference using the TriggerRef field to create the TriggerCRD object separately and then bind it inside the EventListener spec.
Validation and defaults for the TriggerCRD object are now available.

1.16.2. Deprecated features

$(params) parameters are now removed from the triggertemplate resource and replaced by $(tt.params) to avoid confusion between the resourcetemplate and triggertemplate resource parameters.
The ServiceAccount reference of the optional EventListenerTrigger-based authentication level has changed from an object reference to a ServiceAccountName string. This ensures that the ServiceAccount reference is in the same namespace as the EventListenerTrigger object.
The Conditions custom resource definition (CRD) is now deprecated; use the WhenExpressions CRD instead.
The PipelineRun.Spec.ServiceAccountNames object is being deprecated and replaced by the PipelineRun.Spec.TaskRunSpec[].ServiceAccountName object.

1.16.3. Known issues

This release of Red Hat OpenShift Pipelines adds support for a disconnected installation. However, some images used by the cluster tasks must be mirrored for them to work in disconnected clusters.
Pipelines in the openshift namespace are not deleted after you uninstall the Red Hat OpenShift Pipelines Operator. Use the oc delete pipelines -n openshift --all command to delete the pipelines.
Uninstalling the Red Hat OpenShift Pipelines Operator does not remove the event listeners.
As a workaround, to remove the EventListener and Pod CRDs:
Edit the EventListener object with the foregroundDeletion finalizers:

$ oc patch el/<eventlistener_name> -p '{"metadata":{"finalizers":["foregroundDeletion"]}}' --type=merge

For example:

$ oc patch el/github-listener-interceptor -p '{"metadata":{"finalizers":["foregroundDeletion"]}}' --type=merge

Delete the EventListener CRD:

$ oc patch crd/eventlisteners.triggers.tekton.dev -p '{"metadata":{"finalizers":[]}}' --type=merge

When you run a multi-arch container image task without command specification on an IBM Power Systems (ppc64le) or IBM Z (s390x) cluster, the TaskRun resource fails with the following error:

Error executing command: fork/exec /bin/bash: exec format error

As a workaround, use an architecture-specific container image or specify the sha256 digest to point to the correct architecture. To get the sha256 digest, enter:

$ skopeo inspect --raw <image_name> | jq '.manifests[] | select(.platform.architecture == "<architecture>") | .digest'

1.16.4. Fixed issues

A simple syntax validation to check the CEL filter, overlays in the Webhook validator, and the expressions in the interceptor has now been added.
Triggers no longer overwrite annotations set on the underlying deployment and service objects.
Previously, an event listener would stop accepting events. This fix adds an idle timeout of 120 seconds for the EventListener sink to resolve this issue.
Previously, canceling a pipeline run with a Failed(Canceled) state gave a success message. This has been fixed to display an error instead.
The tkn eventlistener list command now provides the status of the listed event listeners, thus enabling you to easily identify the available ones.
Consistent error messages are now displayed for the triggers list and triggers describe commands when triggers are not installed or when a resource cannot be found.
Previously, a large number of idle connections would build up during cloud event delivery. The DisableKeepAlives: true parameter was added to the cloudeventclient config to fix this issue. Thus, a new connection is set up for every cloud event.
Previously, the creds-init code would write empty files to the disk even if credentials of a given type were not provided. This fix modifies the creds-init code to write files for only those credentials that have actually been mounted from correctly annotated secrets.

1.17. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.1

1.17.1. New features

Red Hat OpenShift Pipelines Technology Preview (TP) 1.1 is now available on OpenShift Container Platform 4.5.
Red Hat OpenShift Pipelines TP 1.1 is updated to support:
Tekton Pipelines 0.14.3
Tekton tkn CLI 0.11.0
Tekton Triggers 0.6.1
cluster tasks based on Tekton Catalog 0.14
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.1.

1.17.1.1. Pipelines

Workspaces can now be used instead of pipeline resources. It is recommended that you use workspaces in OpenShift Pipelines, as pipeline resources are difficult to debug, limited in scope, and make tasks less reusable. For more details on workspaces, see the Understanding OpenShift Pipelines section.
Workspace support for volume claim templates has been added:
The volume claim template for a pipeline run and task run can now be added as a volume source for workspaces. The tekton-controller then creates a persistent volume claim (PVC) using the template that is seen as a PVC for all task runs in the pipeline.
Thus, you do not need to define the PVC configuration every time it binds a workspace that spans multiple tasks.
Support to find the name of the PVC when a volume claim template is used as a volume source is now available using variable substitution.
Support for improving audits:
The PipelineRun.Status field now contains the status of every task run in the pipeline and the pipeline specification used to instantiate a pipeline run, to monitor the progress of the pipeline run.
Pipeline results have been added to the pipeline specification and PipelineRun status.
The TaskRun.Status field now contains the exact task specification used to instantiate the TaskRun resource.
Support to apply the default parameter to conditions.
A task run created by referencing a cluster task now adds the tekton.dev/clusterTask label instead of the tekton.dev/task label.
The kube config writer now adds the ClientKeyData and the ClientCertificateData configurations in the resource structure to enable replacement of the pipeline resource type cluster with the kubeconfig-creator task.
The names of the feature-flags and the config-defaults config maps are now customizable.
Support for the host network in the pod template used by the task run is now available.
An Affinity Assistant is now available to support node affinity in task runs that share a workspace volume. By default, this is disabled on OpenShift Pipelines.
The pod template has been updated to specify imagePullSecrets to identify secrets that the container runtime should use to authorize container image pulls when starting a pod.
Support for emitting warning events from the task run controller if the controller fails to update the task run.
Standard or recommended k8s labels have been added to all resources to identify resources belonging to an application or component.
The entrypoint process is now notified for signals, and these signals are then propagated using a dedicated PID group of the entrypoint process.
The pod template can now be set on a task level at runtime using task run specs.
Support for emitting Kubernetes events:
The controller now emits events for additional task run lifecycle events - taskrun started and taskrun running.
The pipeline run controller now emits an event every time a pipeline starts.
In addition to the default Kubernetes events, support for cloud events for task runs is now available. The controller can be configured to send any task run events, such as create, started, and failed, as cloud events.
Support for using the $(context.<task|taskRun|pipeline|pipelineRun>.name) variable to reference the appropriate name in pipeline runs and task runs.
Validation for pipeline run parameters is now available to ensure that all the parameters required by the pipeline are provided by the pipeline run. This also allows pipeline runs to provide extra parameters in addition to the required parameters.
You can now specify tasks within a pipeline that will always execute before the pipeline exits, either after finishing all tasks successfully or after a task in the pipeline failed, by using the finally field in the pipeline YAML file.
The git-clone cluster task is now available.

1.17.1.2. Pipelines CLI

Support for embedded trigger binding is now available to the tkn eventlistener describe command.
Support to recommend subcommands and make suggestions if an incorrect subcommand is used.
The tkn task describe command now auto-selects the task if only one task is present in the pipeline.
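The volume claim template support described in the Pipelines section above can be used in a pipeline run as in the following sketch; the pipeline and workspace names are placeholders.

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-deploy-run-
spec:
  pipelineRef:
    name: build-deploy               # placeholder pipeline
  workspaces:
    - name: shared-workspace         # must match a workspace declared by the pipeline
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi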
You can now start a task using default parameter values by specifying the --use-param-defaults flag in the tkn task start command.
You can now specify a volume claim template for pipeline runs or task runs using the --workspace option with the tkn pipeline start or tkn task start commands.
The tkn pipelinerun logs command now displays logs for the final tasks listed in the finally section.
Interactive mode support has now been provided to the tkn task start command and the describe subcommand for the following tkn resources: pipeline, pipelinerun, task, taskrun, clustertask, and pipelineresource.
The tkn version command now displays the version of the triggers installed in the cluster.
The tkn pipeline describe command now displays parameter values and timeouts specified for tasks used in the pipeline.
Support added for the --last option for the tkn pipelinerun describe and the tkn taskrun describe commands to describe the most recent pipeline run or task run, respectively.
The tkn pipeline describe command now displays the conditions applicable to the tasks in the pipeline.
You can now use the --no-headers and --all-namespaces flags with the tkn resource list command.

1.17.1.3. Triggers

The following Common Expression Language (CEL) functions are now available:
parseURL to parse and extract portions of a URL
parseJSON to parse JSON value types embedded in a string in the payload field of the deployment webhook
A new interceptor for webhooks from Bitbucket has been added.
Event listeners now display the Address URL and the Available status as additional fields when listed with the kubectl get command.
Trigger template params now use the $(tt.params.<paramName>) syntax instead of $(params.<paramName>) to reduce the confusion between trigger template and resource template params.
You can now add tolerations in the EventListener CRD to ensure that event listeners are deployed with the same configuration even if all nodes are tainted due to security or management issues.
You can now add a readiness probe for the event listener deployment at URL/live.
Support for embedding TriggerBinding specifications in event listener triggers is now added.
Trigger resources are now annotated with the recommended app.kubernetes.io labels.

1.17.2. Deprecated features

The following items are deprecated in this release:
The --namespace or -n flags for all cluster-wide commands, including the clustertask and clustertriggerbinding commands, are deprecated. They will be removed in a future release.
The name field in triggers.bindings within an event listener has been deprecated in favor of the ref field and will be removed in a future release.
Variable interpolation in trigger templates using $(params) has been deprecated in favor of using $(tt.params) to reduce confusion with the pipeline variable interpolation syntax. The $(params.<paramName>) syntax will be removed in a future release.
The tekton.dev/task label is deprecated on cluster tasks.
The TaskRun.Status.ResourceResults.ResourceRef field is deprecated and will be removed.
The tkn pipeline create, tkn task create, and tkn resource create -f subcommands have been removed.
Namespace validation has been removed from tkn commands.
The default timeout of 1h and the -t flag for the tkn ct start command have been removed.
The s2i cluster task has been deprecated.

1.17.3. Known issues

Conditions do not support workspaces.
The --workspace option and the interactive mode are not supported for the tkn clustertask start command.
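To illustrate the $(tt.params.<paramName>) syntax noted above, a trigger template that passes an event value into a pipeline run might look like the following sketch. The resource names are placeholders, and the exact API versions depend on the Triggers and Pipelines releases in use.

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: pipeline-template            # placeholder name
spec:
  params:
    - name: git-revision
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: build-run-
      spec:
        pipelineRef:
          name: build-pipeline       # placeholder pipeline
        params:
          - name: revision
            value: $(tt.params.git-revision)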
Support of backward compatibility for the $(params.<paramName>) syntax forces you to use trigger templates with pipeline-specific params, as the trigger's webhook is unable to differentiate trigger params from pipeline params.
Pipeline metrics report incorrect values when you run a promQL query for tekton_taskrun_count and tekton_taskrun_duration_seconds_count.
Pipeline runs and task runs continue to be in the Running and Running(Pending) states, respectively, even when a nonexistent PVC name is given to a workspace.

1.17.4. Fixed issues

Previously, the tkn task delete <name> --trs command would delete both the task and cluster task if the name of the task and cluster task were the same. With this fix, the command deletes only the task runs that are created by the task <name>.
Previously, the tkn pr delete -p <name> --keep 2 command would disregard the -p flag when used with the --keep flag and would delete all the pipeline runs except the latest two. With this fix, the command deletes only the pipeline runs that are created by the pipeline <name>, except for the latest two.
The tkn triggertemplate describe output now displays resource templates in a table format instead of YAML format.
Previously, the buildah cluster task failed when a new user was added to a container. With this fix, the issue has been resolved.

1.18. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.0

1.18.1. New features

Red Hat OpenShift Pipelines Technology Preview (TP) 1.0 is now available on OpenShift Container Platform 4.4.
Red Hat OpenShift Pipelines TP 1.0 is updated to support:
Tekton Pipelines 0.11.3
Tekton tkn CLI 0.9.0
Tekton Triggers 0.4.0
cluster tasks based on Tekton Catalog 0.11
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.0.

1.18.1.1. Pipelines

Support for the v1beta1 API version.
Support for an improved limit range. Previously, limit range was specified exclusively for the task run and the pipeline run. Now there is no need to explicitly specify the limit range. The minimum limit range across the namespace is used.
Support for sharing data between tasks using task results and task params.
Pipelines can now be configured to not overwrite the HOME environment variable and the working directory of steps.
Similar to task steps, sidecars now support script mode.
You can now specify a different scheduler name in the task run podTemplate resource.
Support for variable substitution using Star Array Notation.
The Tekton controller can now be configured to monitor an individual namespace.
A new description field is now added to the specification of pipelines, tasks, cluster tasks, resources, and conditions.
Addition of proxy parameters to Git pipeline resources.

1.18.1.2. Pipelines CLI

The describe subcommand is now added for the following tkn resources: EventListener, Condition, TriggerTemplate, ClusterTask, and TriggerBinding.
Support added for v1beta1 to the following resources, along with backward compatibility for v1alpha1: ClusterTask, Task, Pipeline, PipelineRun, and TaskRun.
The following commands can now list output from all namespaces using the --all-namespaces flag option: tkn task list, tkn pipeline list, tkn taskrun list, tkn pipelinerun list. The output of these commands is also enhanced to display information without headers using the --no-headers flag option.
You can now start a pipeline using default parameter values by specifying the --use-param-defaults flag in the tkn pipelines start command.
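For reference, the task results support mentioned in the Pipelines section above works as in the following sketch. One task writes a result and a pipeline task consumes it as a parameter; the names and image are placeholders, and the consuming side is shown as a comment.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: generate-build-id
spec:
  results:
    - name: build-id
      description: A generated build identifier
  steps:
    - name: generate
      image: registry.access.redhat.com/ubi8/ubi-minimal   # placeholder image
      script: |
        date +%s | tee $(results.build-id.path)
---
# In a Pipeline, a later task consumes the result as a parameter:
#   params:
#     - name: id
#       value: $(tasks.generate-build-id.results.build-id)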
Support for workspaces is now added to the tkn pipeline start and tkn task start commands.
A new clustertriggerbinding command is now added with the following subcommands: describe, delete, and list.
You can now directly start a pipeline run using a local or remote YAML file.
The describe subcommand now displays an enhanced and detailed output. With the addition of new fields, such as description, timeout, param description, and sidecar status, the command output now provides more detailed information about a specific tkn resource.
The tkn task log command now displays logs directly if only one task is present in the namespace.

1.18.1.3. Triggers

Triggers can now create both v1alpha1 and v1beta1 pipeline resources.
Support for the new Common Expression Language (CEL) interceptor function compareSecret. This function securely compares strings to secrets in CEL expressions.
Support for authentication and authorization at the event listener trigger level.

1.18.2. Deprecated features

The following items are deprecated in this release:
The environment variable $HOME, and the variable workingDir in the Steps specification, are deprecated and might be changed in a future release. Currently, in a Step container, the HOME and workingDir variables are overwritten with /tekton/home and /workspace, respectively. In a later release, these two fields will not be modified, and will be set to the values defined in the container image and the Task YAML. For this release, use the disable-home-env-overwrite and disable-working-directory-overwrite flags to disable overwriting of the HOME and workingDir variables.
The following commands are deprecated and might be removed in a future release: tkn pipeline create, tkn task create.
The -f flag with the tkn resource create command is now deprecated. It might be removed in a future release.
The -t flag and the --timeout flag (with seconds format) for the tkn clustertask create command are now deprecated. Only the duration timeout format is now supported, for example 1h30s. These deprecated flags might be removed in a future release.

1.18.3. Known issues

If you are upgrading from an older version of Red Hat OpenShift Pipelines, you must delete your existing deployments before upgrading to Red Hat OpenShift Pipelines version 1.0. To delete an existing deployment, you must first delete Custom Resources and then uninstall the Red Hat OpenShift Pipelines Operator. For more details, see the uninstalling Red Hat OpenShift Pipelines section.
Submitting the same v1alpha1 tasks more than once results in an error. Use the oc replace command instead of oc apply when re-submitting a v1alpha1 task.
The buildah cluster task does not work when a new user is added to a container. When the Operator is installed, the --storage-driver flag for the buildah cluster task is not specified, therefore the flag is set to its default value. In some cases, this causes the storage driver to be set incorrectly. When a new user is added, the incorrect storage driver results in the failure of the buildah cluster task. As a workaround, manually set the --storage-driver flag value to overlay in the buildah-task.yaml file:
Log in to your cluster as a cluster-admin user.
Use the oc edit command to edit the buildah cluster task. The current version of the buildah clustertask YAML file opens in the editor set by your EDITOR environment variable.
Under the Steps field, locate the command field and add the --storage-driver=overlay flag to the buildah command; an illustrative sketch follows this procedure.
Save the file and exit.
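The exact command field of the shipped buildah cluster task is not reproduced here. As an illustration only, adding the flag follows this pattern, with the remaining arguments of the original step left unchanged.

steps:
  - name: build
    command:
      - buildah
      - --storage-driver=overlay   # flag added by the workaround
      - bud
      # ...the remaining arguments from the original cluster task stay unchanged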
Alternatively, you can also modify the buildah cluster task YAML file directly on the web console by navigating to Pipelines → Cluster Tasks → buildah. Select Edit Cluster Task from the Actions menu and update the command field as described in the procedure.

1.18.4. Fixed issues

Previously, the DeploymentConfig task triggered a new deployment build even when an image build was already in progress. This caused the deployment of the pipeline to fail. With this fix, the deploy task command is now replaced with the oc rollout status command, which waits for the in-progress deployment to finish.
Support for the APP_NAME parameter is now added in pipeline templates.
Previously, the pipeline template for Java S2I failed to look up the image in the registry. With this fix, the image is looked up using the existing image pipeline resources instead of the user-provided IMAGE_NAME parameter.
All the OpenShift Pipelines images are now based on the Red Hat Universal Base Images (UBI).
Previously, when the pipeline was installed in a namespace other than tekton-pipelines, the tkn version command displayed the pipeline version as unknown. With this fix, the tkn version command now displays the correct pipeline version in any namespace.
The -c flag is no longer supported for the tkn version command.
Non-admin users can now list the cluster trigger bindings.
The event listener CompareSecret function is now fixed for the CEL Interceptor.
The list, describe, and start subcommands for tasks and cluster tasks now correctly display the output in case a task and cluster task have the same name.
Previously, the OpenShift Pipelines Operator modified the privileged security context constraints (SCCs), which caused an error during cluster upgrade. This error is now fixed.
In the tekton-pipelines namespace, the timeouts of all task runs and pipeline runs are now set to the value of the default-timeout-minutes field using the config map.
Previously, the pipelines section in the web console was not displayed for non-admin users. This issue is now resolved.
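For reference, the default-timeout-minutes setting mentioned above lives in the config-defaults config map. A minimal sketch with an illustrative value follows.

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  default-timeout-minutes: "60"   # illustrative value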
[ "apiVersion: tekton.dev/v1 kind: Task metadata: name: test-task spec: steps: - name: fetch-repository stepRef: resolver: git params: - name: url value: https://github.com/tektoncd/catalog.git - name: revision value: main - name: pathInRepo value: stepaction/git-clone/0.1/git-clone params: - name: url value: USD(params.repo-url) - name: revision value: USD(params.tag-name) - name: output-path value: USD(workspaces.output.path)", "apiVersion: tekton.dev/v1 kind: Task metadata: generateName: something- spec: params: - name: myWorkspaceSecret steps: - image: registry.redhat.io/ubi/ubi8-minimal:latest script: | echo \"Hello World\" workspaces: - name: myworkspace secret: secretName: USD(params.myWorkspaceSecret)", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: options: configMaps: config-defaults: data: default-imagepullbackoff-timeout: \"5m\"", "apiVersion: triggers.tekton.dev/v1beta1 kind: TriggerTemplate metadata: name: create-configmap-template spec: params: - name: action resourcetemplates: - apiVersion: v1 kind: ConfigMap metadata: generateName: sample- data: field: \"Action is : USD(tt.params.action)\"", "apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: simple-eventlistener spec: serviceAccountName: simple-tekton-robot triggers: - name: simple-trigger bindings: - ref: simple-binding template: ref: simple-template resources: kubernetesResource: serviceType: NodePort servicePort: 38080", "apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: listener-loadbalancerclass spec: serviceAccountName: tekton-triggers-example-sa triggers: - name: example-trig bindings: - ref: pipeline-binding - ref: message-binding template: ref: pipeline-template resources: kubernetesResource: serviceType: LoadBalancer serviceLoadBalancerClass: private", "/test pipelinerun1 revision=main param1=\"value1\" param2=\"value \\\"value2\\\" with quotes\"", "/test checker target_branch=backport-branch", "apiVersion: operator.tekton.dev/v1 kind: TektonResult metadata: name: result spec: options: deployments: tekton-results-watcher: spec: template: spec: containers: - name: watcher args: - \"--updateLogTimeout=60s\"", "oc get tektoninstallersets", "oc delete tektoninstallerset <installerset_name>", "apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pr-v1 spec: pipelineSpec: tasks: - name: noop-task taskSpec: steps: - name: noop-task image: registry.access.redhat.com/ubi9/ubi-micro script: | exit 0 taskRunTemplate: podTemplate: securityContext: runAsNonRoot: true runAsUser: 1001", "apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: pr-v1beta1 spec: pipelineSpec: tasks: - name: noop-task taskSpec: steps: - name: noop-task image: registry.access.redhat.com/ubi9/ubi-micro script: | exit 0 podTemplate: securityContext: runAsNonRoot: true runAsUser: 1001", "apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: remote-task-reference spec: taskRef: resolver: http params: - name: url value: https://raw.githubusercontent.com/tektoncd-catalog/git-clone/main/task/git-clone/git-clone.yaml", "apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: http-demo spec: pipelineRef: resolver: http params: - name: url value: https://raw.githubusercontent.com/tektoncd/catalog/main/pipeline/build-push-gke-deploy/0.1/build-push-gke-deploy.yaml", "apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: pipeline-param-enum spec: params: - name: message enum: [\"v1\", \"v2\"] default: \"v1\"", "apiVersion: 
tekton.dev/v1beta1 kind: TaskRun metadata: name: git-api-demo-tr spec: taskRef: resolver: git params: - name: org value: tektoncd - name: repo value: catalog - name: revision value: main - name: pathInRepo value: task/git-clone/0.6/git-clone.yaml # create the my-secret-token secret in the namespace where the # pipelinerun is created. The secret must contain a GitHub personal access # token in the token key of the secret. - name: token value: my-secret-token - name: tokenKey value: token - name: scmType value: github - name: serverURL value: https://ghe.mycompany.com", "\".translate(\"[^a-z0-9]+\", \"ABC\")", "This is USDan Invalid5String", "ABChisABCisABCanABCnvalid5ABCtring", "\"data_type==TASK_RUN && (data.spec.pipelineSpec.tasks[0].name=='hello'||data.metadata.name=='hello')\"", "apiVersion: tekton.dev/v1 kind: Task metadata: name: uid-task spec: results: - name: uid steps: - name: uid image: alpine command: [\"/bin/sh\", \"-c\"] args: - echo \"1001\" | tee USD(results.uid.path) --- apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: uid-pipeline-run spec: pipelineSpec: tasks: - name: add-uid taskRef: name: uid-task - name: show-uid taskSpec: steps: - name: show-uid image: alpine command: [\"/bin/sh\", \"-c\"] args: - echo USD(tasks.add-uid.results.uid)", "apiVersion: tekton.dev/v1beta1 kind: PipelineRun spec: params: - name: source_url value: \"{{ source_url }}\" pipelineSpec: params: - name: source_url", "oc get tektoninstallersets.operator.tekton.dev | awk '/pipeline-main-static/ {print USD1}' | xargs oc delete tektoninstallersets", "oc patch tektonconfig config --type=\"merge\" -p '{\"spec\": {\"platforms\": {\"openshift\":{\"pipelinesAsCode\": {\"enable\": false}}}}}'", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: enable-bundles-resolver: true enable-cluster-resolver: true enable-git-resolver: true enable-hub-resolver: true", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: bundles-resolver-config: default-service-account: pipelines cluster-resolver-config: default-namespace: test git-resolver-config: server-url: localhost.com hub-resolver-config: default-tekton-hub-catalog: tekton", "annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && \"docs/*.md\".pathChanged()", "yaml kind: PipelineRun spec: timeouts: pipeline: \"0\" # No timeout tasks: \"0h3m0s\"", "- name: IMAGE_NAME value: 'image-registry.openshift-image-registry.svc:5000/<test_namespace>/<test_pipelinerun>'", "- name: IMAGE_NAME value: 'image-registry.openshift-image-registry.svc:5000/{{ target_namespace }}/USD(context.pipelineRun.name)'", "kind: Task apiVersion: tekton.dev/v1beta1 metadata: name: write-array annotations: description: | A simple task that writes array spec: results: - name: array-results type: array description: The array results", "echo -n \"[\\\"hello\\\",\\\"world\\\"]\" | tee USD(results.array-results.path)", "apiVersion: v1 kind: Secret metadata: name: tekton-hub-db labels: app: tekton-hub-db type: Opaque stringData: POSTGRES_HOST: <hostname> POSTGRES_DB: <database_name> POSTGRES_USER: <username> POSTGRES_PASSWORD: <password> POSTGRES_PORT: <listening_port_number>", "annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && target_branch == \"main\" && source_branch == \"wip\"", "apiVersion: v1 kind: ConfigMap metadata: name: config-observability namespace: tekton-pipelines labels: app.kubernetes.io/instance: default 
app.kubernetes.io/part-of: tekton-pipelines data: _example: | metrics.taskrun.level: \"task\" metrics.taskrun.duration-type: \"histogram\" metrics.pipelinerun.level: \"pipeline\" metrics.pipelinerun.duration-type: \"histogram\"", "oc get route -n openshift-pipelines pipelines-as-code-controller --template='https://{{ .spec.host }}'", "error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io \"openshift-operators-prometheus-k8s-read-binding\" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:\"rbac.authorization.k8s.io\", Kind:\"Role\", Name:\"openshift-operator-read\"}: cannot change roleRef", "Error: error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted time=\"2022-03-04T09:47:57Z\" level=error msg=\"error writing \\\"0 0 4294967295\\\\n\\\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted\" time=\"2022-03-04T09:47:57Z\" level=error msg=\"(unable to determine exit status)\"", "securityContext: capabilities: add: [\"SETFCAP\"]", "oc get tektoninstallerset NAME READY REASON addon-clustertasks-nx5xz False Error addon-communityclustertasks-cfb2p True addon-consolecli-ftrb8 True addon-openshift-67dj2 True addon-pac-cf7pz True addon-pipelines-fvllm True addon-triggers-b2wtt True addon-versioned-clustertasks-1-8-hqhnw False Error pipeline-w75ww True postpipeline-lrs22 True prepipeline-ldlhw True rhosp-rbac-4dmgb True trigger-hfg64 True validating-mutating-webhoook-28rf7 True", "oc get tektonconfig config NAME VERSION READY REASON config 1.8.1 True", "tkn pipeline export test_pipeline -n openshift-pipelines", "tkn pipelinerun export test_pipeline_run -n openshift-pipelines", "spec: profile: all targetNamespace: openshift-pipelines addon: params: - name: clusterTasks value: \"true\" - name: pipelineTemplates value: \"true\" - name: communityClusterTasks value: \"false\"", "hub: params: - name: enable-devconsole-integration value: \"true\"", "STEP 7: RUN /usr/libexec/s2i/assemble /bin/sh: /usr/libexec/s2i/assemble: No such file or directory subprocess exited with status 127 subprocess exited with status 127 error building at STEP \"RUN /usr/libexec/s2i/assemble\": exit status 127 time=\"2021-11-04T13:05:26Z\" level=error msg=\"exit status 127\"", "error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io \"openshift-operators-prometheus-k8s-read-binding\" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:\"rbac.authorization.k8s.io\", Kind:\"Role\", Name:\"openshift-operator-read\"}: cannot change roleRef", "Error: error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted time=\"2022-03-04T09:47:57Z\" level=error msg=\"error writing \\\"0 0 4294967295\\\\n\\\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted\" time=\"2022-03-04T09:47:57Z\" level=error msg=\"(unable to determine exit status)\"", "securityContext: capabilities: add: [\"SETFCAP\"]", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: disable-working-directory-overwrite: false disable-home-env-overwrite: false", "STEP 7: RUN /usr/libexec/s2i/assemble /bin/sh: /usr/libexec/s2i/assemble: No such file or directory subprocess exited with status 127 subprocess exited with status 127 error building at STEP \"RUN /usr/libexec/s2i/assemble\": exit status 127 time=\"2021-11-04T13:05:26Z\" level=error msg=\"exit status 127\"", "Error 
from server (InternalError): Internal error occurred: failed calling webhook \"validation.webhook.pipeline.tekton.dev\": Post \"https://tekton-pipelines-webhook.openshift-pipelines.svc:443/resource-validation?timeout=10s\": service \"tekton-pipelines-webhook\" not found.", "oc get route -n <namespace>", "oc edit route -n <namespace> <el-route_name>", "spec: host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com port: targetPort: 8000 to: kind: Service name: el-event-listener-q8c3w5 weight: 100 wildcardPolicy: None", "spec: host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com port: targetPort: http-listener to: kind: Service name: el-event-listener-q8c3w5 weight: 100 wildcardPolicy: None", "pruner: resources: - pipelinerun - taskrun schedule: \"*/5 * * * *\" # cron schedule keep: 2 # delete all keeping n", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: profile: all targetNamespace: openshift-pipelines addon: params: - name: clusterTasks value: \"true\" - name: pipelineTemplates value: \"true\"", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: profile: all targetNamespace: openshift-pipelines pipeline: params: - name: enableMetrics value: \"true\"", "tkn pipeline start build-and-deploy -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.15/01_pipeline/03_persistent_volume_claim.yaml -p deployment-name=pipelines-vote-api -p git-url=https://github.com/openshift/pipelines-vote-api.git -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-api --use-param-defaults", "- name: deploy params: - name: SCRIPT value: oc rollout status <deployment-name> runAfter: - build taskRef: kind: ClusterTask name: openshift-client", "steps: - name: git env: - name: HOME value: /root image: USD(params.BASE_IMAGE) workingDir: USD(workspaces.source.path)", "fsGroup: type: MustRunAs", "params: - name: github_json value: USD(body)", "annotations: triggers.tekton.dev/old-escape-quotes: \"true\"", "oc patch el/<eventlistener_name> -p '{\"metadata\":{\"finalizers\":[\"foregroundDeletion\"]}}' --type=merge", "oc patch el/github-listener-interceptor -p '{\"metadata\":{\"finalizers\":[\"foregroundDeletion\"]}}' --type=merge", "oc patch crd/eventlisteners.triggers.tekton.dev -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge", "Error executing command: fork/exec /bin/bash: exec format error", "skopeo inspect --raw <image_name>| jq '.manifests[] | select(.platform.architecture == \"<architecture>\") | .digest'", "useradd: /etc/passwd.8: lock file already used useradd: cannot lock /etc/passwd; try again later.", "oc login -u <login> -p <password> https://openshift.example.com:6443", "oc edit clustertask buildah", "command: ['buildah', 'bud', '--format=USD(params.FORMAT)', '--tls-verify=USD(params.TLSVERIFY)', '--layers', '-f', 'USD(params.DOCKERFILE)', '-t', 'USD(resources.outputs.image.url)', 'USD(params.CONTEXT)']", "command: ['buildah', '--storage-driver=overlay', 'bud', '--format=USD(params.FORMAT)', '--tls-verify=USD(params.TLSVERIFY)', '--no-cache', '-f', 'USD(params.DOCKERFILE)', '-t', 'USD(params.IMAGE)', 'USD(params.CONTEXT)']" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/about_openshift_pipelines/op-release-notes
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_network_functions_virtualization/proc_providing-feedback-on-red-hat-documentation
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 6 - Registering the system and managing subscriptions Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions
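The Registration Assistant generates the exact command for your operating system version, so the following is only a rough sketch of a typical Red Hat Enterprise Linux registration; the username placeholder and the automatic subscription attachment are assumptions rather than the command you will be shown.

# Register the system with your Red Hat account, then attach an available subscription
sudo subscription-manager register --username <customer_portal_username>
sudo subscription-manager attach --auto

After registration, the AMQ RPM packages can be installed with yum from the entitled repositories.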
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_jms_client/using_your_subscription
Data Grid downloads
Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/upgrading_data_grid/rhdg-downloads_datagrid
Chapter 5. Technology previews
Chapter 5. Technology previews This section describes the technology preview features introduced in Red Hat OpenShift Data Foundation 4.18 under Technology Preview support limitations. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Technology Preview features are provided with a limited support scope, as detailed on the Customer Portal: Technology Preview Features Support Scope . 5.1. Multi-Volume Consistency for Backup - CephFS and block Multi-volume consistency provides crash-consistent consistency groups for backup solutions and can be used by applications that are deployed over multiple volumes. This provides support for OpenShift Virtualization and helps to better support such applications. Red Hat OpenShift Data Foundation is the first storage vendor that implements this new and important CSI feature. For more information, see the knowledgebase article CephFS VolumeGroupSnapshot in OpenShift Data Foundation . 5.2. More disaster recovery recipe capabilities for CephFS-based applications The capabilities of disaster recovery recipes are enhanced to support more applications. Support for CephFS-based applications is in Technology Preview status for this release.
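The knowledgebase article linked above contains the authoritative examples. As a rough orientation only, a CephFS consistency-group snapshot request built on the CSI volume group snapshot API might look like the following sketch; the API group and version, the class name, and the selector labels are assumptions that can differ between Data Foundation releases, so verify them against the article before use.

apiVersion: groupsnapshot.storage.k8s.io/v1alpha1   # assumed API group/version
kind: VolumeGroupSnapshot
metadata:
  name: my-app-group-snapshot    # hypothetical name
  namespace: my-app              # hypothetical application namespace
spec:
  volumeGroupSnapshotClassName: cephfs-groupsnapclass   # assumed class name
  source:
    selector:
      matchLabels:
        app: my-app              # every PVC with this label is snapshotted together

The value of the feature is that all persistent volume claims selected by the label are captured in a single crash-consistent group, which is what multi-volume workloads such as virtual machines need for usable backups.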
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/4.18_release_notes/technology_previews
7.6. Understanding Audit Log Files
7.6. Understanding Audit Log Files By default, the Audit system stores log entries in the /var/log/audit/audit.log file; if log rotation is enabled, rotated audit.log files are stored in the same directory. The following Audit rule logs every attempt to read or modify the /etc/ssh/sshd_config file: If the auditd daemon is running, for example, using the following command creates a new event in the Audit log file: This event in the audit.log file looks as follows: The above event consists of four records, which share the same time stamp and serial number. Records always start with the type= keyword. Each record consists of several name = value pairs separated by a white space or a comma. A detailed analysis of the above event follows: First Record type=SYSCALL The type field contains the type of the record. In this example, the SYSCALL value specifies that this record was triggered by a system call to the kernel. For a list of all possible type values and their explanations, see Audit Record Types . msg=audit(1364481363.243:24287): The msg field records: a time stamp and a unique ID of the record in the form audit( time_stamp : ID ) . Multiple records can share the same time stamp and ID if they were generated as part of the same Audit event. The time stamp is using the Unix time format - seconds since 00:00:00 UTC on 1 January 1970. various event-specific name = value pairs provided by the kernel or user space applications. arch=c000003e The arch field contains information about the CPU architecture of the system. The value, c000003e , is encoded in hexadecimal notation. When searching Audit records with the ausearch command, use the -i or --interpret option to automatically convert hexadecimal values into their human-readable equivalents. The c000003e value is interpreted as x86_64 . syscall=2 The syscall field records the type of the system call that was sent to the kernel. The value, 2 , can be matched with its human-readable equivalent in the /usr/include/asm/unistd_64.h file. In this case, 2 is the open system call. Note that the ausyscall utility allows you to convert system call numbers to their human-readable equivalents. Use the ausyscall --dump command to display a listing of all system calls along with their numbers. For more information, see the ausyscall (8) man page. success=no The success field records whether the system call recorded in that particular event succeeded or failed. In this case, the call did not succeed. exit=-13 The exit field contains a value that specifies the exit code returned by the system call. This value varies for different system calls. You can interpret the value to its human-readable equivalent with the following command: Note that the example assumes that your Audit log contains an event that failed with exit code -13 . a0=7fffd19c5592 , a1=0 , a2=7fffd19c5592 , a3=a The a0 to a3 fields record the first four arguments, encoded in hexadecimal notation, of the system call in this event. These arguments depend on the system call that is used; they can be interpreted by the ausearch utility. items=1 The items field contains the number of PATH auxiliary records that follow the syscall record. ppid=2686 The ppid field records the Parent Process ID (PPID). In this case, 2686 was the PPID of the parent process, such as bash . pid=3538 The pid field records the Process ID (PID). In this case, 3538 was the PID of the cat process. auid=1000 The auid field records the Audit user ID, that is the loginuid.
This ID is assigned to a user upon login and is inherited by every process even when the user's identity changes, for example, by switching user accounts with the su - john command. uid=1000 The uid field records the user ID of the user who started the analyzed process. The user ID can be interpreted into user names with the following command: ausearch -i --uid UID . gid=1000 The gid field records the group ID of the user who started the analyzed process. euid=1000 The euid field records the effective user ID of the user who started the analyzed process. suid=1000 The suid field records the set user ID of the user who started the analyzed process. fsuid=1000 The fsuid field records the file system user ID of the user who started the analyzed process. egid=1000 The egid field records the effective group ID of the user who started the analyzed process. sgid=1000 The sgid field records the set group ID of the user who started the analyzed process. fsgid=1000 The fsgid field records the file system group ID of the user who started the analyzed process. tty=pts0 The tty field records the terminal from which the analyzed process was invoked. ses=1 The ses field records the session ID of the session from which the analyzed process was invoked. comm="cat" The comm field records the command-line name of the command that was used to invoke the analyzed process. In this case, the cat command was used to trigger this Audit event. exe="/bin/cat" The exe field records the path to the executable that was used to invoke the analyzed process. subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 The subj field records the SELinux context with which the analyzed process was labeled at the time of execution. key="sshd_config" The key field records the administrator-defined string associated with the rule that generated this event in the Audit log. Second Record type=CWD In the second record, the type field value is CWD - current working directory. This type is used to record the working directory from which the process that invoked the system call specified in the first record was executed. The purpose of this record is to record the current process's location in case a relative path winds up being captured in the associated PATH record. This way the absolute path can be reconstructed. msg=audit(1364481363.243:24287) The msg field holds the same time stamp and ID value as the value in the first record. The time stamp is using the Unix time format - seconds since 00:00:00 UTC on 1 January 1970. cwd="/home/ user_name " The cwd field contains the path to the directory in which the system call was invoked. Third Record type=PATH In the third record, the type field value is PATH . An Audit event contains a PATH -type record for every path that is passed to the system call as an argument. In this Audit event, only one path ( /etc/ssh/sshd_config ) was used as an argument. msg=audit(1364481363.243:24287): The msg field holds the same time stamp and ID value as the value in the first and second record. item=0 The item field indicates which item, of the total number of items referenced in the SYSCALL type record, the current record is. This number is zero-based; a value of 0 means it is the first item. name="/etc/ssh/sshd_config" The name field records the path of the file or directory that was passed to the system call as an argument. In this case, it was the /etc/ssh/sshd_config file. inode=409248 The inode field contains the inode number associated with the file or directory recorded in this event. 
The following command displays the file or directory that is associated with the 409248 inode number: dev=fd:00 The dev field specifies the minor and major ID of the device that contains the file or directory recorded in this event. In this case, the value represents the /dev/fd/0 device. mode=0100600 The mode field records the file or directory permissions, encoded in numerical notation as returned by the stat command in the st_mode field. See the stat(2) man page for more information. In this case, 0100600 can be interpreted as -rw------- , meaning that only the root user has read and write permissions to the /etc/ssh/sshd_config file. ouid=0 The ouid field records the object owner's user ID. ogid=0 The ogid field records the object owner's group ID. rdev=00:00 The rdev field contains a recorded device identifier for special files only. In this case, it is not used as the recorded file is a regular file. obj=system_u:object_r:etc_t:s0 The obj field records the SELinux context with which the recorded file or directory was labeled at the time of execution. objtype=NORMAL The objtype field records the intent of each path record's operation in the context of a given syscall. cap_fp=none The cap_fp field records data related to the setting of a permitted file system-based capability of the file or directory object. cap_fi=none The cap_fi field records data related to the setting of an inherited file system-based capability of the file or directory object. cap_fe=0 The cap_fe field records the setting of the effective bit of the file system-based capability of the file or directory object. cap_fver=0 The cap_fver field records the version of the file system-based capability of the file or directory object. Fourth Record type=PROCTITLE The type field contains the type of the record. In this example, the PROCTITLE value specifies that this record gives the full command-line that triggered this Audit event, triggered by a system call to the kernel. proctitle=636174002F6574632F7373682F737368645F636F6E666967 The proctitle field records the full command-line of the command that was used to invoke the analyzed process. The field is encoded in hexadecimal notation to not allow the user to influence the Audit log parser. The text decodes to the command that triggered this Audit event. When searching Audit records with the ausearch command, use the -i or --interpret option to automatically convert hexadecimal values into their human-readable equivalents. The 636174002F6574632F7373682F737368645F636F6E666967 value is interpreted as cat /etc/ssh/sshd_config . The Audit event analyzed above contains only a subset of all possible fields that an event can contain. For a list of all event fields and their explanation, see Audit Event Fields . For a list of all event types and their explanation, see Audit Record Types . Example 7.6. Additional audit.log Events The following Audit event records a successful start of the auditd daemon. The ver field shows the version of the Audit daemon that was started. The following Audit event records a failed attempt of user with UID of 1000 to log in as the root user.
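When you work with events like these from the command line, the interpretation steps described above can be combined into a short check. The following sketch assumes that the sshd_config watch rule shown at the beginning of this section is loaded and that the xxd utility is installed; the hexadecimal string is the proctitle value from the analyzed event.

# Show all events recorded by the sshd_config rule with fields already interpreted
ausearch -k sshd_config -i

# Decode a raw proctitle value by hand; arguments are separated by NUL bytes
echo 636174002F6574632F7373682F737368645F636F6E666967 | xxd -r -p | tr '\0' ' '; echo

The second command prints cat /etc/ssh/sshd_config , matching the interpretation given above.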
[ "-w /etc/ssh/sshd_config -p warx -k sshd_config", "~]USD cat /etc/ssh/sshd_config", "type=SYSCALL msg=audit(1364481363.243:24287): arch=c000003e syscall=2 success=no exit=-13 a0=7fffd19c5592 a1=0 a2=7fffd19c4b50 a3=a items=1 ppid=2686 pid=3538 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=1 comm=\"cat\" exe=\"/bin/cat\" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=\"sshd_config\" type=CWD msg=audit(1364481363.243:24287): cwd=\"/home/shadowman\" type=PATH msg=audit(1364481363.243:24287): item=0 name=\"/etc/ssh/sshd_config\" inode=409248 dev=fd:00 mode=0100600 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:etc_t:s0 objtype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 type=PROCTITLE msg=audit(1364481363.243:24287) : proctitle=636174002F6574632F7373682F737368645F636F6E666967", "~]# ausearch --interpret --exit -13", "~]# find / -inum 409248 -print /etc/ssh/sshd_config", "type=DAEMON_START msg=audit(1363713609.192:5426): auditd start, ver=2.2 format=raw kernel=2.6.32-358.2.1.el6.x86_64 auid=1000 pid=4979 subj=unconfined_u:system_r:auditd_t:s0 res=success", "type=USER_AUTH msg=audit(1364475353.159:24270): user pid=3280 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:authentication acct=\"root\" exe=\"/bin/su\" hostname=? addr=? terminal=pts/0 res=failed'" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-understanding_audit_log_files
Chapter 6. Desktop
Chapter 6. Desktop GNOME rebase to version 3.14 The GNOME Desktop has been upgraded to upstream version 3.14 (with some minor additions from 3.16), which includes new features and a number of enhancements. Namely: Red Hat Enterprise Linux 7.2 adds GNOME Software , a new way to install and manage software on the user's system based on a yum backend. GNOME PackageKit remains to be the default updater for GNOME (also installed by default). With GNOME Software , the user manages an integrated place for software related tasks, such as browsing, installing and removing applications, and viewing and installing software updates. On the Top Bar, the newly-named System Status Menu groups together all of the indicators and applets otherwise accessed individually - brightness slider, improved airplane mode, connecting to Wi-Fi networks, Bluetooth, Volume, and so on - into one coherent and compact menu. Regarding Wi-Fi, GNOME 3.14 provides improved support for Wi-Fi hotspots. When connecting to a Wi-Fi portal that requires authentication, GNOME now automatically shows the login page as a part of the connection process. The default key combination for locking the screen has been changed. The default shortcut Ctrl+Alt+L has been replaced by the Super key+L key combination. The new design of the gedit text editor incorporates all of features into a more compact interface, which gives more space for work. Use of popovers for selecting the document format and tab width is more efficient compared to the use of dialogs and menus. Consolidated sidebar controls also give more space for content while retaining the original functionality. Other notable improvements include new shortcuts for opening the last closed tab with Ctrl+Shift+T and for changing case. Nautilus , the GNOME file manager, now uses the Shift+Ctrl+Z key combination, not Ctrl+Y , for the redo operation. Also, a header bar, instead of a toolbar, is now used. GNOME 3.14 includes a reimagined Videos application. Modern in style, the new version allows the user to browse videos on the computer as well as online video channels. Videos also includes a redesigned playback view. This provides a more streamlined experience than the earlier version: floating playback controls hide when the user does not need them, and the fullscreen playback view also has a new more refined look. Evince features improved accessibility for reading PDF files. The new version of the document viewer uses a header bar to give more space to your documents. When it is launched without a document being specified, Evince also shows a useful overview of your recent documents. The latest Evince version also includes High Resolution Display Support and enhanced accessibility, with links, images and form fields all being available from assistive technologies. The new version of GNOME Weather application makes use of GNOME's new geolocation framework to automatically show the weather for your current location, and a new layout provides an effective way to read weather forecasts. This release also brings improved support for comments in LibreOffice - import and export of nested comments in the ODF, DOC, DOCX and RTF filters, printing comments in margins, and formatting all comments. The GNOME application for virtual and remote machines, Boxes , introduces snapshots. Boxes now provide automatic downloading, running multiple boxes in separate windows, and user interface improvements, including improved fullscreen behavior and thumbnails. 
The GNOME Help documentation viewer has been redesigned to be consistent with other GNOME 3 applications. Help now uses a header bar, has an integrated search function, and bookmarking interface. GTK+ 3.14 includes a number of bug fixes and enhancements, such as automatic loading of menus from resources, multi-selection support in GtkListBox , property bindings in GtkBuilder files, support for drawing outside a widget's allocation (gtk_widget_set_clip()), new transition types in GtkStack, and file loading and saving with GtkSourceView . In addition, GTK+ now provides support for gesture interaction. With 3.14, the majority of common multitouch gestures are available for use in GTK+ applications, such as tap, drag, swipe, pinch, and rotate. Gestures can be added to existing GTK+ applications using GtkGesture . A GNOME Shell Extension, Looking Glass Inspector , has obtained a number of features for developers: showing all methods, classes, and so on in a namespace upon inspection, object inspector history expansion, or copying Looking Glass results as strings, and passing through events to gnome-shell. The High Resolution Display Support feature has been extended to include all the key aspects of the core GNOME 3 experience, including the Activities Overview, animations in the Activities Overview along with new window animations, Top Bar, lock screen and system dialogs. As far as GNOME Extensions are concerned, this release introduces support for alternative dock positioning, including the bottom side of the screen, in Simple Dock , a dock for the Gnome Shell. The ibus-gtk2 package now updates the immodules.cache file Previously, the update-gtk-immodules script searched for a no longer existing /etc/gtk-2.0/USDhost directory. Consequently, the post-installation script of the ibus-gtk2 package failed and exited without creating or updating the cache. The post-installation script has been changed to replace update-gtk-immodules with gtk-query-immodules-2.0-BITS , and the problem no longer occurs.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/desktop
Chapter 1. {sandboxed-containers-first} documentation has moved
Chapter 1. {sandboxed-containers-first} documentation has moved OpenShift sandboxed containers documentation has moved to a new location .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/sandboxed_containers_support_for_openshift/sandboxed-containers-moved
Chapter 6. Managing alerts
Chapter 6. Managing alerts 6.1. Managing alerts as an Administrator In OpenShift Container Platform, the Alerting UI enables you to manage alerts, silences, and alerting rules. Note The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in as a user with the cluster-admin role, you can access all alerts, silences, and alerting rules. 6.1.1. Accessing the Alerting UI from the Administrator perspective The Alerting UI is accessible through the Administrator perspective of the OpenShift Container Platform web console. From the Administrator perspective, go to Observe Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting rules pages. Additional resources Searching and filtering alerts, silences, and alerting rules 6.1.2. Getting information about alerts, silences, and alerting rules from the Administrator perspective The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a user with view permissions for the project that you are viewing alerts for. Procedure To obtain information about alerts: From the Administrator perspective of the OpenShift Container Platform web console, go to the Observe Alerting Alerts page. Optional: Search for alerts by name by using the Name field in the search list. Optional: Filter alerts by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerts by clicking one or more of the Name , Severity , State , and Source column headers. Click the name of an alert to view its Alert details page. The page includes a graph that illustrates alert time series data. It also provides the following information about the alert: A description of the alert Messages associated with the alert Labels attached to the alert A link to its governing alerting rule Silences for the alert, if any exist To obtain information about silences: From the Administrator perspective of the OpenShift Container Platform web console, go to the Observe Alerting Silences page. Optional: Filter the silences by name using the Search by name field. Optional: Filter silences by state by selecting filters in the Filter list. By default, Active and Pending filters are applied. Optional: Sort the silences by clicking one or more of the Name , Firing alerts , State , and Creator column headers. Select the name of a silence to view its Silence details page. The page includes the following details: Alert specification Start time End time Silence state Number and list of firing alerts To obtain information about alerting rules: From the Administrator perspective of the OpenShift Container Platform web console, go to the Observe Alerting Alerting rules page. Optional: Filter alerting rules by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerting rules by clicking one or more of the Name , Severity , Alert state , and Source column headers. Select the name of an alerting rule to view its Alerting rule details page. The page provides the following details about the alerting rule: Alerting rule name, severity, and description. The expression that defines the condition for firing the alert. The time for which the condition should be true for an alert to fire. A graph for each alert governed by the alerting rule, showing the value with which the alert is firing. 
A table of all alerts governed by the alerting rule. Additional resources Cluster Monitoring Operator runbooks (Cluster Monitoring Operator GitHub repository) 6.1.3. Managing silences You can create a silence for an alert in the OpenShift Container Platform web console in the Administrator perspective. After you create silences, you can view, edit, and expire them. You also do not receive notifications about a silenced alert when the alert fires. Note When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time. Additional resources Managing silences Configuring persistent storage 6.1.3.1. Silencing alerts from the Administrator perspective You can silence a specific alert or silence alerts that match a specification that you define. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure To silence a specific alert: From the Administrator perspective of the OpenShift Container Platform web console, go to Observe Alerting Alerts . For the alert that you want to silence, click and select Silence alert to open the Silence alert page with a default configuration for the chosen alert. Optional: Change the default configuration details for the silence. Note You must add a comment before saving a silence. To save the silence, click Silence . To silence a set of alerts: From the Administrator perspective of the OpenShift Container Platform web console, go to Observe Alerting Silences . Click Create silence . On the Create silence page, set the schedule, duration, and label details for an alert. Note You must add a comment before saving a silence. To create silences for alerts that match the labels that you entered, click Silence . 6.1.3.2. Editing silences from the Administrator perspective You can edit a silence, which expires the existing silence and creates a new one with the changed configuration. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. Procedure From the Administrator perspective of the OpenShift Container Platform web console, go to Observe Alerting Silences . For the silence you want to modify, click and select Edit silence . Alternatively, you can click Actions and select Edit silence on the Silence details page for a silence. On the Edit silence page, make changes and click Silence . Doing so expires the existing silence and creates one with the updated configuration. 6.1.3.3. Expiring silences from the Administrator perspective You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently. Note You cannot delete expired, silenced alerts. Expired silences older than 120 hours are garbage collected. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. 
The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. Procedure Go to Observe Alerting Silences . For the silence or silences you want to expire, select the checkbox in the corresponding row. Click Expire 1 silence to expire a single selected silence or Expire <n> silences to expire multiple selected silences, where <n> is the number of silences you selected. Alternatively, to expire a single silence you can click Actions and select Expire silence on the Silence details page for a silence. 6.1.4. Managing alerting rules for core platform monitoring The OpenShift Container Platform monitoring includes a large set of default alerting rules for platform metrics. As a cluster administrator, you can customize this set of rules in two ways: Modify the settings for existing platform alerting rules by adjusting thresholds or by adding and modifying labels. For example, you can change the severity label for an alert from warning to critical to help you route and triage issues flagged by an alert. Define and add new custom alerting rules by constructing a query expression based on core platform metrics in the openshift-monitoring project. Additional resources Managing alerting rules for core platform monitoring Tips for optimizing alerting rules for core platform monitoring 6.1.4.1. Creating new alerting rules As a cluster administrator, you can create new alerting rules based on platform metrics. These alerting rules trigger alerts based on the values of chosen metrics. Note If you create a customized AlertingRule resource based on an existing platform alerting rule, silence the original alert to avoid receiving conflicting alerts. To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value. Prerequisites You have access to the cluster as a user that has the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Create a new YAML configuration file named example-alerting-rule.yaml . Add an AlertingRule resource to the YAML file. The following example creates a new alerting rule named example , similar to the default Watchdog alert: apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: example namespace: openshift-monitoring 1 spec: groups: - name: example-rules rules: - alert: ExampleAlert 2 for: 1m 3 expr: vector(1) 4 labels: severity: warning 5 annotations: message: This is an example alert. 6 1 Ensure that the namespace is openshift-monitoring . 2 The name of the alerting rule you want to create. 3 The duration for which the condition should be true before an alert is fired. 4 The PromQL query expression that defines the new rule. 5 The severity that alerting rule assigns to the alert. 6 The message associated with the alert. Important You must create the AlertingRule object in the openshift-monitoring namespace. Otherwise, the alerting rule is not accepted. Apply the configuration file to the cluster: USD oc apply -f example-alerting-rule.yaml 6.1.4.2. Modifying core platform alerting rules As a cluster administrator, you can modify core platform alerts before Alertmanager routes them to a receiver. For example, you can change the severity label of an alert, add a custom label, or exclude an alert from being sent to Alertmanager. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). 
Procedure Create a new YAML configuration file named example-modified-alerting-rule.yaml . Add an AlertRelabelConfig resource to the YAML file. The following example modifies the severity setting to critical for the default platform watchdog alerting rule: apiVersion: monitoring.openshift.io/v1 kind: AlertRelabelConfig metadata: name: watchdog namespace: openshift-monitoring 1 spec: configs: - sourceLabels: [alertname,severity] 2 regex: "Watchdog;none" 3 targetLabel: severity 4 replacement: critical 5 action: Replace 6 1 Ensure that the namespace is openshift-monitoring . 2 The source labels for the values you want to modify. 3 The regular expression against which the value of sourceLabels is matched. 4 The target label of the value you want to modify. 5 The new value to replace the target label. 6 The relabel action that replaces the old value based on regex matching. The default action is Replace . Other possible values are Keep , Drop , HashMod , LabelMap , LabelDrop , and LabelKeep . Important You must create the AlertRelabelConfig object in the openshift-monitoring namespace. Otherwise, the alert label will not change. Apply the configuration file to the cluster: USD oc apply -f example-modified-alerting-rule.yaml Additional resources Monitoring stack architecture Alertmanager (Prometheus documentation) relabel_config configuration (Prometheus documentation) Alerting (Prometheus documentation) 6.1.5. Managing alerting rules for user-defined projects In OpenShift Container Platform, you can create, view, edit, and remove alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. Additional resources Creating alerting rules for user-defined projects Managing alerting rules for user-defined projects Optimizing alerting for user-defined projects 6.1.5.1. Creating alerting rules for user-defined projects You can create alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. Note When you create an alerting rule, a project label is enforced on it even if a rule with the same name exists in another project. To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. The following example creates a new alerting rule named example-alert . The alerting rule fires an alert when the version metric exposed by the sample service becomes 0 : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job="prometheus-example-app"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5 1 The name of the alerting rule you want to create. 2 The duration for which the condition should be true before an alert is fired. 3 The PromQL query expression that defines the new rule. 4 The severity that alerting rule assigns to the alert. 5 The message associated with the alert. 
Apply the configuration file to the cluster: USD oc apply -f example-app-alerting-rule.yaml Additional resources Monitoring stack architecture Alerting (Prometheus documentation) 6.1.5.2. Listing alerting rules for all projects in a single view As a cluster administrator, you can list alerting rules for core OpenShift Container Platform and user-defined projects together in a single view. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure From the Administrator perspective of the OpenShift Container Platform web console, go to Observe Alerting Alerting rules . Select the Platform and User sources in the Filter drop-down menu. Note The Platform source is selected by default. 6.1.5.3. Removing alerting rules for user-defined projects You can remove alerting rules for user-defined projects. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure To remove rule <foo> in <namespace> , run the following: USD oc -n <namespace> delete prometheusrule <foo> Additional resources Alertmanager (Prometheus documentation) 6.2. Managing alerts as a Developer In OpenShift Container Platform, the Alerting UI enables you to manage alerts, silences, and alerting rules. Note The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. 6.2.1. Accessing the Alerting UI from the Developer perspective The Alerting UI is accessible through the Developer perspective of the OpenShift Container Platform web console. From the Developer perspective, go to Observe and go to the Alerts tab. Select the project that you want to manage alerts for from the Project: list. In this perspective, alerts, silences, and alerting rules are all managed from the Alerts tab. The results shown in the Alerts tab are specific to the selected project. Note In the Developer perspective, you can select from core OpenShift Container Platform and user-defined projects that you have access to in the Project: <project_name> list. However, alerts, silences, and alerting rules relating to core OpenShift Container Platform projects are not displayed if you are not logged in as a cluster administrator. Additional resources Searching and filtering alerts, silences, and alerting rules 6.2.2. Getting information about alerts, silences, and alerting rules from the Developer perspective The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a user with view permissions for the project that you are viewing alerts for. Procedure To obtain information about alerts, silences, and alerting rules: From the Developer perspective of the OpenShift Container Platform web console, go to the Observe <project_name> Alerts page. View details for an alert, silence, or an alerting rule: Alert details can be viewed by clicking a greater than symbol ( > ) to an alert name and then selecting the alert from the list. Silence details can be viewed by clicking a silence in the Silenced by section of the Alert details page. 
The Silence details page includes the following information: Alert specification Start time End time Silence state Number and list of firing alerts Alerting rule details can be viewed by clicking the menu to an alert in the Alerts page and then clicking View Alerting Rule . Note Only alerts, silences, and alerting rules relating to the selected project are displayed in the Developer perspective. Additional resources Cluster Monitoring Operator runbooks (Cluster Monitoring Operator GitHub repository) 6.2.3. Managing silences You can create a silence for an alert in the OpenShift Container Platform web console in the Developer perspective. After you create silences, you can view, edit, and expire them. You also do not receive notifications about a silenced alert when the alert fires. Note When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time. Additional resources Managing silences Configuring persistent storage 6.2.3.1. Silencing alerts from the Developer perspective You can silence a specific alert or silence alerts that match a specification that you define. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure To silence a specific alert: From the Developer perspective of the OpenShift Container Platform web console, go to Observe and go to the Alerts tab. Select the project that you want to silence an alert for from the Project: list. If necessary, expand the details for the alert by clicking a greater than symbol ( > ) to the alert name. Click the alert message in the expanded view to open the Alert details page for the alert. Click Silence alert to open the Silence alert page with a default configuration for the alert. Optional: Change the default configuration details for the silence. Note You must add a comment before saving a silence. To save the silence, click Silence . To silence a set of alerts: From the Developer perspective of the OpenShift Container Platform web console, go to Observe and go to the Silences tab. Select the project that you want to silence alerts for from the Project: list. Click Create silence . On the Create silence page, set the duration and label details for an alert. Note You must add a comment before saving a silence. To create silences for alerts that match the labels that you entered, click Silence . 6.2.3.2. Editing silences from the Developer perspective You can edit a silence, which expires the existing silence and creates a new one with the changed configuration. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. 
The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure From the Developer perspective of the OpenShift Container Platform web console, go to Observe and go to the Silences tab. Select the project that you want to edit silences for from the Project: list. For the silence you want to modify, click and select Edit silence . Alternatively, you can click Actions and select Edit silence on the Silence details page for a silence. On the Edit silence page, make changes and click Silence . Doing so expires the existing silence and creates one with the updated configuration. 6.2.3.3. Expiring silences from the Developer perspective You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently. Note You cannot delete expired, silenced alerts. Expired silences older than 120 hours are garbage collected. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure From the Developer perspective of the OpenShift Container Platform web console, go to Observe and go to the Silences tab. Select the project that you want to expire a silence for from the Project: list. For the silence or silences you want to expire, select the checkbox in the corresponding row. Click Expire 1 silence to expire a single selected silence or Expire <n> silences to expire multiple selected silences, where <n> is the number of silences you selected. Alternatively, to expire a single silence you can click Actions and select Expire silence on the Silence details page for a silence. 6.2.4. Managing alerting rules for user-defined projects In OpenShift Container Platform, you can create, view, edit, and remove alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. Additional resources Creating alerting rules for user-defined projects Managing alerting rules for user-defined projects Optimizing alerting for user-defined projects 6.2.4.1. Creating alerting rules for user-defined projects You can create alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. Note When you create an alerting rule, a project label is enforced on it even if a rule with the same name exists in another project. To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. The following example creates a new alerting rule named example-alert . 
The alerting rule fires an alert when the version metric exposed by the sample service becomes 0 : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job="prometheus-example-app"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5 1 The name of the alerting rule you want to create. 2 The duration for which the condition should be true before an alert is fired. 3 The PromQL query expression that defines the new rule. 4 The severity that alerting rule assigns to the alert. 5 The message associated with the alert. Apply the configuration file to the cluster: USD oc apply -f example-app-alerting-rule.yaml Additional resources Monitoring stack architecture Alerting (Prometheus documentation) 6.2.4.2. Accessing alerting rules for user-defined projects To list alerting rules for a user-defined project, you must have been assigned the monitoring-rules-view cluster role for the project. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-view cluster role for your project. You have installed the OpenShift CLI ( oc ). Procedure To list alerting rules in <project> : USD oc -n <project> get prometheusrule To list the configuration of an alerting rule, run the following: USD oc -n <project> get prometheusrule <rule> -o yaml 6.2.4.3. Removing alerting rules for user-defined projects You can remove alerting rules for user-defined projects. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure To remove rule <foo> in <namespace> , run the following: USD oc -n <namespace> delete prometheusrule <foo> Additional resources Alertmanager (Prometheus documentation)
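The version == 0 expression shown earlier in this section is a simple liveness-style check; PromQL also supports rules that aggregate metrics over time, for example with the rate() function. The following sketch is illustrative only: the http_requests_total metric and its code label are assumptions about the example application, and the rule and alert names are hypothetical.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-error-rate        # hypothetical rule name
  namespace: ns1
spec:
  groups:
  - name: example
    rules:
    - alert: HighErrorRate
      # Fire when more than 5% of requests returned a 5xx code over the
      # last 5 minutes, sustained for 10 minutes.
      expr: |
        sum(rate(http_requests_total{job="prometheus-example-app",code=~"5.."}[5m]))
          /
        sum(rate(http_requests_total{job="prometheus-example-app"}[5m])) > 0.05
      for: 10m
      labels:
        severity: warning
      annotations:
        message: More than 5% of requests to prometheus-example-app are failing.
```

You would apply such a file in the same way as the earlier example, for instance with oc apply -f example-error-rate.yaml .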
[ "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: example namespace: openshift-monitoring 1 spec: groups: - name: example-rules rules: - alert: ExampleAlert 2 for: 1m 3 expr: vector(1) 4 labels: severity: warning 5 annotations: message: This is an example alert. 6", "oc apply -f example-alerting-rule.yaml", "apiVersion: monitoring.openshift.io/v1 kind: AlertRelabelConfig metadata: name: watchdog namespace: openshift-monitoring 1 spec: configs: - sourceLabels: [alertname,severity] 2 regex: \"Watchdog;none\" 3 targetLabel: severity 4 replacement: critical 5 action: Replace 6", "oc apply -f example-modified-alerting-rule.yaml", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job=\"prometheus-example-app\"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5", "oc apply -f example-app-alerting-rule.yaml", "oc -n <namespace> delete prometheusrule <foo>", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job=\"prometheus-example-app\"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5", "oc apply -f example-app-alerting-rule.yaml", "oc -n <project> get prometheusrule", "oc -n <project> get prometheusrule <rule> -o yaml", "oc -n <namespace> delete prometheusrule <foo>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/monitoring/managing-alerts
Schedule and quota APIs
Schedule and quota APIs OpenShift Container Platform 4.12 Reference guide for schedule and quota APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/schedule_and_quota_apis/index
Autoscale APIs
Autoscale APIs OpenShift Container Platform 4.17 Reference guide for autoscale APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/autoscale_apis/index
Chapter 27. Creating guided decision tables
Chapter 27. Creating guided decision tables You can use guided decision tables to define rule attributes, metadata, conditions, and actions in a tabular format that can be added to your business rules project. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Decision Table . Enter an informative Guided Decision Table name and select the appropriate Package . The package that you specify must be the same package where the required data objects have been assigned or will be assigned. Select Use Wizard to finish setting up the table in the wizard, or leave this option unselected to finish creating the table and specify remaining configurations in the guided decision tables designer. Select the hit policy that you want your rows of rules in the table to conform to. For details, see Chapter 28, Hit policies for guided decision tables . Specify whether you want the Extended entry or Limited entry table. For details, see Section 28.1.1, "Types of guided decision tables" . Click Ok to complete the setup. If you have selected Use Wizard , the Guided Decision Table wizard is displayed. If you did not select the Use Wizard option, this prompt does not appear and you are taken directly to the table designer. For example, the following wizard setup is for a guided decision table in a loan application decision service: Figure 27.1. Create guided decision table If you are using the wizard, add any available imports, fact patterns, constraints, and actions, and select whether table columns should expand. Click Finish to close the wizard and view the table designer. Figure 27.2. Guided Decision Table wizard In the guided decision tables designer, you can add or edit columns and rows, and make other final adjustments.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/guided-decision-tables-create-proc
Chapter 1. Deploying and configuring OpenStack Key Manager (barbican)
Chapter 1. Deploying and configuring OpenStack Key Manager (barbican) OpenStack Key Manager (barbican) is the secrets manager for Red Hat OpenStack Platform. You can use the barbican API and command line to centrally manage the certificates, keys, and passwords used by OpenStack services. Barbican is not enabled by default in Red Hat OpenStack Platform. You can deploy barbican in an existing OpenStack deployment. Barbican currently supports the following use cases described in this guide: Symmetric encryption keys - used for Block Storage (cinder) volume encryption, ephemeral disk encryption, and Object Storage (swift) encryption, among others. Asymmetric keys and certificates - used for glance image signing and verification, among others. OpenStack Key Manager integrates with the Block Storage (cinder), Networking (neutron), and Compute (nova) components. 1.1. OpenStack Key Manager workflow The following diagram shows the workflow that OpenStack Key Manager uses to manage secrets for your environment. 1.2. OpenStack Key Manager encryption types Secrets such as certificates, API keys, and passwords, can be stored in an encrypted blob in the barbican database or directly in a secure storage system. You can use a simple crypto plugin or PKCS#11 crypto plugin to encrypt secrets. To store the secrets as an encrypted blob in the barbican database, the following options are available: Simple crypto plugin - The simple crypto plugin is enabled by default and uses a single symmetric key to encrypt all secret payloads. This key is stored in plain text in the barbican.conf file, so it is important to prevent unauthorized access to this file. PKCS#11 crypto plugin - The PKCS#11 crypto plugin encrypts secrets with project-specific key encryption keys (pKEK), which are stored in the barbican database. These project-specific pKEKs are encrypted by a main key-encryption-key (MKEK), which is stored in a hardware security module (HSM). All encryption and decryption operations take place in the HSM, rather than in-process memory. The PKCS#11 plugin communicates with the HSM through the PKCS#11 API. Because the encryption is done in secure hardware, and a different pKEK is used per project, this option is more secure than the simple crypto plugin. Red Hat supports the PKCS#11 back end with any of the following HSMs. Device Supported in release High Availability (HA) support ATOS Trustway Proteccio NetHSM 16.0+ 16.1+ Entrust nShield Connect HSM 16.0+ Not supported Thales Luna Network HSM 16.1+ (Technology Preview) 16.1+ (Technology Preview) Note Regarding high availability (HA) options: The barbican service runs within Apache and is configured by director to use HAProxy for high availability. HA options for the back end layer will depend on the back end being used. For example, for simple crypto, all the barbican instances have the same encryption key in the config file, resulting in a simple HA configuration. 1.2.1. Configuring multiple encryption mechanisms You can configure a single instance of Barbican to use more than one back end. When this is done, you must specify a back end as the global default back end. You can also specify a default back end per project. If no mapping exists for a project, the secrets for that project are stored using the global default back end. For example, you can configure Barbican to use both the Simple crypto and PKCS#11 plugins. If you set Simple crypto as the global default, then all projects use that back end. 
You can then specify which projects use the PKCS#11 back end by setting PKCS#11 as the preferred back end for that project. If you decide to migrate to a new back end, you can keep the original available while enabling the new back end as the global default or as a project-specific back end. As a result, the old secrets remain available through the old back end, and new secrets are stored in the new global default back end. 1.3. Deploying Key Manager To deploy OpenStack Key Manager, first create an environment file for the barbican service and redeploy the overcloud with additional environment files. You then add users to the creator role to create and edit barbican secrets or to create encrypted volumes that store their secret in barbican. Note This procedure configures barbican to use the simple_crypto back end. Additional back ends are available, such as PKCS#11 which requires a different configuration, and different heat template files depending on which HSM is used. Other back ends such as KMIP, Hashicorp Vault and DogTag are not supported. Prerequisites Overcloud is deployed and running Procedure On the undercloud node, create an environment file for barbican. The BarbicanSimpleCryptoGlobalDefault sets this plugin as the global default plugin. You can also add the following options to the environment file: BarbicanPassword - Sets a password for the barbican service account. BarbicanWorkers - Sets the number of workers for barbican::wsgi::apache . Uses '%{::processorcount}' by default. BarbicanDebug - Enables debugging. BarbicanPolicies - Defines policies to configure for barbican. Uses a hash value, for example: { barbican-context_is_admin: { key: context_is_admin, value: 'role:admin' } } . This entry is then added to /etc/barbican/policy.json . Policies are described in detail in a later section. BarbicanSimpleCryptoKek - The Key Encryption Key (KEK) is generated by director, if none is specified. Add the following files to the openstack overcloud deploy command, without removing previously added role, template or environment files from the script: /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-simple-crypto.yaml /home/stack/templates/configure-barbican.yaml Re-run the deployment script to apply changes to your deployment: Retrieve the id of the creator role: Note You will not see the creator role unless OpenStack Key Manager (barbican) is installed. Assign a user to the creator role and specify the relevant project. In this example, a user named user1 in the project_a project is added to the creator role: Verification Create a test secret. For example: Retrieve the payload for the secret you just created: 1.4. Viewing Key Manager policies Barbican uses policies to determine which users are allowed to perform actions against the secrets, such as adding or deleting keys. To implement these controls, keystone project roles such as creator you created earlier, are mapped to barbican internal permissions. As a result, users assigned to those project roles receive the corresponding barbican permissions. The default policy is defined in code and typically does not require any amendments. If policy changes have not been made, you can view the default policy using the existing container in your environment. If changes have been made to the default policy, and you would like to see the defaults, use a separate system to pull the openstack-barbican-api container first. 
Prerequisites OpenStack Key Manager is deployed and running Procedure Use your Red Hat credentials to log in to podman: Pull the openstack-barbican-api container: Generate the policy file in the current working directory: Verification Review the barbican-policy.yaml file to check the policies used by barbican. The policy is implemented by four different roles that define how a user interacts with secrets and secret metadata. A user receives these permissions by being assigned to a particular role: admin The admin role can read, create, edit and delete secrets across all projects. creator The creator role can read, create, edit, and delete secrets that are in the project for which the creator is scoped. observer The observer role can only read secrets. audit The audit role can only read metadata. The audit role can not read secrets. For example, the following entries list the admin , observer , and creator keystone roles for each project. On the right, notice that they are assigned the role:admin , role:observer , and role:creator permissions: These roles can also be grouped together by barbican. For example, rules that specify admin_or_creator can apply to members of either rule:admin or rule:creator . Further down in the file, there are secret:put and secret:delete actions. To their right, notice which roles have permissions to execute these actions. In the following example, secret:delete means that only admin and creator role members can delete secret entries. In addition, the rule states that users in the admin or creator role for that project can delete a secret in that project. The project match is defined by the secret_project_match rule, which is also defined in the policy.
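As a hedged illustration of how these roles behave in practice, the following commands assign the read-only observer role to a hypothetical user named auditor1 in the project_a project, mirroring the creator example from the deployment section. The user name is an assumption, the secret href is the example value used earlier in this guide, and the exact outcome depends on the policy in effect in your deployment.

```bash
# Assign the read-only observer role to a hypothetical user
openstack role add --user auditor1 --project project_a observer

# As auditor1, reading an existing secret is permitted by the observer rules ...
openstack secret get https://192.168.123.163/key-manager/v1/secrets/4cc5ffe0-eea2-449d-9e64-b664d574be53 --payload

# ... while storing or deleting secrets is rejected by the secret:put and
# secret:delete rules, which require the admin or creator role
openstack secret store --name deniedSecret --payload 'ShouldFail'
```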
[ "cat /home/stack/templates/configure-barbican.yaml parameter_defaults: BarbicanSimpleCryptoGlobalDefault: true", "openstack overcloud deploy --timeout 100 --templates /usr/share/openstack-tripleo-heat-templates --stack overcloud --libvirt-type kvm --ntp-server clock.redhat.com -e /home/stack/containers-prepare-parameter.yaml -e /home/stack/templates/config_lvm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/templates/network/network-environment.yaml -e /home/stack/templates/hostnames.yml -e /home/stack/templates/nodes_data.yaml -e /home/stack/templates/extra_templates.yaml -e /home/stack/container-parameters-with-barbican.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-simple-crypto.yaml -e /home/stack/templates/configure-barbican.yaml --log-file overcloud_deployment_38.log", "openstack role show creator +-----------+----------------------------------+ | Field | Value | +-----------+----------------------------------+ | domain_id | None | | id | 4e9c560c6f104608948450fbf316f9d7 | | name | creator | +-----------+----------------------------------+", "openstack role add --user user1 --project project_a 4e9c560c6f104608948450fbf316f9d7", "openstack secret store --name testSecret --payload 'TestPayload' +---------------+------------------------------------------------------------------------------------+ | Field | Value | +---------------+------------------------------------------------------------------------------------+ | Secret href | https://192.168.123.163/key-manager/v1/secrets/4cc5ffe0-eea2-449d-9e64-b664d574be53 | | Name | testSecret | | Created | None | | Status | None | | Content types | None | | Algorithm | aes | | Bit length | 256 | | Secret type | opaque | | Mode | cbc | | Expiration | None | +---------------+------------------------------------------------------------------------------------+", "openstack secret get https://192.168.123.163/key-manager/v1/secrets/4cc5ffe0-eea2-449d-9e64-b664d574be53 --payload +---------+-------------+ | Field | Value | +---------+-------------+ | Payload | TestPayload | +---------+-------------+", "login username: ******** password: ********", "pull registry.redhat.io/rhosp-rhel8/openstack-barbican-api:17.1", "run -it registry.redhat.io/rhosp-rhel8/openstack-barbican-api:17.1 oslopolicy-policy-generator --namespace barbican > barbican-policy.yaml", "# #\"admin\": \"role:admin\" # #\"observer\": \"role:observer\" # #\"creator\": \"role:creator\"", "secret:delete\": \"rule:admin_or_creator and rule:secret_project_match\"" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_secrets_with_the_key_manager_service/assembly-deploying-configuring-key-manager_rhosp
Chapter 6. Managing the Database Cache Settings
Chapter 6. Managing the Database Cache Settings Directory Server uses the following caches: The Entry cache , which contains individual directory entries. The DN cache , which is used to associate DNs and RDNs with entries. The Database cache , which contains the database index files ( *.db and *.db4 ). For the highest performance improvements, all cache sizes must be able to store all of their records. If you do not use the recommended auto-sizing feature and do not have enough RAM available, assign free memory to the caches in the order shown above. 6.1. The Database and Entry Cache Auto-Sizing Feature By default, Directory Server automatically determines the optimized size for the database and entry cache. Auto-sizing optimizes the size of both caches based on the hardware resources of the server when the instance starts. Important Red Hat recommends using the auto-tuning settings. Do not set the entry cache size manually. 6.1.1. Manually Re-enabling the Database and Entry Cache Auto-sizing If you upgraded the instance from a version prior to 10.1.1, or previously manually set an entry cache size, you can enable the auto-tuning for the entry cache. The following parameters in the cn=config,cn=ldbm database,cn=plugins,cn=config entry control the auto-sizing: nsslapd-cache-autosize This setting controls whether auto-sizing is enabled for the database and entry cache. Auto-sizing is enabled: For both the database and entry cache, if the nsslapd-cache-autosize parameter is set to a value greater than 0 . For the database cache, if the nsslapd-cache-autosize and nsslapd-dbcachesize parameters are set to 0 . For the entry cache, if the nsslapd-cache-autosize and nsslapd-cachememsize parameters are set to 0 . nsslapd-cache-autosize-split The value sets the percentage of RAM that is used for the database cache. The remaining percentage is used for the entry cache. Using more than 1.5 GB of RAM for the database cache does not improve the performance. Therefore, Directory Server limits the database cache to 1.5 GB. To enable the database and entry cache auto-sizing: Stop the Directory Server instance: Back up the /etc/dirsrv/slapd- instance_name /dse.ldif file: Edit the /etc/dirsrv/slapd- instance_name /dse.ldif file: Set the percentage of free system RAM to use for the database and entry cache. For example, to set 10%: Note If you set the nsslapd-cache-autosize parameter to 0 , you must additionally set: the nsslapd-dbcachesize parameter in the cn=config,cn=ldbm database,cn=plugins,cn=config entry to 0 to enable the auto-sized database cache. the nsslapd-cachememsize parameter in the cn= database_name ,cn=ldbm database,cn=plugins,cn=config entry to 0 to enable the auto-sized entry cache for a database. Optionally, set the percentage of the free system RAM that is used for the database cache. For example, to set 40%: Directory Server uses the remaining 60% of free memory for the entry cache. Save the changes. Start the Directory Server instance: Example 6.1. The nsslapd-cache-autosize and nsslapd-cache-autosize-split Parameters The following settings are the default values for the parameters: Using these settings, 25% of the system's free RAM is used ( nsslapd-cache-autosize ). From this memory, 25% is used for the database cache ( nsslapd-cache-autosize-split ) and the remaining 75% for the entry cache.
Depending on the free RAM, this results in the following cache sizes:

Free RAM | Database Cache Size | Entry Cache Size
1 GB     | 64 MB               | 192 MB
2 GB     | 128 MB              | 384 MB
4 GB     | 256 MB              | 768 MB
8 GB     | 512 MB              | 1,536 MB
16 GB    | 1,024 MB            | 3,072 MB
32 GB    | 1,536 MB            | 6,656 MB
64 GB    | 1,536 MB            | 14,848 MB
128 GB   | 1,536 MB            | 31,232 MB
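The following excerpt is a minimal sketch of how the attributes described in this procedure might look in the dse.ldif file after applying the 10% / 40% example above. The entry DNs and attribute names come from this chapter; userRoot is only a placeholder for your database name.

```
# /etc/dirsrv/slapd-instance_name/dse.ldif (excerpt, illustrative only)
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
...
nsslapd-cache-autosize: 10
nsslapd-cache-autosize-split: 40

# Only needed when nsslapd-cache-autosize is set to 0:
# nsslapd-dbcachesize: 0     (in the entry above)
# nsslapd-cachememsize: 0    (in cn=userRoot,cn=ldbm database,cn=plugins,cn=config)
```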
[ "systemctl stop dirsrv@ instance_name", "cp /etc/dirsrv/slapd- instance_name /dse.ldif /etc/dirsrv/slapd- instance_name /dse.ldif.bak.USD(date \"+%F_%H-%M-%S\")", "nsslapd-cache-autosize: 10", "nsslapd-cache-autosize-split: 40", "systemctl start dirsrv@ instance_name", "nsslapd-cache-autosize: 25 nsslapd-cache-autosize-split: 25 nsslapd-dbcachesize: 1536MB" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/memoryusage
Chapter 1. Overview
Chapter 1. Overview Red Hat Gluster Storage Web Administration provides visual monitoring and metrics infrastructure for Red Hat Gluster Storage 3.5 and is the primary method to monitor your Red Hat Gluster Storage environment. Red Hat Gluster Storage Web Administration is based on the Tendrl upstream project and utilizes Ansible automation for installation. The key goal of Red Hat Gluster Storage Web Administration is to provide deep metrics and visualization of Red Hat Gluster Storage clusters and the associated physical storage elements, such as storage nodes, volumes, and bricks. Key Features Monitoring dashboards for Clusters, Hosts, Volumes, and Bricks Top-level list views of Clusters, Hosts, and Volumes SNMPv3 Configuration and alerting User Management Importing Gluster clusters
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/monitoring_guide/overview
Chapter 3. Customizing TuneD profiles
Chapter 3. Customizing TuneD profiles You can create or modify TuneD profiles to optimize system performance for your intended use case. Prerequisites Install and enable TuneD as described in Installing and Enabling TuneD for details. 3.1. TuneD profiles A detailed analysis of a system can be very time-consuming. TuneD provides a number of predefined profiles for typical use cases. You can also create, modify, and delete profiles. The profiles provided with TuneD are divided into the following categories: Power-saving profiles Performance-boosting profiles The performance-boosting profiles include profiles that focus on the following aspects: Low latency for storage and network High throughput for storage and network Virtual machine performance Virtualization host performance Syntax of profile configuration The tuned.conf file can contain one [main] section and other sections for configuring plug-in instances. However, all sections are optional. Lines starting with the hash sign ( # ) are comments. Additional resources tuned.conf(5) man page on your system 3.2. The default TuneD profile During the installation, the best profile for your system is selected automatically. Currently, the default profile is selected according to the following customizable rules: Environment Default profile Goal Compute nodes throughput-performance The best throughput performance Virtual machines virtual-guest The best performance. If you are not interested in the best performance, you can change it to the balanced or powersave profile. Other cases balanced Balanced performance and power consumption Additional resources tuned.conf(5) man page on your system 3.3. Merged TuneD profiles As an experimental feature, it is possible to select more profiles at once. TuneD will try to merge them during the load. If there are conflicts, the settings from the last specified profile takes precedence. Example 3.1. Low power consumption in a virtual guest The following example optimizes the system to run in a virtual machine for the best performance and concurrently tunes it for low power consumption, while the low power consumption is the priority: Warning Merging is done automatically without checking whether the resulting combination of parameters makes sense. Consequently, the feature might tune some parameters the opposite way, which might be counterproductive: for example, setting the disk for high throughput by using the throughput-performance profile and concurrently setting the disk spindown to the low value by the spindown-disk profile. Additional resources tuned-adm and tuned.conf(5) man pages on your system 3.4. The location of TuneD profiles TuneD stores profiles in the following directories: /usr/lib/tuned/ Distribution-specific profiles are stored in the directory. Each profile has its own directory. The profile consists of the main configuration file called tuned.conf , and optionally other files, for example helper scripts. /etc/tuned/ If you need to customize a profile, copy the profile directory into the directory, which is used for custom profiles. If there are two profiles of the same name, the custom profile located in /etc/tuned/ is used. Additional resources tuned.conf(5) man page on your system 3.5. Inheritance between TuneD profiles TuneD profiles can be based on other profiles and modify only certain aspects of their parent profile. The [main] section of TuneD profiles recognizes the include option: All settings from the parent profile are loaded in this child profile. 
In the following sections, the child profile can override certain settings inherited from the parent profile or add new settings not present in the parent profile. You can create your own child profile in the /etc/tuned/ directory based on a pre-installed profile in /usr/lib/tuned/ with only some parameters adjusted. If the parent profile is updated, such as after a TuneD upgrade, the changes are reflected in the child profile. Example 3.2. A power-saving profile based on balanced The following is an example of a custom profile that extends the balanced profile and sets Aggressive Link Power Management (ALPM) for all devices to the maximum powersaving. Additional resources tuned.conf(5) man page on your system 3.6. Static and dynamic tuning in TuneD Understanding the difference between the two categories of system tuning that TuneD applies, static and dynamic , is important when determining which one to use for a given situation or purpose. Static tuning Mainly consists of the application of predefined sysctl and sysfs settings and one-shot activation of several configuration tools such as ethtool . Dynamic tuning Watches how various system components are used throughout the uptime of your system. TuneD adjusts system settings dynamically based on that monitoring information. For example, the hard drive is used heavily during startup and login, but is barely used later when the user might mainly work with applications such as web browsers or email clients. Similarly, the CPU and network devices are used differently at different times. TuneD monitors the activity of these components and reacts to the changes in their use. By default, dynamic tuning is disabled. To enable it, edit the /etc/tuned/tuned-main.conf file and change the dynamic_tuning option to 1 . TuneD then periodically analyzes system statistics and uses them to update your system tuning settings. To configure the time interval in seconds between these updates, use the update_interval option. Currently implemented dynamic tuning algorithms try to balance the performance and powersave, and are therefore disabled in the performance profiles. Dynamic tuning for individual plug-ins can be enabled or disabled in the TuneD profiles. Example 3.3. Static and dynamic tuning on a workstation On a typical office workstation, the Ethernet network interface is inactive most of the time. Only a few emails go in and out or some web pages might be loaded. For those kinds of loads, the network interface does not have to run at full speed all the time, as it does by default. TuneD has a monitoring and tuning plug-in for network devices that can detect this low activity and then automatically lower the speed of that interface, typically resulting in a lower power usage. If the activity on the interface increases for a longer period of time, for example because a DVD image is being downloaded or an email with a large attachment is opened, TuneD detects this and sets the interface speed to maximum to offer the best performance while the activity level is high. This principle is used for other plug-ins for CPU and disks as well. 3.7. TuneD plug-ins Plug-ins are modules in TuneD profiles that TuneD uses to monitor or optimize different devices on the system. TuneD uses two types of plug-ins: Monitoring plug-ins Monitoring plug-ins are used to get information from a running system. The output of the monitoring plug-ins can be used by tuning plug-ins for dynamic tuning. 
Monitoring plug-ins are automatically instantiated whenever their metrics are needed by any of the enabled tuning plug-ins. If two tuning plug-ins require the same data, only one instance of the monitoring plug-in is created and the data is shared. Tuning plug-ins Each tuning plug-in tunes an individual subsystem and takes several parameters that are populated from the TuneD profiles. Each subsystem can have multiple devices, such as multiple CPUs or network cards, that are handled by individual instances of the tuning plug-ins. Specific settings for individual devices are also supported. Syntax for plug-ins in TuneD profiles Sections describing plug-in instances are formatted in the following way: NAME is the name of the plug-in instance as it is used in the logs. It can be an arbitrary string. TYPE is the type of the tuning plug-in. DEVICES is the list of devices that this plug-in instance handles. The devices line can contain a list, a wildcard ( * ), and negation ( ! ). If there is no devices line, all devices present or later attached on the system of the TYPE are handled by the plug-in instance. This is same as using the devices=* option. Example 3.4. Matching block devices with a plug-in The following example matches all block devices starting with sd , such as sda or sdb , and does not disable barriers on them: The following example matches all block devices except sda1 and sda2 : If no instance of a plug-in is specified, the plug-in is not enabled. If the plug-in supports more options, they can be also specified in the plug-in section. If the option is not specified and it was not previously specified in the included plug-in, the default value is used. Short plug-in syntax If you do not need custom names for the plug-in instance and there is only one definition of the instance in your configuration file, TuneD supports the following short syntax: In this case, it is possible to omit the type line. The instance is then referred to with a name, same as the type. The example could be then rewritten into: Example 3.5. Matching block devices using the short syntax Conflicting plug-in definitions in a profile If the same section is specified more than once using the include option, the settings are merged. If they cannot be merged due to a conflict, the last conflicting definition overrides the settings. If you do not know what was previously defined, you can use the replace Boolean option and set it to true . This causes all the definitions with the same name to be overwritten and the merge does not happen. You can also disable the plug-in by specifying the enabled=false option. This has the same effect as if the instance was never defined. Disabling the plug-in is useful if you are redefining the definition from the include option and do not want the plug-in to be active in your custom profile. NOTE TuneD includes the ability to run any shell command as part of enabling or disabling a tuning profile. This enables you to extend TuneD profiles with functionality that has not been integrated into TuneD yet. You can specify arbitrary shell commands using the script plug-in. Additional resources tuned.conf(5) man page on your system 3.8. Available TuneD plug-ins Monitoring plug-ins Currently, the following monitoring plug-ins are implemented: disk Gets disk load (number of IO operations) per device and measurement interval. net Gets network load (number of transferred packets) per network card and measurement interval. load Gets CPU load per CPU and measurement interval. 
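Before looking at the individual tuning plug-ins that follow, the following minimal sketch shows how the syntax rules above fit together in a child profile, including the replace and enabled options. The instance names, device patterns, and values are assumptions for illustration; whether they match instances defined by the parent profile depends on that profile's contents.

```ini
# /etc/tuned/my-child-profile/tuned.conf -- illustrative sketch
[main]
include=powersave

# Redefine this disk instance from scratch instead of merging it with a
# definition of the same name inherited via the include option
[data_disk]
type=disk
devices=sdb*
replace=true
spindown=6

# Disable an inherited plug-in instance entirely
[audio]
enabled=false
```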
Tuning plug-ins Currently, the following tuning plug-ins are implemented. Only some of these plug-ins implement dynamic tuning. Options supported by plug-ins are also listed: cpu Sets the CPU governor to the value specified by the governor option and dynamically changes the Power Management Quality of Service (PM QoS) CPU Direct Memory Access (DMA) latency according to the CPU load. If the CPU load is lower than the value specified by the load_threshold option, the latency is set to the value specified by the latency_high option, otherwise it is set to the value specified by latency_low . You can also force the latency to a specific value and prevent it from dynamically changing further. To do so, set the force_latency option to the required latency value. eeepc_she Dynamically sets the front-side bus (FSB) speed according to the CPU load. This feature can be found on some netbooks and is also known as the ASUS Super Hybrid Engine (SHE). If the CPU load is lower or equal to the value specified by the load_threshold_powersave option, the plug-in sets the FSB speed to the value specified by the she_powersave option. If the CPU load is higher or equal to the value specified by the load_threshold_normal option, it sets the FSB speed to the value specified by the she_normal option. Static tuning is not supported and the plug-in is transparently disabled if TuneD does not detect the hardware support for this feature. net Configures the Wake-on-LAN functionality to the values specified by the wake_on_lan option. It uses the same syntax as the ethtool utility. It also dynamically changes the interface speed according to the interface utilization. sysctl Sets various sysctl settings specified by the plug-in options. The syntax is name = value , where name is the same as the name provided by the sysctl utility. Use the sysctl plug-in if you need to change system settings that are not covered by other plug-ins available in TuneD . If the settings are covered by some specific plug-ins, prefer these plug-ins. usb Sets autosuspend timeout of USB devices to the value specified by the autosuspend parameter. The value 0 means that autosuspend is disabled. vm Enables or disables transparent huge pages depending on the value of the transparent_hugepages option. Valid values of the transparent_hugepages option are: "always" "never" "madvise" audio Sets the autosuspend timeout for audio codecs to the value specified by the timeout option. Currently, the snd_hda_intel and snd_ac97_codec codecs are supported. The value 0 means that the autosuspend is disabled. You can also enforce the controller reset by setting the Boolean option reset_controller to true . disk Sets the disk elevator to the value specified by the elevator option. It also sets: APM to the value specified by the apm option Scheduler quantum to the value specified by the scheduler_quantum option Disk spindown timeout to the value specified by the spindown option Disk readahead to the value specified by the readahead parameter The current disk readahead to a value multiplied by the constant specified by the readahead_multiply option In addition, this plug-in dynamically changes the advanced power management and spindown timeout setting for the drive according to the current drive utilization. The dynamic tuning can be controlled by the Boolean option dynamic and is enabled by default. scsi_host Tunes options for SCSI hosts. It sets Aggressive Link Power Management (ALPM) to the value specified by the alpm option. 
mounts Enables or disables barriers for mounts according to the Boolean value of the disable_barriers option. script Executes an external script or binary when the profile is loaded or unloaded. You can choose an arbitrary executable. Important The script plug-in is provided mainly for compatibility with earlier releases. Prefer other TuneD plug-ins if they cover the required functionality. TuneD calls the executable with one of the following arguments: start when loading the profile stop when unloading the profile You need to correctly implement the stop action in your executable and revert all settings that you changed during the start action. Otherwise, the roll-back step after changing your TuneD profile will not work. Bash scripts can import the /usr/lib/tuned/functions Bash library and use the functions defined there. Use these functions only for functionality that is not natively provided by TuneD . If a function name starts with an underscore, such as _wifi_set_power_level , consider the function private and do not use it in your scripts, because it might change in the future. Specify the path to the executable using the script parameter in the plug-in configuration. Example 3.6. Running a Bash script from a profile To run a Bash script named script.sh that is located in the profile directory, use: sysfs Sets various sysfs settings specified by the plug-in options. The syntax is name = value , where name is the sysfs path to use. Use this plugin in case you need to change some settings that are not covered by other plug-ins. Prefer specific plug-ins if they cover the required settings. video Sets various powersave levels on video cards. Currently, only the Radeon cards are supported. The powersave level can be specified by using the radeon_powersave option. Supported values are: default auto low mid high dynpm dpm-battery dpm-balanced dpm-perfomance For details, see www.x.org . Note that this plug-in is experimental and the option might change in future releases. bootloader Adds options to the kernel command line. This plug-in supports only the GRUB boot loader. Customized non-standard location of the GRUB configuration file can be specified by the grub2_cfg_file option. The kernel options are added to the current GRUB configuration and its templates. The system needs to be rebooted for the kernel options to take effect. Switching to another profile or manually stopping the TuneD service removes the additional options. If you shut down or reboot the system, the kernel options persist in the grub.cfg file. The kernel options can be specified by the following syntax: Example 3.7. Modifying the kernel command line For example, to add the quiet kernel option to a TuneD profile, include the following lines in the tuned.conf file: The following is an example of a custom profile that adds the isolcpus=2 option to the kernel command line: service Handles various sysvinit , sysv-rc , openrc , and systemd services specified by the plug-in options. The syntax is service. service_name = command [,file: file ] . Supported service-handling commands are: start stop enable disable Separate multiple commands using either a comma ( , ) or a semicolon ( ; ). If the directives conflict, the service plugin uses the last listed one. Use the optional file: file directive to install an overlay configuration file, file , for systemd only. Other init systems ignore this directive. The service plugin copies overlay configuration files to /etc/systemd/system/ service_name .service.d/ directories. 
Once profiles are unloaded, the service plugin removes these directories if they are empty. Note The service plugin only operates on the current runlevel with non- systemd init systems. Example 3.8. Starting and enabling the sendmail sendmail service with an overlay file The internal variable USD{i:PROFILE_DIR} points to the directory the plugin loads the profile from. scheduler Offers a variety of options for the tuning of scheduling priorities, CPU core isolation, and process, thread, and IRQ affinities. For specifics of the different options available, see Functionalities of the scheduler TuneD plug-in . 3.9. Functionalities of the scheduler TuneD plugin Use the scheduler TuneD plugin to control and tune scheduling priorities, CPU core isolation, and process, thread, and IRQ afinities. CPU isolation To prevent processes, threads, and IRQs from using certain CPUs, use the isolated_cores option. It changes process and thread affinities, IRQ affinities, and sets the default_smp_affinity parameter for IRQs. The CPU affinity mask is adjusted for all processes and threads matching the ps_whitelist option, subject to success of the sched_setaffinity() system call. The default setting of the ps_whitelist regular expression is .* to match all processes and thread names. To exclude certain processes and threads, use the ps_blacklist option. The value of this option is also interpreted as a regular expression. Process and thread names are matched against that expression. Profile rollback enables all matching processes and threads to run on all CPUs, and restores the IRQ settings prior to the profile application. Multiple regular expressions separated by ; for the ps_whitelist and ps_blacklist options are supported. Escaped semicolon \; is taken literally. Example 3.9. Isolate CPUs 2-4 The following configuration isolates CPUs 2-4. Processes and threads that match the ps_blacklist regular expression can use any CPUs regardless of the isolation: IRQ SMP affinity The /proc/irq/default_smp_affinity file contains a bitmask representing the default target CPU cores on a system for all inactive interrupt request (IRQ) sources. Once an IRQ is activated or allocated, the value in the /proc/irq/default_smp_affinity file determines the IRQ's affinity bitmask. The default_irq_smp_affinity parameter controls what TuneD writes to the /proc/irq/default_smp_affinity file. The default_irq_smp_affinity parameter supports the following values and behaviors: calc Calculates the content of the /proc/irq/default_smp_affinity file from the isolated_cores parameter. An inversion of the isolated_cores parameter calculates the non-isolated cores. The intersection of the non-isolated cores and the content of the /proc/irq/default_smp_affinity file is then written to the /proc/irq/default_smp_affinity file. This is the default behavior if the default_irq_smp_affinity parameter is omitted. ignore TuneD does not modify the /proc/irq/default_smp_affinity file. A CPU list Takes the form of a single number such as 1 , a comma separated list such as 1,3 , or a range such as 3-5 . Unpacks the CPU list and writes it directly to the /proc/irq/default_smp_affinity file. Example 3.10. Setting the default IRQ smp affinity using an explicit CPU list The following example uses an explicit CPU list to set the default IRQ SMP affinity to CPUs 0 and 2: Scheduling policy To adjust scheduling policy, priority and affinity for a group of processes or threads, use the following syntax: where rule_prio defines internal TuneD priority of the rule. 
Rules are sorted based on priority. This is needed for inheritance to be able to reorder previously defined rules. Equal rule_prio rules should be processed in the order they were defined. However, this is Python interpreter dependent. To disable an inherited rule for groupname , use: sched must be one of the following: f for first in, first out (FIFO) b for batch r for round robin o for other * for do not change affinity is CPU affinity in hexadecimal. Use * for no change. prio is scheduling priority (see chrt -m ). regex is Python regular expression. It is matched against the output of the ps -eo cmd command. Any given process name can match more than one group. In such cases, the last matching regex determines the priority and scheduling policy. Example 3.11. Setting scheduling policies and priorities The following example sets the scheduling policy and priorities to kernel threads and watchdog: The scheduler plugin uses a perf event loop to identify newly created processes. By default, it listens to perf.RECORD_COMM and perf.RECORD_EXIT events. Setting the perf_process_fork parameter to true tells the plug-in to also listen to perf.RECORD_FORK events, meaning that child processes created by the fork() system call are processed. Note Processing perf events can pose a significant CPU overhead. The CPU overhead of the scheduler plugin can be mitigated by using the scheduler runtime option and setting it to 0 . This completely disables the dynamic scheduler functionality and the perf events are not monitored and acted upon. The disadvantage of this is that the process and thread tuning will be done only at profile application. Example 3.12. Disabling the dynamic scheduler functionality The following example disables the dynamic scheduler functionality while also isolating CPUs 1 and 3: The mmapped buffer is used for perf events. Under heavy loads, this buffer might overflow and as a result the plugin might start missing events and not processing some newly created processes. In such cases, use the perf_mmap_pages parameter to increase the buffer size. The value of the perf_mmap_pages parameter must be a power of 2. If the perf_mmap_pages parameter is not manually set, a default value of 128 is used. Confinement using cgroups The scheduler plugin supports process and thread confinement using cgroups v1. The cgroup_mount_point option specifies the path to mount the cgroup file system, or, where TuneD expects it to be mounted. If it is unset, /sys/fs/cgroup/cpuset is expected. If the cgroup_groups_init option is set to 1 , TuneD creates and removes all cgroups defined with the cgroup* options. This is the default behavior. If the cgroup_mount_point option is set to 0 , the cgroups must be preset by other means. If the cgroup_mount_point_init option is set to 1 , TuneD creates and removes the cgroup mount point. It implies cgroup_groups_init = 1 . If the cgroup_mount_point_init option is set to 0 , you must preset the cgroups mount point by other means. This is the default behavior. The cgroup_for_isolated_cores option is the cgroup name for the isolated_cores option functionality. For example, if a system has 4 CPUs, isolated_cores=1 means that Tuned moves all processes and threads to CPUs 0, 2, and 3. The scheduler plug-in isolates the specified core by writing the calculated CPU affinity to the cpuset.cpus control file of the specified cgroup and moves all the matching processes and threads to this group. 
If this option is unset, classic cpuset affinity using sched_setaffinity() sets the CPU affinity. The cgroup. cgroup_name option defines affinities for arbitrary cgroups . You can even use hierarchic cgroups, but you must specify the hierarchy in the correct order. TuneD does not do any sanity checks here, with the exception that it forces the cgroup to be in the location specified by the cgroup_mount_point option. The syntax of the scheduler option starting with group. has been augmented to use cgroup. cgroup_name instead of the hexadecimal affinity . The matching processes are moved to the cgroup cgroup_name . You can also use cgroups not defined by the cgroup. option as described above. For example, cgroups not managed by TuneD . All cgroup names are sanitized by replacing all periods ( . ) with slashes ( / ). This prevents the plugin from writing outside the location specified by the cgroup_mount_point option. Example 3.13. Using cgroups v1 with the scheduler plug-in The following example creates 2 cgroups , group1 and group2 . It sets the cgroup group1 affinity to CPU 2 and the cgroup group2 to CPUs 0 and 2. Given a 4 CPU setup, the isolated_cores=1 option moves all processes and threads to CPU cores 0, 2, and 3. Processes and threads specified by the ps_blacklist regular expression are not moved. The cgroup_ps_blacklist option excludes processes belonging to the specified cgroups . The regular expression specified by this option is matched against cgroup hierarchies from /proc/ PID /cgroups . Commas ( , ) separate cgroups v1 hierarchies from /proc/ PID /cgroups before regular expression matching. The following is an example of content the regular expression is matched against: Multiple regular expressions can be separated by semicolons ( ; ). The semicolon represents a logical 'or' operator. Example 3.14. Excluding processes from the scheduler using cgroups In the following example, the scheduler plug-in moves all processes away from core 1, except for processes which belong to cgroup /daemons . The \b string is a regular expression metacharacter that matches a word boundary. In the following example, the scheduler plugin excludes all processes which belong to a cgroup with a hierarchy-ID of 8 and controller-list blkio . Recent kernels moved some sched_ and numa_balancing_ kernel run-time parameters from the /proc/sys/kernel directory managed by the sysctl utility, to debugfs , typically mounted under the /sys/kernel/debug directory. TuneD provides an abstraction mechanism for the following parameters via the scheduler plugin where, based on the kernel used, TuneD writes the specified value to the correct location: sched_min_granularity_ns sched_latency_ns , sched_wakeup_granularity_ns sched_tunable_scaling , sched_migration_cost_ns sched_nr_migrate numa_balancing_scan_delay_ms numa_balancing_scan_period_min_ms numa_balancing_scan_period_max_ms numa_balancing_scan_size_mb Example 3.15. Set tasks' "cache hot" value for migration decisions. On the old kernels, setting the following parameter meant that sysctl wrote a value of 500000 to the /proc/sys/kernel/sched_migration_cost_ns file: This is, on more recent kernels, equivalent to setting the following parameter via the scheduler plugin: Meaning TuneD writes a value of 500000 to the /sys/kernel/debug/sched/migration_cost_ns file. 3.10. Variables in TuneD profiles Variables expand at run time when a TuneD profile is activated. Using TuneD variables reduces the amount of necessary typing in TuneD profiles. 
There are no predefined variables in TuneD profiles. You can define your own variables by creating the [variables] section in a profile and using the following syntax: To expand the value of a variable in a profile, use the following syntax: Example 3.16. Isolating CPU cores using variables In the following example, the USD{isolated_cores} variable expands to 1,2 ; hence the kernel boots with the isolcpus=1,2 option: The variables can be specified in a separate file. For example, you can add the following lines to tuned.conf : If you add the isolated_cores=1,2 option to the /etc/tuned/my-variables.conf file, the kernel boots with the isolcpus=1,2 option. Additional resources tuned.conf(5) man page on your system 3.11. Built-in functions in TuneD profiles Built-in functions expand at run time when a TuneD profile is activated. You can: Use various built-in functions together with TuneD variables Create custom functions in Python and add them to TuneD in the form of plug-ins To call a function, use the following syntax: To expand the directory path where the profile and the tuned.conf file are located, use the PROFILE_DIR function, which requires special syntax: Example 3.17. Isolating CPU cores using variables and built-in functions In the following example, the USD{non_isolated_cores} variable expands to 0,3-5 , and the cpulist_invert built-in function is called with the 0,3-5 argument: The cpulist_invert function inverts the list of CPUs. For a 6-CPU machine, the inversion is 1,2 , and the kernel boots with the isolcpus=1,2 command-line option. Additional resources tuned.conf(5) man page on your system 3.12. Built-in functions available in TuneD profiles The following built-in functions are available in all TuneD profiles: PROFILE_DIR Returns the directory path where the profile and the tuned.conf file are located. exec Executes a process and returns its output. assertion Compares two arguments. If they do not match , the function logs text from the first argument and aborts profile loading. assertion_non_equal Compares two arguments. If they match , the function logs text from the first argument and aborts profile loading. kb2s Converts kilobytes to disk sectors. s2kb Converts disk sectors to kilobytes. strip Creates a string from all passed arguments and deletes both leading and trailing white space. virt_check Checks whether TuneD is running inside a virtual machine (VM) or on bare metal: Inside a VM, the function returns the first argument. On bare metal, the function returns the second argument, even in case of an error. cpulist_invert Inverts a list of CPUs to make its complement. For example, on a system with 4 CPUs, numbered from 0 to 3, the inversion of the list 0,2,3 is 1 . cpulist2hex Converts a CPU list to a hexadecimal CPU mask. cpulist2hex_invert Converts a CPU list to a hexadecimal CPU mask and inverts it. hex2cpulist Converts a hexadecimal CPU mask to a CPU list. cpulist_online Checks whether the CPUs from the list are online. Returns the list containing only online CPUs. cpulist_present Checks whether the CPUs from the list are present. Returns the list containing only present CPUs. cpulist_unpack Unpacks a CPU list in the form of 1-3,4 to 1,2,3,4 . cpulist_pack Packs a CPU list in the form of 1,2,3,5 to 1-3,5 . 3.13. Creating new TuneD profiles This procedure creates a new TuneD profile with custom performance rules. Prerequisites The TuneD service is running. See Installing and Enabling TuneD for details. 
Procedure In the /etc/tuned/ directory, create a new directory named the same as the profile that you want to create: In the new directory, create a file named tuned.conf . Add a [main] section and plug-in definitions in it, according to your requirements. For example, see the configuration of the balanced profile: To activate the profile, use: Verify that the TuneD profile is active and the system settings are applied: Additional resources tuned.conf(5) man page on your system 3.14. Modifying existing TuneD profiles This procedure creates a modified child profile based on an existing TuneD profile. Prerequisites The TuneD service is running. See Installing and Enabling TuneD for details. Procedure In the /etc/tuned/ directory, create a new directory named the same as the profile that you want to create: In the new directory, create a file named tuned.conf , and set the [main] section as follows: Replace parent-profile with the name of the profile you are modifying. Include your profile modifications. Example 3.18. Lowering swappiness in the throughput-performance profile To use the settings from the throughput-performance profile and change the value of vm.swappiness to 5, instead of the default 10, use: To activate the profile, use: Verify that the TuneD profile is active and the system settings are applied: Additional resources tuned.conf(5) man page on your system 3.15. Setting the disk scheduler using TuneD This procedure creates and enables a TuneD profile that sets a given disk scheduler for selected block devices. The setting persists across system reboots. In the following commands and configuration, replace: device with the name of the block device, for example sdf selected-scheduler with the disk scheduler that you want to set for the device, for example bfq Prerequisites The TuneD service is installed and enabled. For details, see Installing and enabling TuneD . Procedure Optional: Select an existing TuneD profile on which your profile will be based. For a list of available profiles, see TuneD profiles distributed with RHEL . To see which profile is currently active, use: Create a new directory to hold your TuneD profile: Find the system unique identifier of the selected block device: Note The command in the this example will return all values identified as a World Wide Name (WWN) or serial number associated with the specified block device. Although it is preferred to use a WWN, the WWN is not always available for a given device and any values returned by the example command are acceptable to use as the device system unique ID . Create the /etc/tuned/ my-profile /tuned.conf configuration file. In the file, set the following options: Optional: Include an existing profile: Set the selected disk scheduler for the device that matches the WWN identifier: Here: Replace IDNAME with the name of the identifier being used (for example, ID_WWN ). Replace device system unique id with the value of the chosen identifier (for example, 0x5002538d00000000 ). To match multiple devices in the devices_udev_regex option, enclose the identifiers in parentheses and separate them with vertical bars: Enable your profile: Verification Verify that the TuneD profile is active and applied: Read the contents of the /sys/block/ device /queue/scheduler file: In the file name, replace device with the block device name, for example sdc . The active scheduler is listed in square brackets ( [] ). Additional resources Customizing TuneD profiles .
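As a closing illustration, the following sketch combines several of the mechanisms from this chapter, namely inheritance, variables, and the bootloader, sysctl, scheduler, and disk plug-ins, in a single custom profile. The profile name, CPU range, device pattern, and values are assumptions for illustration only, not recommended settings.

```ini
# /etc/tuned/my-combined-profile/tuned.conf -- illustrative sketch
[main]
summary=Example child profile combining features from this chapter
include=throughput-performance

[variables]
isolated_cores=2-3

[bootloader]
# Expands to isolcpus=2-3 on the kernel command line
cmdline=isolcpus=${isolated_cores}

[sysctl]
vm.swappiness=5

[scheduler]
isolated_cores=${isolated_cores}

[disk]
devices=sd*
readahead=4096
```

You would activate and check such a profile with the same commands used in the procedures above, for example tuned-adm profile my-combined-profile followed by tuned-adm verify .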
[ "tuned-adm profile virtual-guest powersave", "[main] include= parent", "[main] include=balanced [scsi_host] alpm=min_power", "[ NAME ] type= TYPE devices= DEVICES", "[data_disk] type=disk devices=sd* disable_barriers=false", "[data_disk] type=disk devices=!sda1, !sda2 disable_barriers=false", "[ TYPE ] devices= DEVICES", "[disk] devices=sdb* disable_barriers=false", "[script] script=USD{i:PROFILE_DIR}/script.sh", "cmdline= arg1 arg2 ... argN", "[bootloader] cmdline=quiet", "[bootloader] cmdline=isolcpus=2", "[service] service.sendmail=start,enable,file:USD{i:PROFILE_DIR}/tuned-sendmail.conf", "[scheduler] isolated_cores=2-4 ps_blacklist=.*pmd.*;.*PMD.*;^DPDK;.*qemu-kvm.*", "[scheduler] isolated_cores=1,3 default_irq_smp_affinity=0,2", "group. groupname = rule_prio : sched : prio : affinity : regex", "group. groupname =", "[scheduler] group.kthreads=0:*:1:*:\\[.*\\]USD group.watchdog=0:f:99:*:\\[watchdog.*\\]", "[scheduler] runtime=0 isolated_cores=1,3", "[scheduler] cgroup_mount_point=/sys/fs/cgroup/cpuset cgroup_mount_point_init=1 cgroup_groups_init=1 cgroup_for_isolated_cores=group cgroup.group1=2 cgroup.group2=0,2 group.ksoftirqd=0:f:2:cgroup.group1:ksoftirqd.* ps_blacklist=ksoftirqd.*;rcuc.*;rcub.*;ktimersoftd.* isolated_cores=1", "10:hugetlb:/,9:perf_event:/,8:blkio:/", "[scheduler] isolated_cores=1 cgroup_ps_blacklist=:/daemons\\b", "[scheduler] isolated_cores=1 cgroup_ps_blacklist=\\b8:blkio:", "[sysctl] kernel.sched_migration_cost_ns=500000", "[scheduler] sched_migration_cost_ns=500000", "[variables] variable_name = value", "USD{ variable_name }", "[variables] isolated_cores=1,2 [bootloader] cmdline=isolcpus=USD{isolated_cores}", "[variables] include=/etc/tuned/ my-variables.conf [bootloader] cmdline=isolcpus=USD{isolated_cores}", "USD{f: function_name : argument_1 : argument_2 }", "USD{i:PROFILE_DIR}", "[variables] non_isolated_cores=0,3-5 [bootloader] cmdline=isolcpus=USD{f:cpulist_invert:USD{non_isolated_cores}}", "mkdir /etc/tuned/ my-profile", "[main] summary=General non-specialized TuneD profile [cpu] governor=conservative energy_perf_bias=normal [audio] timeout=10 [video] radeon_powersave=dpm-balanced, auto [scsi_host] alpm=medium_power", "tuned-adm profile my-profile", "tuned-adm active Current active profile: my-profile", "tuned-adm verify Verification succeeded, current system settings match the preset profile. See tuned log file ('/var/log/tuned/tuned.log') for details.", "mkdir /etc/tuned/ modified-profile", "[main] include= parent-profile", "[main] include=throughput-performance [sysctl] vm.swappiness=5", "tuned-adm profile modified-profile", "tuned-adm active Current active profile: my-profile", "tuned-adm verify Verification succeeded, current system settings match the preset profile. See tuned log file ('/var/log/tuned/tuned.log') for details.", "tuned-adm active", "mkdir /etc/tuned/ my-profile", "udevadm info --query=property --name=/dev/ device | grep -E '(WWN|SERIAL)' ID_WWN= 0x5002538d00000000_ ID_SERIAL= Generic-_SD_MMC_20120501030900000-0:0 ID_SERIAL_SHORT= 20120501030900000", "[main] include= existing-profile", "[disk] devices_udev_regex= IDNAME = device system unique id elevator= selected-scheduler", "devices_udev_regex=(ID_WWN= 0x5002538d00000000 )|(ID_WWN= 0x1234567800000000 )", "tuned-adm profile my-profile", "tuned-adm active Current active profile: my-profile", "tuned-adm verify Verification succeeded, current system settings match the preset profile. 
See TuneD log file ('/var/log/tuned/tuned.log') for details.", "cat /sys/block/ device /queue/scheduler [mq-deadline] kyber bfq none" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/customizing-tuned-profiles_monitoring-and-managing-system-status-and-performance
4.2. Live Migration and Red Hat Enterprise Linux Version Compatibility
4.2. Live Migration and Red Hat Enterprise Linux Version Compatibility Live Migration is supported as shown in Table 4.1, "Live Migration Compatibility" : Table 4.1. Live Migration Compatibility Migration Method Release Type Example Live Migration Support Notes Forward Major release 5.x to 6.y Not supported Forward Minor release 5.x to 5.y (y>x, x>=4) Fully supported Any issues should be reported Forward Minor release 6.x to 6.y (y>x, x>=0) Fully supported Any issues should be reported Backward Major release 6.x to 5.y Not supported Backward Minor release 5.x to 5.y (x>y,y>=4) Supported Refer to Troubleshooting problems with migration for known issues Backward Minor release 6.x to 6.y (x>y, y>=0) Supported Refer to Troubleshooting problems with migration for known issues Troubleshooting problems with migration Issues with SPICE - It has been found that SPICE has an incompatible change when migrating from Red Hat Enterprise Linux 6.0 to 6.1. In such cases, the client may disconnect and then reconnect, causing a temporary loss of audio and video. This is only temporary and all services will resume. Issues with USB - Red Hat Enterprise Linux 6.2 added USB functionality which included migration support, but not without certain caveats which reset USB devices and caused any application running over the device to abort. This problem was fixed in Red Hat Enterprise Linux 6.4, and should not occur in future versions. To prevent this from happening in a version prior to 6.4, abstain from migrating while USB devices are in use. Issues with the migration protocol - If backward migration ends with "unknown section error", repeating the migration process can repair the issue as it may be a transient error. If not, please report the problem. Configuring Network Storage Configure shared storage and install a guest virtual machine on the shared storage. Alternatively, use the NFS example in Section 4.3, "Shared Storage Example: NFS for a Simple Migration"
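To illustrate a supported forward minor-release migration in practice, the following hedged sketch live-migrates a running guest over SSH once shared storage is configured; the guest name and destination host are placeholders, not values taken from this chapter.

# on the source host: migrate the running guest to the destination hypervisor
virsh migrate --live guest1-rhel6-64 qemu+ssh://destination.example.com/system

# on the destination host: confirm the guest is now running there
virsh list

If a backward migration fails with the "unknown section error" mentioned above, repeating the same virsh migrate command is often enough to recover.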
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-Live_migration_and_RHEL_compatibility
Chapter 1. Preparing to install on Azure
Chapter 1. Preparing to install on Azure 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on Azure Before installing OpenShift Container Platform on Microsoft Azure, you must configure an Azure account. See Configuring an Azure account for details about account configuration, account limits, public DNS zone configuration, required roles, creating service principals, and supported Azure regions. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Alternatives to storing administrator-level secrets in the kube-system project for other options. 1.3. Choosing a method to install OpenShift Container Platform on Azure You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on Azure : You can install OpenShift Container Platform on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on Azure : You can install a customized cluster on Azure infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on Azure with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on Azure into an existing VNet : You can install OpenShift Container Platform on an existing Azure Virtual Network (VNet) on Azure. You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on Azure : You can install a private cluster into an existing Azure Virtual Network (VNet) on Azure. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 
Installing a cluster on Azure into a government region : OpenShift Container Platform can be deployed into Microsoft Azure Government (MAG) regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure infrastructure that you provision, by using the following method: Installing a cluster on Azure using ARM templates : You can install OpenShift Container Platform on Azure by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 1.4. Next steps Configuring an Azure account
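As a hedged sketch of the installer-provisioned flow summarized above, the following commands generate an installation configuration and then create the cluster; the directory name is a placeholder and the interactive prompts depend on your Azure account.

# generate install-config.yaml, selecting azure as the platform when prompted
openshift-install create install-config --dir <installation_directory>

# provision the cluster from that configuration
openshift-install create cluster --dir <installation_directory> --log-level=info

The linked installation procedures cover the prerequisites, credentials, and customization options in full.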
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_azure/preparing-to-install-on-azure
6.4. VMware
6.4. VMware open-vm-tools To enhance performance and user experience when running Red Hat Enterprise Linux 7 as the guest on VMware ESX, Red Hat Enterprise Linux 7 includes the latest stable release of open-vm-tools .
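As a hedged illustration that is not part of the original release note, the following commands confirm on a RHEL 7 guest that the bundled tools are installed and that the VMware tools daemon is running; the package and service names are the upstream defaults.

# confirm the package is installed
yum list installed open-vm-tools

# check that the VMware tools daemon is active
systemctl status vmtoolsd.service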
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-virtualization-vmware
Chapter 9. Importing projects from Git repositories
Chapter 9. Importing projects from Git repositories Git is a distributed version control system. It implements revisions as commit objects. When you save your changes to a repository, a new commit object in the Git repository is created. Business Central uses Git to store project data, including assets such as rules and processes. When you create a project in Business Central, it is added to a Git repository that is connected to Business Central. If you have projects in Git repositories, you can import the project's master branch or import the master branch along with other specific branches into the Business Central Git repository through Business Central spaces. Prerequisites Red Hat Process Automation Manager projects exist in an external Git repository. You have the credentials required for read access to that external Git repository. Procedure In Business Central, go to Menu → Design → Projects . Select or create the space into which you want to import the projects. The default space is MySpace . In the upper-right corner of the screen, click the arrow next to Add Project and select Import Project . In the Import Project window, enter the URL and credentials for the Git repository that contains the project that you want to import and click Import . The Import Projects page is displayed. Optional: To import master and specific branches, do the following tasks: On the Import Projects page, click the branches icon. In the Branches to be imported window, select branches from the list. Note You must select the master branch as a minimum. Click Ok . On the Import Projects page, ensure the project is highlighted and click Ok .
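Before starting the import, it can be useful to confirm that the repository URL and credentials grant read access and to preview the branches that the Branches to be imported window will list. The following sketch uses a placeholder URL, not one from this guide.

# list the remote branches, supplying credentials if prompted
git ls-remote --heads https://git.example.com/<your-org>/<your-project>.git

A successful run returns one ref per branch, including refs/heads/master, which must be selected during the import.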
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/git-import-project
3.6. Runtime Device Power Management
3.6. Runtime Device Power Management Runtime device power management (RDPM) helps to reduce power consumption with minimum user-visible impact. If a device has been idle for a sufficient time and RDPM hardware support exists in both the device and the driver, the device is put into a lower power state. The recovery from the lower power state is assured by an external I/O event for this device, which triggers the kernel and the device driver to bring the device back to the running state. All this occurs automatically, as RDPM is enabled by default. Users can control RDPM of a device by setting the attribute in a particular RDPM configuration file. The RDPM configuration files for particular devices can be found in the /sys/devices/ device /power/ directory, where device is the path to the directory of a particular device. For example, to configure RDPM for a CPU, access this directory: Bringing a device back from a lower power state to the running state adds additional latency to the I/O operation. The duration of that additional delay is device-specific. The configuration scheme described here allows the system administrator to disable RDPM on a device-by-device basis and to both examine and control some of the other parameters. Every /sys/devices/ device /power directory contains the following configuration files: control This file is used to enable or disable RDPM for a particular device. The attribute in the control file has one of the following two values: auto - the default for all devices; they may be subject to automatic RDPM, depending on their driver. on - prevents the driver from managing the device's power state at run time. autosuspend_delay_ms This file controls the auto-suspend delay, which is the minimum time period of inactivity between the idle state and suspending of the device. The file contains the auto-suspend delay value in milliseconds. A negative value prevents the device from being suspended at run time, thus having the same effect as setting the attribute in the /sys/devices/ device /power/control file to on . Values higher than 1000 are rounded up to the nearest second.
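The following hedged sketch shows how the control and autosuspend_delay_ms files are typically inspected and changed from a shell; the device path is a placeholder and the values follow the semantics described above.

# check whether runtime power management is enabled for the CPU device
cat /sys/devices/system/cpu/power/control

# disable runtime power management for a particular device
echo on > /sys/devices/<device>/power/control

# allow suspension only after 5 seconds of inactivity
echo 5000 > /sys/devices/<device>/power/autosuspend_delay_ms

Writing auto back to the control file re-enables automatic runtime power management for that device.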
[ "/sys/devices/system/cpu/power/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/runtime_device_power_management
Chapter 2. Installing an Identity Management server using an Ansible playbook
Chapter 2. Installing an Identity Management server using an Ansible playbook Learn more about how to configure a system as an IdM server by using Ansible . Configuring a system as an IdM server establishes an IdM domain and enables the system to offer IdM services to IdM clients. You can manage the deployment by using the ipaserver Ansible role. Prerequisites You understand the general Ansible and IdM concepts. 2.1. Ansible and its advantages for installing IdM Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for Identity Management (IdM), and you can use Ansible modules to automate installation tasks such as the setup of an IdM server, replica, client, or an entire IdM topology. Advantages of using Ansible to install IdM The following list presents advantages of installing Identity Management using Ansible in contrast to manual installation. You do not need to log into the managed node. You do not need to configure settings on each host to be deployed individually. Instead, you can have one inventory file to deploy a complete cluster. You can reuse an inventory file later for management tasks, for example to add users and hosts. You can reuse an inventory file even for such tasks as are not related to IdM. Additional resources Automating Red Hat Enterprise Linux Identity Management installation Planning Identity Management Preparing the system for IdM server installation 2.2. Installing the ansible-freeipa package Follow this procedure to install the ansible-freeipa package that provides Ansible roles and modules for installing and managing Identity Management (IdM) . Prerequisites Ensure that the controller is a Red Hat Enterprise Linux system with a valid subscription. If this is not the case, see the official Ansible documentation Installation guide for alternative installation instructions. Ensure that you can reach the managed node over the SSH protocol from the controller. Check that the managed node is listed in the /root/.ssh/known_hosts file of the controller. Procedure Use the following procedure on the Ansible controller. If your system is running on RHEL 8.5 and earlier, enable the required repository: If your system is running on RHEL 8.5 and earlier, install the ansible package: Install the ansible-freeipa package: The roles and modules are installed into the /usr/share/ansible/roles/ and /usr/share/ansible/plugins/modules directories. 2.3. Ansible roles location in the file system By default, the ansible-freeipa roles are installed to the /usr/share/ansible/roles/ directory. The structure of the ansible-freeipa package is as follows: The /usr/share/ansible/roles/ directory stores the ipaserver , ipareplica , and ipaclient roles on the Ansible controller. Each role directory stores examples, a basic overview, the license and documentation about the role in a README.md Markdown file. The /usr/share/doc/ansible-freeipa/ directory stores the documentation about individual roles and the topology in README.md Markdown files. It also stores the playbooks/ subdirectory. The /usr/share/doc/ansible-freeipa/playbooks/ directory stores the example playbooks: 2.4. Setting the parameters for a deployment with an integrated DNS and an integrated CA as the root CA Complete this procedure to configure the inventory file for installing an IdM server with an integrated CA as the root CA in an environment that uses the IdM integrated DNS solution. Note The inventory in this procedure uses the INI format. 
You can, alternatively, use the YAML or JSON formats. Procedure Create a ~/MyPlaybooks/ directory: Create a ~/MyPlaybooks/inventory file. Open the inventory file for editing. Specify the fully-qualified domain names ( FQDN ) of the host you want to use as an IdM server. Ensure that the FQDN meets the following criteria: Only alphanumeric characters and hyphens (-) are allowed. For example, underscores are not allowed and can cause DNS failures. The host name must be all lower-case. Specify the IdM domain and realm information. Specify that you want to use integrated DNS by adding the following option: Specify the DNS forwarding settings. Choose one of the following options: Use the ipaserver_auto_forwarders=true option if you want the installer to use forwarders from the /etc/resolv.conf file. Do not use this option if the nameserver specified in the /etc/resolv.conf file is the localhost 127.0.0.1 address or if you are on a virtual private network and the DNS servers you are using are normally unreachable from the public internet. Use the ipaserver_forwarders option to specify your forwarders manually. The installation process adds the forwarder IP addresses to the /etc/named.conf file on the installed IdM server. Use the ipaserver_no_forwarders=true option to configure root DNS servers to be used instead. Note With no DNS forwarders, your environment is isolated, and names from other DNS domains in your infrastructure are not resolved. Specify the DNS reverse record and zone settings. Choose from the following options: Use the ipaserver_allow_zone_overlap=true option to allow the creation of a (reverse) zone even if the zone is already resolvable. Use the ipaserver_reverse_zones option to specify your reverse zones manually. Use the ipaserver_no_reverse=true option if you do not want the installer to create a reverse DNS zone. Note Using IdM to manage reverse zones is optional. You can use an external DNS service for this purpose instead. Specify the passwords for admin and for the Directory Manager . Use the Ansible Vault to store the password, and reference the Vault file from the playbook file. Alternatively and less securely, specify the passwords directly in the inventory file. Optional: Specify a custom firewalld zone to be used by the IdM server. If you do not set a custom zone, IdM will add its services to the default firewalld zone. The predefined default zone is public . Important The specified firewalld zone must exist and be permanent. Example of an inventory file with the required server information (excluding the passwords) Example of an inventory file with the required server information (including the passwords) Example of an inventory file with a custom firewalld zone Example playbook to set up an IdM server using admin and Directory Manager passwords stored in an Ansible Vault file Example playbook to set up an IdM server using admin and Directory Manager passwords from an inventory file Additional resources man ipa-server-install(1) /usr/share/doc/ansible-freeipa/README-server.md 2.5. Setting the parameters for a deployment with external DNS and an integrated CA as the root CA Complete this procedure to configure the inventory file for installing an IdM server with an integrated CA as the root CA in an environment that uses an external DNS solution. Note The inventory file in this procedure uses the INI format. You can, alternatively, use the YAML or JSON formats. Procedure Create a ~/MyPlaybooks/ directory: Create a ~/MyPlaybooks/inventory file. 
Open the inventory file for editing. Specify the fully-qualified domain names ( FQDN ) of the host you want to use as an IdM server. Ensure that the FQDN meets the following criteria: Only alphanumeric characters and hyphens (-) are allowed. For example, underscores are not allowed and can cause DNS failures. The host name must be all lower-case. Specify the IdM domain and realm information. Make sure that the ipaserver_setup_dns option is set to no or that it is absent. Specify the passwords for admin and for the Directory Manager . Use the Ansible Vault to store the password, and reference the Vault file from the playbook file. Alternatively and less securely, specify the passwords directly in the inventory file. Optional: Specify a custom firewalld zone to be used by the IdM server. If you do not set a custom zone, IdM will add its services to the default firewalld zone. The predefined default zone is public . Important The specified firewalld zone must exist and be permanent. Example of an inventory file with the required server information (excluding the passwords) Example of an inventory file with the required server information (including the passwords) Example of an inventory file with a custom firewalld zone Example playbook to set up an IdM server using admin and Directory Manager passwords stored in an Ansible Vault file Example playbook to set up an IdM server using admin and Directory Manager passwords from an inventory file Additional resources man ipa-server-install(1) /usr/share/doc/ansible-freeipa/README-server.md 2.6. Deploying an IdM server with an integrated CA as the root CA using an Ansible playbook Complete this procedure to deploy an IdM server with an integrated certificate authority (CA) as the root CA using an Ansible playbook. Prerequisites The managed node is a Red Hat Enterprise Linux 8 system with a static IP address and a working package manager. You have set the parameters that correspond to your scenario by choosing one of the following procedures: Procedure with integrated DNS Procedure with external DNS Procedure Run the Ansible playbook: Choose one of the following options: If your IdM deployment uses external DNS: add the DNS resource records contained in the /tmp/ipa.system.records.UFRPto.db file to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. If your IdM deployment uses integrated DNS: Add DNS delegation from the parent domain to the IdM DNS domain. For example, if the IdM DNS domain is idm.example.com , add a name server (NS) record to the example.com parent domain. Important Repeat this step each time after an IdM DNS server is installed. Add an _ntp._udp service (SRV) record for your time server to your IdM DNS. The presence of the SRV record for the time server of the newly-installed IdM server in IdM DNS ensures that future replica and client installations are automatically configured to synchronize with the time server used by this primary IdM server. 2.7. Setting the parameters for a deployment with an integrated DNS and an external CA as the root CA Complete this procedure to configure the inventory file for installing an IdM server with an external CA as the root CA in an environment that uses the IdM integrated DNS solution. Note The inventory file in this procedure uses the INI format. You can, alternatively, use the YAML or JSON formats. 
Procedure Create a ~/MyPlaybooks/ directory: Create a ~/MyPlaybooks/inventory file. Open the inventory file for editing. Specify the fully-qualified domain names ( FQDN ) of the host you want to use as an IdM server. Ensure that the FQDN meets the following criteria: Only alphanumeric characters and hyphens (-) are allowed. For example, underscores are not allowed and can cause DNS failures. The host name must be all lower-case. Specify the IdM domain and realm information. Specify that you want to use integrated DNS by adding the following option: Specify the DNS forwarding settings. Choose one of the following options: Use the ipaserver_auto_forwarders=true option if you want the installation process to use forwarders from the /etc/resolv.conf file. This option is not recommended if the nameserver specified in the /etc/resolv.conf file is the localhost 127.0.0.1 address or if you are on a virtual private network and the DNS servers you are using are normally unreachable from the public internet. Use the ipaserver_forwarders option to specify your forwarders manually. The installation process adds the forwarder IP addresses to the /etc/named.conf file on the installed IdM server. Use the ipaserver_no_forwarders=true option to configure root DNS servers to be used instead. Note With no DNS forwarders, your environment is isolated, and names from other DNS domains in your infrastructure are not resolved. Specify the DNS reverse record and zone settings. Choose from the following options: Use the ipaserver_allow_zone_overlap=true option to allow the creation of a (reverse) zone even if the zone is already resolvable. Use the ipaserver_reverse_zones option to specify your reverse zones manually. Use the ipaserver_no_reverse=true option if you do not want the installation process to create a reverse DNS zone. Note Using IdM to manage reverse zones is optional. You can use an external DNS service for this purpose instead. Specify the passwords for admin and for the Directory Manager . Use the Ansible Vault to store the password, and reference the Vault file from the playbook file. Alternatively and less securely, specify the passwords directly in the inventory file. Optional: Specify a custom firewalld zone to be used by the IdM server. If you do not set a custom zone, IdM adds its services to the default firewalld zone. The predefined default zone is public . Important The specified firewalld zone must exist and be permanent. Example of an inventory file with the required server information (excluding the passwords) Example of an inventory file with the required server information (including the passwords) Example of an inventory file with a custom firewalld zone Create a playbook for the first step of the installation. Enter instructions for generating the certificate signing request (CSR) and copying it from the controller to the managed node. Create another playbook for the final step of the installation. Additional resources man ipa-server-install(1) /usr/share/doc/ansible-freeipa/README-server.md 2.8. Setting the parameters for a deployment with external DNS and an external CA as the root CA Complete this procedure to configure the inventory file for installing an IdM server with an external CA as the root CA in an environment that uses an external DNS solution. Note The inventory file in this procedure uses the INI format. You can, alternatively, use the YAML or JSON formats. Procedure Create a ~/MyPlaybooks/ directory: Create a ~/MyPlaybooks/inventory file. 
Open the inventory file for editing. Specify the fully-qualified domain names ( FQDN ) of the host you want to use as an IdM server. Ensure that the FQDN meets the following criteria: Only alphanumeric characters and hyphens (-) are allowed. For example, underscores are not allowed and can cause DNS failures. The host name must be all lower-case. Specify the IdM domain and realm information. Make sure that the ipaserver_setup_dns option is set to no or that it is absent. Specify the passwords for admin and for the Directory Manager . Use the Ansible Vault to store the password, and reference the Vault file from the playbook file. Alternatively and less securely, specify the passwords directly in the inventory file. Optional: Specify a custom firewalld zone to be used by the IdM server. If you do not set a custom zone, IdM will add its services to the default firewalld zone. The predefined default zone is public . Important The specified firewalld zone must exist and be permanent. Example of an inventory file with the required server information (excluding the passwords) Example of an inventory file with the required server information (including the passwords) Example of an inventory file with a custom firewalld zone Create a playbook for the first step of the installation. Enter instructions for generating the certificate signing request (CSR) and copying it from the controller to the managed node. Create another playbook for the final step of the installation. Additional resources Installing an IdM server: Without integrated DNS, with an external CA as the root CA man ipa-server-install(1) /usr/share/doc/ansible-freeipa/README-server.md 2.9. Deploying an IdM server with an external CA as the root CA using an Ansible playbook Complete this procedure to deploy an IdM server with an external certificate authority (CA) as the root CA using an Ansible playbook. Prerequisites The managed node is a Red Hat Enterprise Linux 8 system with a static IP address and a working package manager. You have set the parameters that correspond to your scenario by choosing one of the following procedures: Procedure with integrated DNS Procedure with external DNS Procedure Run the Ansible playbook with the instructions for the first step of the installation, for example install-server-step1.yml : Locate the ipa.csr certificate signing request file on the controller and submit it to the external CA. Place the IdM CA certificate signed by the external CA in the controller file system so that the playbook in the step can find it. Run the Ansible playbook with the instructions for the final step of the installation, for example install-server-step2.yml : Choose one of the following options: If your IdM deployment uses external DNS: add the DNS resource records contained in the /tmp/ipa.system.records.UFRPto.db file to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. If your IdM deployment uses integrated DNS: Add DNS delegation from the parent domain to the IdM DNS domain. For example, if the IdM DNS domain is idm.example.com , add a name server (NS) record to the example.com parent domain. Important Repeat this step each time after an IdM DNS server is installed. Add an _ntp._udp service (SRV) record for your time server to your IdM DNS. 
The presence of the SRV record for the time server of the newly-installed IdM server in IdM DNS ensures that future replica and client installations are automatically configured to synchronize with the time server used by this primary IdM server. 2.10. Uninstalling an IdM server using an Ansible playbook Note In an existing Identity Management (IdM) deployment, replica and server are interchangeable terms. Complete this procedure to uninstall an IdM replica using an Ansible playbook. In this example: IdM configuration is uninstalled from server123.idm.example.com . server123.idm.example.com and the associated host entry are removed from the IdM topology. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. In this example, the FQDN is server123.idm.example.com . You have stored your ipaadmin_password in the secret.yml Ansible vault. For the ipaserver_remove_from_topology option to work, the system must be running on RHEL 8.9 or later. On the managed node: The system is running on RHEL 8. Procedure Create your Ansible playbook file uninstall-server.yml with the following content: The ipaserver_remove_from_domain option unenrolls the host from the IdM topology. Note If the removal of server123.idm.example.com should lead to a disconnected topology, the removal will be aborted. For more information, see Using an Ansible playbook to uninstall an IdM server even if this leads to a disconnected topology . Uninstall the replica: Ensure that all name server (NS) DNS records pointing to server123.idm.example.com are deleted from your DNS zones. This applies regardless of whether you use integrated DNS managed by IdM or external DNS. For more information on how to delete DNS records from IdM, see Deleting DNS records in the IdM CLI . 2.11. Using an Ansible playbook to uninstall an IdM server even if this leads to a disconnected topology Note In an existing Identity Management (IdM) deployment, replica and server are interchangeable terms. Complete this procedure to uninstall an IdM replica using an Ansible playbook even if this results in a disconnected IdM topology. In the example, server456.idm.example.com is used to remove the replica and the associated host entry with the FQDN of server123.idm.example.com from the topology, leaving certain replicas disconnected from server456.idm.example.com and the rest of the topology. Note If removing a replica from the topology using only the remove_server_from_domain does not result in a disconnected topology, no other options are required. If the result is a disconnected topology, you must specify which part of the domain you want to preserve. In that case, you must do the following: Specify the ipaserver_remove_on_server value. Set ipaserver_ignore_topology_disconnect to True. Prerequisites On the control node: You are using Ansible version 2.13 or later. The system is running on RHEL 8.9 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. In this example, the FQDN is server123.idm.example.com . You have stored your ipaadmin_password in the secret.yml Ansible vault. On the managed node: The system is running on 8 or later. 
Procedure Create your Ansible playbook file uninstall-server.yml with the following content: Note Under normal circumstances, if the removal of server123 does not result in a disconnected topology: if the value for ipaserver_remove_on_server is not set, the replica on which server123 is removed is automatically determined using the replication agreements of server123. Uninstall the replica: Ensure that all name server (NS) DNS records pointing to server123.idm.example.com are deleted from your DNS zones. This applies regardless of whether you use integrated DNS managed by IdM or external DNS. For more information on how to delete DNS records from IdM, see Deleting DNS records in the IdM CLI . 2.12. Additional resources Planning the replica topology Backing up and restoring IdM servers using Ansible playbooks Inventory basics: formats, hosts, and groups
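The procedures above reference an Ansible Vault file for the admin and Directory Manager passwords without showing how to create one. A minimal sketch follows, assuming the vault is named secret.yml and that the playbook loads it through vars_files; the variable names mirror the inventory options shown earlier and the passwords are placeholders.

# create an encrypted vault file; you are prompted for a vault password
ansible-vault create secret.yml

# example contents of secret.yml
# ipaadmin_password: MySecretPassword123
# ipadm_password: MySecretPassword234

# run a playbook that loads the vault, supplying the vault password interactively
ansible-playbook --ask-vault-pass -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-server.yml

Alternatively, store the vault password in a file and pass it with --vault-password-file, as the uninstall examples in this chapter do.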
[ "subscription-manager repos --enable ansible-2.8-for-rhel-8-x86_64-rpms", "yum install ansible", "yum install ansible-freeipa", "ls -1 /usr/share/ansible/roles/ ipaclient ipareplica ipaserver", "ls -1 /usr/share/doc/ansible-freeipa/ playbooks README-client.md README.md README-replica.md README-server.md README-topology.md", "ls -1 /usr/share/doc/ansible-freeipa/playbooks/ install-client.yml install-cluster.yml install-replica.yml install-server.yml uninstall-client.yml uninstall-cluster.yml uninstall-replica.yml uninstall-server.yml", "mkdir MyPlaybooks", "ipaserver_setup_dns=true", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone", "--- - name: Playbook to configure IPA server hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipaserver state: present", "--- - name: Playbook to configure IPA server hosts: ipaserver become: true roles: - role: ipaserver state: present", "mkdir MyPlaybooks", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone", "--- - name: Playbook to configure IPA server hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipaserver state: present", "--- - name: Playbook to configure IPA server hosts: ipaserver become: true roles: - role: ipaserver state: present", "ansible-playbook -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-server.yml", "Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server", "mkdir MyPlaybooks", "ipaserver_setup_dns=true", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone [...]", "--- - name: Playbook to configure IPA 
server Step 1 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_ca: true roles: - role: ipaserver state: present post_tasks: - name: Copy CSR /root/ipa.csr from node to \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" fetch: src: /root/ipa.csr dest: \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" flat: true", "--- - name: Playbook to configure IPA server Step 2 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_cert_files: - \"/root/servercert20240601.pem\" - \"/root/cacert.pem\" pre_tasks: - name: Copy \"{{ groups.ipaserver[0] }}-{{ item }}\" to \"/root/{{ item }}\" on node ansible.builtin.copy: src: \"{{ groups.ipaserver[0] }}-{{ item }}\" dest: \"/root/{{ item }}\" force: true with_items: - servercert20240601.pem - cacert.pem roles: - role: ipaserver state: present", "mkdir MyPlaybooks", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]", "[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone [...]", "--- - name: Playbook to configure IPA server Step 1 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_ca: true roles: - role: ipaserver state: present post_tasks: - name: Copy CSR /root/ipa.csr from node to \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" fetch: src: /root/ipa.csr dest: \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" flat: true", "--- - name: Playbook to configure IPA server Step 2 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_cert_files: - \"/root/servercert20240601.pem\" - \"/root/cacert.pem\" pre_tasks: - name: Copy \"{{ groups.ipaserver[0] }}-{{ item }}\" to \"/root/{{ item }}\" on node ansible.builtin.copy: src: \"{{ groups.ipaserver[0] }}-{{ item }}\" dest: \"/root/{{ item }}\" force: true with_items: - servercert20240601.pem - cacert.pem roles: - role: ipaserver state: present", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-server-step1.yml", "ansible-playbook -v -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-server-step2.yml", "Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server", "--- - name: Playbook to uninstall an IdM replica hosts: ipaserver become: true roles: - role: ipaserver ipaserver_remove_from_domain: true state: absent", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/inventory <path_to_playbooks_directory>/uninstall-server.yml", "--- - name: Playbook to uninstall an IdM replica hosts: ipaserver become: true roles: - role: ipaserver ipaserver_remove_from_domain: true ipaserver_remove_on_server: server456.idm.example.com ipaserver_ignore_topology_disconnect: true state: absent", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/uninstall-server.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/installing-an-Identity-Management-server-using-an-Ansible-playbook_using-ansible-to-install-and-manage-idm
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/release_notes/proc_providing-feedback-on-red-hat-documentation
14.6. Configuring the radvd daemon for IPv6 routers
14.6. Configuring the radvd daemon for IPv6 routers The router advertisement daemon ( radvd ) sends router advertisement messages which are required for IPv6 stateless autoconfiguration. This allows users to automatically configure their addresses, settings, routes and choose a default router based on these advertisements. To configure the radvd daemon: Install the radvd daemon: Set up the /etc/radvd.conf file. For example: Note If you want to additionally advertise DNS resolvers along with the router advertisements, add the RDNSS <ip> <ip> <ip> { }; option in the /etc/radvd.conf file. To configure a DHCPv6 service for your subnets, you can set the AdvManagedFlag to on , so the router advertisements allow clients to automatically obtain an IPv6 address when a DHCPv6 service is available. For more details on configuring the DHCPv6 service, see Section 14.5, "DHCP for IPv6 (DHCPv6)" Enable the radvd daemon: Start the radvd daemon immediately: To display the content of router advertisement packages and the configured values sent by the radvd daemon, use the radvdump command: For more information on the radvd daemon, see the radvd(8) , radvd.conf(5) , radvdump(8) man pages.
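Because the procedure above mentions the RDNSS and AdvManagedFlag options without showing them in the sample configuration, the following hedged sketch adds both; the interface name, prefix, and DNS server addresses are placeholders.

interface enp1s0
{
    AdvSendAdvert on;
    # advertise that clients should obtain addresses from a DHCPv6 service
    AdvManagedFlag on;
    prefix 2001:db8:1:0::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
    # advertise recursive DNS servers to clients
    RDNSS 2001:db8:1::53 2001:db8:1::54
    {
    };
};

After editing /etc/radvd.conf, restart the daemon with systemctl restart radvd.service and confirm the advertised values with radvdump.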
[ "~]# sudo yum install radvd", "interface enp1s0 { AdvSendAdvert on; MinRtrAdvInterval 30; MaxRtrAdvInterval 100; prefix 2001:db8:1:0::/64 { AdvOnLink on; AdvAutonomous on; AdvRouterAddr off; }; };", "~]# sudo systemctl enable radvd.service", "~]# sudo systemctl start radvd.service", "~]# radvdump Router advertisement from fe80::280:c8ff:feb9:cef9 (hoplimit 255) AdvCurHopLimit: 64 AdvManagedFlag: off AdvOtherConfigFlag: off AdvHomeAgentFlag: off AdvReachableTime: 0 AdvRetransTimer: 0 Prefix 2002:0102:0304:f101::/64 AdvValidLifetime: 30 AdvPreferredLifetime: 20 AdvOnLink: off AdvAutonomous: on AdvRouterAddr: on Prefix 2001:0db8:100:f101::/64 AdvValidLifetime: 2592000 AdvPreferredLifetime: 604800 AdvOnLink: on AdvAutonomous: on AdvRouterAddr: on AdvSourceLLAddress: 00 80 12 34 56 78" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configuring_the_radvd_daemon_for_IPv6_routers
3.15. RHEA-2011:1622 - new package: python-suds
3.15. RHEA-2011:1622 - new package: python-suds A new python-suds package is now available for Red Hat Enterprise Linux 6. The python-suds package provides a lightweight implementation of the Simple Object Access Protocol (SOAP) for the Python programming environment. This enhancement update adds the python-suds package to Red Hat Enterprise Linux 6. (BZ# 681835 ) All users who require python-suds are advised to install this new package.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/python-suds_z
Chapter 1. Introduction to data security
Chapter 1. Introduction to data security Security is an important concern and should be a strong focus of any Red Hat Ceph Storage deployment. Data breaches and downtime are costly and difficult to manage, laws may require passing audits and compliance processes, and projects have an expectation of a certain level of data privacy and security. This document provides a general introduction to security for Red Hat Ceph Storage, as well as the role of Red Hat in supporting your system's security. 1.1. Preface This document provides advice and good practice information for hardening the security of Red Hat Ceph Storage, with a focus on the Ceph Orchestrator using cephadm for Red Hat Ceph Storage deployments. While following the instructions in this guide will help harden the security of your environment, we do not guarantee security or compliance from following these recommendations. 1.2. Introduction to Red Hat Ceph Storage Red Hat Ceph Storage (RHCS) is a highly scalable and reliable object storage solution, which is typically deployed in conjunction with cloud computing solutions like OpenStack, as a standalone storage service, or as network attached storage using interfaces. All RHCS deployments consist of a storage cluster commonly referred to as the Ceph Storage Cluster or RADOS (Reliable Autonomous Distributed Object Store), which consists of three types of daemons: Ceph Monitors ( ceph-mon ): Ceph monitors provide a few critical functions such as establishing an agreement about the state of the cluster, maintaining a history of the state of the cluster such as whether an OSD is up and running and in the cluster, providing a list of pools through which clients write and read data, and providing authentication for clients and the Ceph Storage Cluster daemons. Ceph Managers ( ceph-mgr ): Ceph manager daemons track the status of peering between copies of placement groups distributed across Ceph OSDs, a history of the placement group states, and metrics about the Ceph cluster. They also provide interfaces for external monitoring and management systems. Ceph OSDs ( ceph-osd ): Ceph Object Storage Daemons (OSDs) store and serve client data, replicate client data to secondary Ceph OSD daemons, track and report to Ceph Monitors on their health and on the health of neighboring OSDs, dynamically recover from failures, and backfill data when the cluster size changes, among other functions. All RHCS deployments store end-user data in the Ceph Storage Cluster or RADOS (Reliable Autonomous Distributed Object Store). Generally, users DO NOT interact with the Ceph Storage Cluster directly; rather, they interact with a Ceph client. There are three primary Ceph Storage Cluster clients: Ceph Object Gateway ( radosgw ): The Ceph Object Gateway, also known as RADOS Gateway, radosgw or rgw provides an object storage service with RESTful APIs. Ceph Object Gateway stores data on behalf of its clients in the Ceph Storage Cluster or RADOS. Ceph Block Device ( rbd ): The Ceph Block Device provides copy-on-write, thin-provisioned, and cloneable virtual block devices to a Linux kernel via Kernel RBD ( krbd ) or to cloud computing solutions like OpenStack via librbd . Ceph File System ( cephfs ): The Ceph File System consists of one or more Metadata Servers ( mds ), which store the inode portion of a file system as objects on the Ceph Storage Cluster. Ceph file systems can be mounted via a kernel client, a FUSE client, or via the libcephfs library for cloud computing solutions like OpenStack.
Additional clients include librados , which enables developers to create custom applications to interact with the Ceph Storage cluster and command line interface clients for administrative purposes. 1.3. Supporting Software An important aspect of Red Hat Ceph Storage security is to deliver solutions that have security built-in upfront, that Red Hat supports over time. Specific steps which Red Hat takes with Red Hat Ceph Storage include: Maintaining upstream relationships and community involvement to help focus on security from the start. Selecting and configuring packages based on their security and performance track records. Building binaries from associated source code (instead of simply accepting upstream builds). Applying a suite of inspection and quality assurance tools to prevent an extensive array of potential security issues and regressions. Digitally signing all released packages and distributing them through cryptographically authenticated distribution channels. Providing a single, unified mechanism for distributing patches and updates. In addition, Red Hat maintains a dedicated security team that analyzes threats and vulnerabilities against our products, and provides relevant advice and updates through the Customer Portal. This team determines which issues are important, as opposed to those that are mostly theoretical problems. The Red Hat Product Security team maintains expertise in, and makes extensive contributions to the upstream communities associated with our subscription products. A key part of the process, Red Hat Security Advisories, deliver proactive notification of security flaws affecting Red Hat solutions, along with patches that are frequently distributed on the same day the vulnerability is first published.
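As a practical complement to the package-signing practices described above, an administrator can verify signatures with standard RPM tooling; the package names below are placeholders, and the checks assume the Red Hat GPG keys are already imported into the RPM database, which is the default on subscribed systems.

# show the GPG signature recorded for an installed package
rpm -qi ceph-common | grep -i signature

# verify the digests and signature of a downloaded package file
rpm -K ceph-common-<version>.rpm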
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/data_security_and_hardening_guide/introduction-to-data-security
Chapter 4. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift
Chapter 4. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster. 4.1. Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Platform deployment Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment. You can create labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field. For example, the Block Storage service (cinder) has different requirements for each of its services: The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage. The cinder-api service has high network usage due to resource listing requests. The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image. The cinder-backup service has high memory, network, and CPU requirements. Therefore, you can pin the cinder-api , cinder-volume , and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity. Additional resources Placing pods on specific nodes using node selectors Machine configuration overview Node Feature Discovery Operator 4.2. Creating a storage class You must create a storage class for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end to provide persistent volumes to Red Hat OpenStack Services on OpenShift (RHOSO) pods. Use the Logical Volume Manager (LVM) Storage storage class with RHOSO. You specify this storage class as the cluster storage back end for the RHOSO deployment. Use a storage back end based on SSD or NVMe drives for the storage class. If you are using LVM, you must wait until the LVM Storage Operator announces that the storage is available before creating the control plane. The LVM Storage Operator announces that the cluster and LVMS storage configuration is complete through the annotation for the volume group to the worker node object. If you deploy pods before all the control plane nodes are ready, then multiple PVCs and pods are scheduled on the same nodes. To check that the storage is ready, you can query the nodes in your lvmclusters.lvm.topolvm.io object. For example, run the following command if you have three worker nodes and your device class for the LVM Storage Operator is named "local-storage": The storage is ready when this command returns three non-zero values For more information about how to configure the LVM Storage storage class, see Persistent storage using Logical Volume Manager Storage in the RHOCP Storage guide. 4.3. 
Creating the openstack namespace You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment. Prerequisites You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges. Procedure Create the openstack project for the deployed RHOSO environment: Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators: If the security context constraint (SCC) is not "privileged", use the following commands to change it: Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack : 4.4. Providing secure access to the Red Hat OpenStack Services on OpenShift services You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. Warning You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage. Procedure Create a Secret CR file on your workstation, for example, openstack_service_secret.yaml . Add the following initial configuration to openstack_service_secret.yaml : Replace <base64_password> with a 32-character key that is base64 encoded. You can use the following command to manually generate a base64 encoded password: Alternatively, if you are using a Linux workstation and you are generating the Secret CR definition file by using a Bash command such as cat , you can replace <base64_password> with the following command to auto-generate random passwords for each service: Replace the <base64_fernet_key> with a fernet key that is base64 encoded. You can use the following command to manually generate the fernet key: Note The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32. Create the Secret CR in the cluster: Verify that the Secret CR is created:
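As a quick check, assuming the default secret name osp-secret and the openstack namespace used above, the verification might look like the following; the oc get command is an added convenience rather than part of the documented procedure:

oc get secret osp-secret -n openstack
oc describe secret osp-secret -n openstack

The describe output lists the password keys, such as AdminPassword and KeystoneDatabasePassword, together with their byte sizes, but it does not print the base64-encoded values themselves.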
[ "oc get node -l \"topology.topolvm.io/node in (USD(oc get nodes -l node-role.kubernetes.io/worker -o name | cut -d '/' -f 2 | tr '\\n' ',' | sed 's/.\\{1\\}USD//'))\" -o=jsonpath='{.items[*].metadata.annotations.capacity\\.topolvm\\.io/local-storage}' | tr ' ' '\\n'", "oc new-project openstack", "oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq { \"kubernetes.io/metadata.name\": \"openstack\", \"pod-security.kubernetes.io/enforce\": \"privileged\", \"security.openshift.io/scc.podSecurityLabelSync\": \"false\" }", "oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite", "oc project openstack", "apiVersion: v1 data: AdminPassword: <base64_password> AodhPassword: <base64_password> AodhDatabasePassword: <base64_password> BarbicanDatabasePassword: <base64_password> BarbicanPassword: <base64_password> BarbicanSimpleCryptoKEK: <base64_fernet_key> CeilometerPassword: <base64_password> CinderDatabasePassword: <base64_password> CinderPassword: <base64_password> DatabasePassword: <base64_password> DbRootPassword: <base64_password> DesignateDatabasePassword: <base64_password> DesignatePassword: <base64_password> GlanceDatabasePassword: <base64_password> GlancePassword: <base64_password> HeatAuthEncryptionKey: <base64_password> HeatDatabasePassword: <base64_password> HeatPassword: <base64_password> IronicDatabasePassword: <base64_password> IronicInspectorDatabasePassword: <base64_password> IronicInspectorPassword: <base64_password> IronicPassword: <base64_password> KeystoneDatabasePassword: <base64_password> ManilaDatabasePassword: <base64_password> ManilaPassword: <base64_password> MetadataSecret: <base64_password> NeutronDatabasePassword: <base64_password> NeutronPassword: <base64_password> NovaAPIDatabasePassword: <base64_password> NovaAPIMessageBusPassword: <base64_password> NovaCell0DatabasePassword: <base64_password> NovaCell0MessageBusPassword: <base64_password> NovaCell1DatabasePassword: <base64_password> NovaCell1MessageBusPassword: <base64_password> NovaPassword: <base64_password> OctaviaDatabasePassword: <base64_password> OctaviaPassword: <base64_password> PlacementDatabasePassword: <base64_password> PlacementPassword: <base64_password> SwiftPassword: <base64_password> kind: Secret metadata: name: osp-secret namespace: openstack type: Opaque", "echo -n <password> | base64", "USD(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)", "python3 -c \"from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))\" | base64", "oc create -f openstack_service_secret.yaml -n openstack", "oc describe secret osp-secret -n openstack" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_dynamic_routing_environment/assembly_preparing-rhocp-for-rhoso
Chapter 10. Installing a cluster on AWS into a government region
Chapter 10. Installing a cluster on AWS into a government region In OpenShift Container Platform version 4.12, you can install a cluster on Amazon Web Services (AWS) into a government region. To configure the region, modify parameters in the install-config.yaml file before you install the cluster. 10.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 10.2. AWS government regions OpenShift Container Platform supports deploying a cluster to an AWS GovCloud (US) region. The following AWS GovCloud partitions are supported: us-gov-east-1 us-gov-west-1 10.3. Installation requirements Before you can install the cluster, you must: Provide an existing private AWS VPC and subnets to host the cluster. Public zones are not supported in Route 53 in AWS GovCloud. As a result, clusters must be private when you deploy to an AWS government region. Manually create the installation configuration file ( install-config.yaml ). 10.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. Note Public zones are not supported in Route 53 in an AWS GovCloud Region. Therefore, clusters must be private if they are deployed to an AWS GovCloud Region. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 10.4.1. 
Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 10.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 10.5. About using a custom VPC In OpenShift Container Platform 4.12, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 10.5.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. 
The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. 
Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 10.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 10.5.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 10.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 10.6. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 10.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following command to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 10.8. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace worker nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS region. When creating the installation configuration file, ensure that you select the same AWS region that you specified when configuring your subscription. 10.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 10.10. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. 
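Before you start, it can help to see the shape of the file handling that the following procedure describes. This is a minimal sketch; the backup file name is arbitrary, and the template itself still has to be customized as described below:

# Create a dedicated directory for the installation assets; do not reuse one from a previous cluster.
mkdir <installation_directory>

# The customized configuration file must be named install-config.yaml and placed in that directory.
cp install-config.yaml <installation_directory>/install-config.yaml

# Keep a copy outside the directory, because the installation program consumes the file in a later step.
cp install-config.yaml install-config.yaml.backup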
Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 10.10.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 10.10.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 10.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. 
{ "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 10.10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 10.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 10.10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 10.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. 
You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 
Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings platform.aws.lbType Required to set the NLB load balancer type in AWS. Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 10.10.1.4. 
Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 10.4. Optional AWS parameters Parameter Description Values compute.platform.aws.amiID The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. compute.platform.aws.iamRole A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. compute.platform.aws.rootVolume.iops The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . compute.platform.aws.rootVolume.size The size in GiB of the root volume. Integer, for example 500 . compute.platform.aws.rootVolume.type The type of the root volume. Valid AWS EBS volume type , such as io1 . compute.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . compute.platform.aws.type The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. compute.platform.aws.zones The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . compute.aws.region The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. controlPlane.platform.aws.amiID The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. controlPlane.platform.aws.iamRole A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. controlPlane.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . controlPlane.platform.aws.type The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. 
controlPlane.platform.aws.zones The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . controlPlane.aws.region The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . platform.aws.amiID The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. platform.aws.hostedZone An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . platform.aws.serviceEndpoints.name The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name. platform.aws.serviceEndpoints.url The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. Valid AWS service endpoint URL. platform.aws.userTags A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. platform.aws.propagateUserTags A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . platform.aws.subnets If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. Valid subnet IDs. 10.10.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 10.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. 
Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 10.10.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 10.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 10.10.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) ARM64 instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 10.2. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 10.10.5. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{"auths": ...}' 23 1 12 14 23 Required. 
2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 10.10.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: $ ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 10.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: $ ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. An example AWS CLI sketch for this step appears after the command listing at the end of this section. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time.
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 10.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH Verification After you install the OpenShift CLI, it is available using the oc command: $ oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: $ echo $PATH Verification After you install the OpenShift CLI, it is available using the oc command: $ oc <command> 10.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin 10.14. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: $ cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: $ oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 10.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 10.16. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
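As a quick recap of the login steps in sections 10.13 and 10.14, the following post-install check is an illustrative sketch rather than part of the official procedure. It assumes the installation directory is ./install_dir and that the oc binary is already on your PATH:
$ export KUBECONFIG=./install_dir/auth/kubeconfig
$ oc whoami                                    # expect: system:admin
$ oc get clusteroperators                      # confirm that all cluster Operators report Available
$ oc whoami --show-console                     # print the web console URL
$ cat ./install_dir/auth/kubeadmin-password    # password for the kubeadmin web console login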
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
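The optional post-install AWS steps mentioned above can be performed with the AWS CLI. The following is an illustrative sketch, not an official procedure; the IAM user name ocp-installer and the instance ID are placeholders that you must replace with your own values:
# Detach the AdministratorAccess policy that was needed only during installation (section 10.11)
$ aws iam detach-user-policy --user-name ocp-installer --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
$ aws iam list-attached-user-policies --user-name ocp-installer
# Require IMDSv2 on an existing control plane instance after installation (callouts 7 and 11);
# the sample configuration notes that this change can only be made with the AWS CLI
$ aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 --http-tokens required --http-endpoint enabled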
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_aws/installing-aws-government-region
Chapter 45. Storage
Chapter 45. Storage LVM RAID-level takeover is now available RAID-level takeover, the ability to switch between RAID types, is now available as a Technology Preview. With RAID-level takeover, the user can decide based on their changing hardware characteristics what type of RAID configuration best suits their needs and make the change without having to deactivate the logical volume. For example, if a striped logical volume is created, it can be later converted to a RAID4 logical volume if an additional device is available. Starting with Red Hat Enterprise Linux 7.3, the following conversions are available as a Technology Preview: striped <-> RAID4 linear <-> RAID1 mirror <-> RAID1 (mirror is a legacy type, but still supported) (BZ# 1191630 ) Multi-queue I/O scheduling for SCSI Red Hat Enterprise Linux 7 includes a new multiple-queue I/O scheduling mechanism for block devices known as blk-mq. The scsi-mq package allows the Small Computer System Interface (SCSI) subsystem to make use of this new queuing mechanism. This functionality is provided as a Technology Preview and is not enabled by default. To enable it, add scsi_mod.use_blk_mq=Y to the kernel command line. (BZ#1109348) Targetd plug-in from the libStorageMgmt API Since Red Hat Enterprise Linux 7.1, storage array management with libStorageMgmt, a storage array independent API, has been fully supported. The provided API is stable, consistent, and allows developers to programmatically manage different storage arrays and utilize the hardware-accelerated features provided. System administrators can also use libStorageMgmt to manually configure storage and to automate storage management tasks with the included command-line interface. The Targetd plug-in is not fully supported and remains a Technology Preview. (BZ#1119909) Support for Data Integrity Field/Data Integrity Extension (DIF/DIX) DIF/DIX is a new addition to the SCSI Standard. It is fully supported in Red Hat Enterprise Linux 7.3 for the HBAs and storage arrays specified in the Features chapter, but it remains in Technology Preview for all other HBAs and storage arrays. DIF/DIX increases the size of the commonly used 512 byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receipt, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can be verified by the storage device, and by the receiving HBA. (BZ#1072107)
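To illustrate the RAID-level takeover conversions listed above, the following shell sketch shows a striped logical volume being converted to RAID4. It is an example only: the volume group vg00, the logical volume stripe_lv, and the sizes are placeholders, and converting to raid4 requires an additional physical volume in the volume group to hold parity. The final grubby line is one way to persistently add the blk-mq kernel parameter described above.
# Create a two-stripe LV, then take it over to RAID4 once a spare PV is available
$ lvcreate --type striped -i 2 -L 10G -n stripe_lv vg00
$ lvconvert --type raid4 vg00/stripe_lv
$ lvs -o lv_name,segtype vg00        # verify that the segment type is now raid4
# Enable the scsi-mq Technology Preview by adding the parameter to the kernel command line
$ grubby --update-kernel=ALL --args="scsi_mod.use_blk_mq=Y"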
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/technology_previews_storage
Chapter 7. DaemonSet [apps/v1]
Chapter 7. DaemonSet [apps/v1] Description DaemonSet represents the configuration of a daemon set. Type object 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object DaemonSetSpec is the specification of a daemon set. status object DaemonSetStatus represents the current status of a daemon set. 7.1.1. .spec Description DaemonSetSpec is the specification of a daemon set. Type object Required selector template Property Type Description minReadySeconds integer The minimum number of seconds for which a newly created DaemonSet pod should be ready without any of its containers crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready). revisionHistoryLimit integer The number of old history to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector LabelSelector A label query over pods that are managed by the daemon set. Must match in order to be controlled. It must match the pod template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template PodTemplateSpec An object that describes the pod that will be created. The DaemonSet will create exactly one copy of this pod on every node that matches the template's node selector (or on every node if no node selector is specified). More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template updateStrategy object DaemonSetUpdateStrategy is a struct used to control the update strategy for a DaemonSet. 7.1.2. .spec.updateStrategy Description DaemonSetUpdateStrategy is a struct used to control the update strategy for a DaemonSet. Type object Property Type Description rollingUpdate object Spec to control the desired behavior of daemon set rolling update. type string Type of daemon set update. Can be "RollingUpdate" or "OnDelete". Default is RollingUpdate. Possible enum values: - "OnDelete" Replace the old daemons only when it's killed - "RollingUpdate" Replace the old daemons by new ones using rolling update, i.e., replace them on each node one after the other. 7.1.3. .spec.updateStrategy.rollingUpdate Description Spec to control the desired behavior of daemon set rolling update. Type object Property Type Description maxSurge IntOrString The maximum number of nodes with an existing available DaemonSet pod that can have an updated DaemonSet pod during an update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This cannot be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up to a minimum of 1. Default value is 0.
Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have a new pod created before the old pod is marked as deleted. The update starts by launching new pods on 30% of nodes. Once an updated pod is available (Ready for at least minReadySeconds), the old DaemonSet pod on that node is marked deleted. If the old pod becomes unavailable for any reason (Ready transitions to false, is evicted, or is drained), an updated pod is immediately created on that node without considering surge limits. Allowing surge implies the possibility that the resources consumed by the daemonset on any given node can double if the readiness check fails, and so resource intensive daemonsets should take into account that they may cause evictions during disruption. maxUnavailable IntOrString The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0 if MaxSurge is 0. Default value is 1. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their pods stopped for an update at any given time. The update starts by stopping at most 30% of those DaemonSet pods and then brings up new DaemonSet pods in their place. Once the new pods are available, it then proceeds onto other DaemonSet pods, thus ensuring that at least 70% of the original number of DaemonSet pods are available at all times during the update. 7.1.4. .status Description DaemonSetStatus represents the current status of a daemon set. Type object Required currentNumberScheduled numberMisscheduled desiredNumberScheduled numberReady Property Type Description collisionCount integer Count of hash collisions for the DaemonSet. The DaemonSet controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ControllerRevision. conditions array Represents the latest available observations of a DaemonSet's current state. conditions[] object DaemonSetCondition describes the state of a DaemonSet at a certain point. currentNumberScheduled integer The number of nodes that are running at least 1 daemon pod and are supposed to run the daemon pod. More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ desiredNumberScheduled integer The total number of nodes that should be running the daemon pod (including nodes correctly running the daemon pod). More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ numberAvailable integer The number of nodes that should be running the daemon pod and have one or more of the daemon pod running and available (ready for at least spec.minReadySeconds) numberMisscheduled integer The number of nodes that are running the daemon pod, but are not supposed to run the daemon pod. More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ numberReady integer numberReady is the number of nodes that should be running the daemon pod and have one or more of the daemon pod running with a Ready Condition.
numberUnavailable integer The number of nodes that should be running the daemon pod and have none of the daemon pod running and available (ready for at least spec.minReadySeconds) observedGeneration integer The most recent generation observed by the daemon set controller. updatedNumberScheduled integer The total number of nodes that are running updated daemon pod 7.1.5. .status.conditions Description Represents the latest available observations of a DaemonSet's current state. Type array 7.1.6. .status.conditions[] Description DaemonSetCondition describes the state of a DaemonSet at a certain point. Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of DaemonSet condition. 7.2. API endpoints The following API endpoints are available: /apis/apps/v1/daemonsets GET : list or watch objects of kind DaemonSet /apis/apps/v1/watch/daemonsets GET : watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/daemonsets DELETE : delete collection of DaemonSet GET : list or watch objects of kind DaemonSet POST : create a DaemonSet /apis/apps/v1/watch/namespaces/{namespace}/daemonsets GET : watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} DELETE : delete a DaemonSet GET : read the specified DaemonSet PATCH : partially update the specified DaemonSet PUT : replace the specified DaemonSet /apis/apps/v1/watch/namespaces/{namespace}/daemonsets/{name} GET : watch changes to an object of kind DaemonSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status GET : read status of the specified DaemonSet PATCH : partially update status of the specified DaemonSet PUT : replace status of the specified DaemonSet 7.2.1. /apis/apps/v1/daemonsets Table 7.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind DaemonSet Table 7.2. HTTP responses HTTP code Reponse body 200 - OK DaemonSetList schema 401 - Unauthorized Empty 7.2.2. /apis/apps/v1/watch/daemonsets Table 7.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. Table 7.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/apps/v1/namespaces/{namespace}/daemonsets Table 7.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of DaemonSet Table 7.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 7.8. Body parameters Parameter Type Description body DeleteOptions schema Table 7.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind DaemonSet Table 7.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 7.11. HTTP responses HTTP code Reponse body 200 - OK DaemonSetList schema 401 - Unauthorized Empty HTTP method POST Description create a DaemonSet Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body DaemonSet schema Table 7.14. HTTP responses HTTP code Reponse body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 202 - Accepted DaemonSet schema 401 - Unauthorized Empty 7.2.4. /apis/apps/v1/watch/namespaces/{namespace}/daemonsets Table 7.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. Table 7.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.5. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} Table 7.18. Global path parameters Parameter Type Description name string name of the DaemonSet namespace string object name and auth scope, such as for teams and projects Table 7.19. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a DaemonSet Table 7.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.21. Body parameters Parameter Type Description body DeleteOptions schema Table 7.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DaemonSet Table 7.23. HTTP responses HTTP code Reponse body 200 - OK DaemonSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DaemonSet Table 7.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.25. Body parameters Parameter Type Description body Patch schema Table 7.26. HTTP responses HTTP code Reponse body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DaemonSet Table 7.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.28. Body parameters Parameter Type Description body DaemonSet schema Table 7.29. HTTP responses HTTP code Reponse body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty 7.2.6. /apis/apps/v1/watch/namespaces/{namespace}/daemonsets/{name} Table 7.30. Global path parameters Parameter Type Description name string name of the DaemonSet namespace string object name and auth scope, such as for teams and projects Table 7.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call.
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind DaemonSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.7. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status Table 7.33. Global path parameters Parameter Type Description name string name of the DaemonSet namespace string object name and auth scope, such as for teams and projects Table 7.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified DaemonSet Table 7.35. HTTP responses HTTP code Response body 200 - OK DaemonSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DaemonSet Table 7.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.37. Body parameters Parameter Type Description body Patch schema Table 7.38. HTTP responses HTTP code Response body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DaemonSet Table 7.39.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.40. Body parameters Parameter Type Description body DaemonSet schema Table 7.41. HTTP responses HTTP code Response body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty
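The tables above describe the raw REST operations on a DaemonSet. The following is a minimal sketch of exercising the PATCH endpoint directly with curl; the API server URL, namespace, DaemonSet name, and fieldManager value are hypothetical placeholders, and the dryRun=All query parameter corresponds to the dry run behavior described in the query parameter tables, so nothing is persisted.

TOKEN=$(oc whoami -t)                        # assumes an active oc session
API_SERVER=https://api.example.com:6443      # hypothetical API server URL

# Strategic merge patch of a DaemonSet, validated server-side but not persisted (dryRun=All)
curl -k -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  "${API_SERVER}/apis/apps/v1/namespaces/example/daemonsets/example-ds?dryRun=All&fieldManager=docs-example" \
  -d '{"spec":{"template":{"metadata":{"labels":{"patched-by":"docs-example"}}}}}'

Dropping the dryRun=All parameter applies the patch for real; a 200 response returns the resulting DaemonSet, as listed in the HTTP responses tables.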
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/workloads_apis/daemonset-apps-v1
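Because the dedicated watch endpoint in section 7.2.6 is deprecated, the GET description recommends using the watch parameter on the list endpoint, filtered with fieldSelector. A minimal sketch, reusing the hypothetical TOKEN, API_SERVER, namespace, and DaemonSet name from the PATCH example above:

# Stream watch events for a single DaemonSet by filtering the list endpoint on its name
curl -k -N \
  -H "Authorization: Bearer ${TOKEN}" \
  "${API_SERVER}/apis/apps/v1/namespaces/example/daemonsets?watch=true&fieldSelector=metadata.name%3Dexample-ds"

The -N flag disables curl output buffering so that WatchEvent objects are printed as they arrive.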
Chapter 2. Understanding networking
Chapter 2. Understanding networking Cluster Administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections: Service types, such as node ports or load balancers API resources, such as Ingress and Route By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can network, but clients outside the cluster do not have networking access. When you expose your application to external traffic, giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration. Note Some cloud platforms offer metadata APIs that listen on the 169.254.169.254 IP address, a link-local IP address in the IPv4 169.254.0.0/16 CIDR block. This CIDR block is not reachable from the pod network. Pods that need access to these IP addresses must be given host network access by setting the spec.hostNetwork field in the pod spec to true . If you allow a pod host network access, you grant the pod privileged access to the underlying network infrastructure. 2.1. OpenShift Container Platform DNS If you are running multiple services, such as front-end and back-end services for use with multiple pods, environment variables are created for user names, service IPs, and more so the front-end pods can communicate with the back-end services. If the service is deleted and recreated, a new IP address can be assigned to the service, and requires the front-end pods to be recreated to pick up the updated values for the service IP environment variable. Additionally, the back-end service must be created before any of the front-end pods to ensure that the service IP is generated properly, and that it can be provided to the front-end pods as an environment variable. For this reason, OpenShift Container Platform has a built-in DNS so that the services can be reached by the service DNS as well as the service IP/port. 2.2. OpenShift Container Platform Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints. 2.2.1. Comparing routes and Ingress The Kubernetes Ingress resource in OpenShift Container Platform implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. The most common way to manage Ingress traffic is with the Ingress Controller. You can scale and replicate this pod like any other regular pod. This router service is based on HAProxy , which is an open source load balancer solution. 
The OpenShift Container Platform route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. Ingress traffic accesses services in the cluster through a route. Routes and Ingress are the main resources for handling Ingress traffic. Ingress provides features similar to a route, such as accepting external requests and delegating them based on the route. However, with Ingress you can only allow certain types of connections: HTTP/2, HTTPS and server name identification (SNI), and TLS with certificate. In OpenShift Container Platform, routes are generated to meet the conditions specified by the Ingress resource. 2.3. Glossary of common terms for OpenShift Container Platform networking This glossary defines common terms that are used in the networking content. authentication To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, you must authenticate to the OpenShift Container Platform API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. AWS Load Balancer Operator The AWS Load Balancer (ALB) Operator deploys and manages an instance of the aws-load-balancer-controller . Cluster Network Operator The Cluster Network Operator (CNO) deploys and manages the cluster network components in an OpenShift Container Platform cluster. This includes deployment of the Container Network Interface (CNI) network plugin selected for the cluster during installation. config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. custom resource (CR) A CR is extension of the Kubernetes API. You can create custom resources. DNS Cluster DNS is a DNS server which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. DNS Operator The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods. This enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. deployment A Kubernetes resource object that maintains the life cycle of an application. domain Domain is a DNS name serviced by the Ingress Controller. egress The process of data sharing externally through a network's outbound traffic from a pod. External DNS Operator The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform. HTTP-based route An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. Ingress The Kubernetes Ingress resource in OpenShift Container Platform implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. Ingress Controller The Ingress Operator manages Ingress Controllers. Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster. 
installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. Kubernetes NMState Operator The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster's nodes with NMState. kube-proxy Kube-proxy is a proxy service which runs on each node and helps in making services available to the external host. It helps in forwarding the request to correct containers and is capable of performing primitive load balancing. load balancers OpenShift Container Platform uses load balancers for communicating from outside the cluster with services running in the cluster. MetalLB Operator As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add an external IP address for the service. multicast With IP multicast, data is broadcast to many IP addresses simultaneously. namespaces A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. networking Network information of a OpenShift Container Platform cluster. node A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OpenShift Container Platform Ingress Operator The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform services. pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. PTP Operator The PTP Operator creates and manages the linuxptp services. route The OpenShift Container Platform route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. scaling Increasing or decreasing the resource capacity. service Exposes a running application on a set of pods. Single Root I/O Virtualization (SR-IOV) Network Operator The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster. software-defined networking (SDN) OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. Stream Control Transmission Protocol (SCTP) SCTP is a reliable message based protocol that runs on top of an IP network. taint Taints and tolerations ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node. toleration You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints. web console A user interface (UI) to manage OpenShift Container Platform.
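To make the route and Ingress discussion above concrete, the following is a minimal sketch of an OpenShift Container Platform Route that exposes a service; the project, service name, and port are hypothetical placeholders.

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend                 # hypothetical route name
  namespace: my-app              # hypothetical project
spec:
  to:
    kind: Service
    name: frontend               # hypothetical service backing the application
  port:
    targetPort: 8080             # port exposed by the service
  tls:
    termination: edge            # terminate TLS at the router, plain HTTP to the pod

Omitting the tls stanza produces an unsecured HTTP-based route as defined in the glossary; termination: passthrough and termination: reencrypt configure the TLS passthrough and re-encryption behavior mentioned earlier in this chapter.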
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/understanding-networking
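The note earlier in this chapter explains that pods needing the 169.254.169.254 metadata APIs must set spec.hostNetwork to true. A minimal sketch of such a pod follows; the pod name, project, and image are hypothetical, and the pod must also be admitted by a security context constraint that allows host networking.

apiVersion: v1
kind: Pod
metadata:
  name: metadata-client                        # hypothetical name
  namespace: my-app                            # hypothetical project
spec:
  hostNetwork: true                            # pod shares the node's network namespace
  containers:
  - name: client
    image: registry.example.com/tools:latest   # hypothetical image
    command: ["sleep", "infinity"]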
Chapter 2. Installing
Chapter 2. Installing Installing the Red Hat build of OpenTelemetry involves the following steps: Installing the Red Hat build of OpenTelemetry Operator. Creating a namespace for an OpenTelemetry Collector instance. Creating an OpenTelemetryCollector custom resource to deploy the OpenTelemetry Collector instance. 2.1. Installing the Red Hat build of OpenTelemetry from the web console You can install the Red Hat build of OpenTelemetry from the Administrator view of the web console. Prerequisites You are logged in to the web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Install the Red Hat build of OpenTelemetry Operator: Go to Operators OperatorHub and search for Red Hat build of OpenTelemetry Operator . Select the Red Hat build of OpenTelemetry Operator that is provided by Red Hat Install Install View Operator . Important This installs the Operator with the default presets: Update channel stable Installation mode All namespaces on the cluster Installed Namespace openshift-operators Update approval Automatic In the Details tab of the installed Operator page, under ClusterServiceVersion details , verify that the installation Status is Succeeded . Create a project of your choice for the OpenTelemetry Collector instance that you will create in the step by going to Home Projects Create Project . Create an OpenTelemetry Collector instance. Go to Operators Installed Operators . Select OpenTelemetry Collector Create OpenTelemetry Collector YAML view . In the YAML view , customize the OpenTelemetryCollector custom resource (CR): Example OpenTelemetryCollector CR apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug] 1 For details, see the "Receivers" page. 2 For details, see the "Processors" page. 3 For details, see the "Exporters" page. Select Create . Verification Use the Project: dropdown list to select the project of the OpenTelemetry Collector instance. Go to Operators Installed Operators to verify that the Status of the OpenTelemetry Collector instance is Condition: Ready . Go to Workloads Pods to verify that all the component pods of the OpenTelemetry Collector instance are running. 2.2. Installing the Red Hat build of OpenTelemetry by using the CLI You can install the Red Hat build of OpenTelemetry from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. 
Run oc login : USD oc login --username=<your_username> Procedure Install the Red Hat build of OpenTelemetry Operator: Create a project for the Red Hat build of OpenTelemetry Operator by running the following command: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-opentelemetry-operator openshift.io/cluster-monitoring: "true" name: openshift-opentelemetry-operator EOF Create an Operator group by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-opentelemetry-operator namespace: openshift-opentelemetry-operator spec: upgradeStrategy: Default EOF Create a subscription by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: opentelemetry-product namespace: openshift-opentelemetry-operator spec: channel: stable installPlanApproval: Automatic name: opentelemetry-product source: redhat-operators sourceNamespace: openshift-marketplace EOF Check the Operator status by running the following command: USD oc get csv -n openshift-opentelemetry-operator Create a project of your choice for the OpenTelemetry Collector instance that you will create in a subsequent step: To create a project without metadata, run the following command: USD oc new-project <project_of_opentelemetry_collector_instance> To create a project with metadata, run the following command: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_opentelemetry_collector_instance> EOF Create an OpenTelemetry Collector instance in the project that you created for it. Note You can create multiple OpenTelemetry Collector instances in separate projects on the same cluster. Customize the OpenTelemetryCollector custom resource (CR): Example OpenTelemetryCollector CR apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug] 1 For details, see the "Receivers" page. 2 For details, see the "Processors" page. 3 For details, see the "Exporters" page. Apply the customized CR by running the following command: USD oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF Verification Verify that the status.phase of the OpenTelemetry Collector pod is Running and the conditions are type: Ready by running the following command: USD oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml Get the OpenTelemetry Collector service by running the following command: USD oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> 2.3. Using taints and tolerations To schedule the OpenTelemetry pods on dedicated nodes, see How to deploy the different OpenTelemetry components on infra nodes using nodeSelector and tolerations in OpenShift 4 2.4. 
Creating the required RBAC resources automatically Some Collector components require configuring the RBAC resources. Procedure Add the following permissions to the opentelemetry-operator-controller-manage service account so that the Red Hat build of OpenTelemetry Operator can create them automatically: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator 2.5. Additional resources Creating a cluster admin OperatorHub.io Accessing the web console Installing from OperatorHub using the web console Creating applications from installed Operators Getting started with the OpenShift CLI
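As a quick smoke test of the Collector deployed in this chapter, you can send a single span to its OTLP/HTTP receiver. This is a minimal sketch that assumes the OpenTelemetryCollector CR is named otel, so the Operator creates a service named otel-collector, and that the OTLP/HTTP receiver listens on its default port 4318; the trace and span IDs are arbitrary example values.

# Forward the OTLP/HTTP port of the Collector service to localhost
oc -n <project_of_opentelemetry_collector_instance> port-forward svc/otel-collector 4318:4318 &

# Send one span; with the debug exporter configured, it appears in the Collector pod logs
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"smoke-test"}}]},"scopeSpans":[{"spans":[{"traceId":"5b8efff798038103d269b633813fc60c","spanId":"eee19b7ec3c1b174","name":"test-span","kind":1,"startTimeUnixNano":"1700000000000000000","endTimeUnixNano":"1700000001000000000"}]}]}]}'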
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/red_hat_build_of_opentelemetry/install-otel
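The example OpenTelemetryCollector CR in this chapter sends traces only to the debug exporter. To forward traces to an external OTLP-compatible backend instead, the exporters and pipeline sections can be adjusted roughly as in the following fragment; the endpoint is a hypothetical placeholder, and insecure TLS is shown only to keep the sketch short.

    config:
      exporters:
        otlp:
          endpoint: tempo-example.observability.svc.cluster.local:4317   # hypothetical backend
          tls:
            insecure: true        # for illustration only; configure certificates in practice
      service:
        pipelines:
          traces:
            receivers: [otlp, jaeger, zipkin]
            processors: [memory_limiter, batch]
            exporters: [otlp]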
7.159. perl-Sys-Virt
7.159. perl-Sys-Virt 7.159.1. RHBA-2015:1387 - perl-Sys-Virt bug fix update Updated perl-Sys-Virt packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The Sys::Virt module provides a Perl XS binding to the libvirt virtual machine management APIs. This allows machines running within arbitrary virtualization containers to be managed with a consistent API. Bug Fixes BZ# 905836 Previously, using the libvirt-tck utility to display virtual CPU (VCPU) information only printed a part of the expected diagnostics. With this update, the get_vcpu_info() function handles VCPU flags properly, and libvirt-tck displays the full extent of the expected information. BZ# 908274 Prior to this update, using the libvirt-tck utility to find the parent device of a node device with no parent incorrectly returned a "libvirt error code: 0" error message. Now, it is valid for the virNodeDeviceGetParent() function to return NULL if the parent device is nonexistent, and the error message is no longer displayed. Users of perl-Sys-Virt are advised to upgrade to these updated packages, which fix these bugs.
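For reference, the fixed behaviors can be exercised from Perl roughly as follows. This is an untested sketch against the documented Sys::Virt API; the connection URI, domain name, and node device name are hypothetical placeholders.

use strict;
use warnings;
use Sys::Virt;

# Connect to the local hypervisor
my $conn = Sys::Virt->new(uri => 'qemu:///system');

# VCPU details, including the flags handled correctly by the fixed get_vcpu_info()
my $dom = $conn->get_domain_by_name('example-guest');
for my $vcpu ($dom->get_vcpu_info()) {
    printf "vcpu %d state %d on host cpu %d\n", $vcpu->{number}, $vcpu->{state}, $vcpu->{cpu};
}

# A node device without a parent now yields an undefined value instead of a spurious error
my $dev = $conn->get_node_device_by_name('computer');
my $parent = $dev->get_parent();
print defined $parent ? "parent: $parent\n" : "device has no parent\n";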
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-perl-sys-virt
Chapter 9. Upgrading RHACS Cloud Service
Chapter 9. Upgrading RHACS Cloud Service 9.1. Upgrading secured clusters in RHACS Cloud Service by using the Operator Red Hat provides regular service updates for the components that it manages, including Central services. These service updates include upgrades to new versions of Red Hat Advanced Cluster Security Cloud Service. You must regularly upgrade the version of RHACS on your secured clusters to ensure compatibility with RHACS Cloud Service. 9.1.1. Preparing to upgrade Before you upgrade the Red Hat Advanced Cluster Security for Kubernetes (RHACS) version, complete the following steps: If the cluster you are upgrading contains the SecuredCluster custom resource (CR), change the collection method to CORE_BPF . For more information, see "Changing the collection method". 9.1.1.1. Changing the collection method If the cluster that you are upgrading contains the SecuredCluster CR, you must ensure that the per node collection setting is set to CORE_BPF before you upgrade. Procedure In the OpenShift Container Platform web console, go to the RHACS Operator page. In the top navigation menu, select Secured Cluster . Click the instance name, for example, stackrox-secured-cluster-services . Use one of the following methods to change the setting: In the Form view , under Per Node Settings Collector Settings Collection , select CORE_BPF . Click YAML to open the YAML editor and locate the spec.perNode.collector.collection attribute. If the value is KernelModule or EBPF , then change it to CORE_BPF . Click Save. Additional resources Updating installed Operators 9.1.2. Rolling back an Operator upgrade for secured clusters To roll back an Operator upgrade, you can use either the CLI or the OpenShift Container Platform web console. Note On secured clusters, rolling back Operator upgrades is needed only in rare cases, for example, if an issue exists with the secured cluster. 9.1.2.1. Rolling back an Operator upgrade by using the CLI You can roll back the Operator version by using CLI commands. Procedure Delete the OLM subscription by running the following command: For OpenShift Container Platform, run the following command: USD oc -n rhacs-operator delete subscription rhacs-operator For Kubernetes, run the following command: USD kubectl -n rhacs-operator delete subscription rhacs-operator Delete the cluster service version (CSV) by running the following command: For OpenShift Container Platform, run the following command: USD oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator For Kubernetes, run the following command: USD kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator Install the latest version of the Operator on the rolled back channel. 9.1.2.2. Rolling back an Operator upgrade by using the web console You can roll back the Operator version by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Go to the Operators Installed Operators page. Click the RHACS Operator. On the Operator Details page, select Uninstall Operator from the Actions list. Following this action, the Operator stops running and no longer receives updates. Install the latest version of the Operator on the rolled back channel. Additional resources Operator Lifecycle Manager workflow Manually approving a pending Operator update 9.1.3. 
Troubleshooting Operator upgrade issues Follow these instructions to investigate and resolve upgrade-related issues for the RHACS Operator. 9.1.3.1. Central or Secured cluster fails to deploy When RHACS Operator has the following conditions, you must check the custom resource conditions to find the issue: If the Operator fails to deploy Secured Cluster If the Operator fails to apply CR changes to actual resources For Secured clusters, run the following command to check the conditions: USD oc -n rhacs-operator describe securedclusters.platform.stackrox.io 1 1 If you use Kubernetes, enter kubectl instead of oc . You can identify configuration errors from the conditions output: Example output Conditions: Last Transition Time: 2023-04-19T10:49:57Z Status: False Type: Deployed Last Transition Time: 2023-04-19T10:49:57Z Status: True Type: Initialized Last Transition Time: 2023-04-19T10:59:10Z Message: Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit Reason: ReconcileError Status: True Type: Irreconcilable Last Transition Time: 2023-04-19T10:49:57Z Message: No proxy configuration is desired Reason: NoProxyConfig Status: False Type: ProxyConfigFailed Last Transition Time: 2023-04-19T10:49:57Z Message: Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit Reason: InstallError Status: True Type: ReleaseFailed Additionally, you can view RHACS pod logs to find more information about the issue. Run the following command to view the logs: oc -n rhacs-operator logs deploy/rhacs-operator-controller-manager manager 1 1 If you use Kubernetes, enter kubectl instead of oc . 9.2. Upgrading secured clusters in RHACS Cloud Service by using Helm charts You can upgrade your secured clusters in RHACS Cloud Service by using Helm charts. If you installed RHACS secured clusters by using Helm charts, you can upgrade to the latest version of RHACS by updating the Helm chart and running the helm upgrade command. 9.2.1. Updating the Helm chart repository You must always update Helm charts before upgrading to a new version of Red Hat Advanced Cluster Security for Kubernetes. Prerequisites You must have already added the Red Hat Advanced Cluster Security for Kubernetes Helm chart repository. You must be using Helm version 3.8.3 or newer. Procedure Update Red Hat Advanced Cluster Security for Kubernetes charts repository. USD helm repo update Verification Run the following command to verify the added chart repository: USD helm search repo -l rhacs/ 9.2.2. Running the Helm upgrade command You can use the helm upgrade command to update Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites You must have access to the values-private.yaml configuration file that you have used to install Red Hat Advanced Cluster Security for Kubernetes (RHACS). Otherwise, you must generate the values-private.yaml configuration file containing root certificates before proceeding with these commands. Procedure Run the helm upgrade command and specify the configuration files by using the -f option: USD helm upgrade -n stackrox stackrox-secured-cluster-services \ rhacs/secured-cluster-services --version <current-rhacs-version> \ 1 -f values-private.yaml 1 Use the -f option to specify the paths for your YAML configuration files. 9.2.3. Additional resources Installing RHACS Cloud Service on secured clusters by using Helm charts 9.3. 
Manually upgrading secured clusters in RHACS Cloud Service by using the roxctl CLI You can upgrade your secured clusters in RHACS Cloud Service by using the roxctl CLI. Important You need to manually upgrade secured clusters only if you used the roxctl CLI to install the secured clusters. 9.3.1. Upgrading the roxctl CLI To upgrade the roxctl CLI to the latest version, you must uninstall your current version of the roxctl CLI and then install the latest version of the roxctl CLI. 9.3.1.1. Uninstalling the roxctl CLI You can uninstall the roxctl CLI binary on Linux by using the following procedure. Procedure Find and delete the roxctl binary: USD ROXPATH=USD(which roxctl) && rm -f USDROXPATH 1 1 Depending on your environment, you might need administrator rights to delete the roxctl binary. 9.3.1.2. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 9.3.1.3. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 9.3.1.4. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 9.3.2. Upgrading all secured clusters manually Important To ensure optimal functionality, use the same RHACS version for your secured clusters that RHACS Cloud Service is running. If you are using automatic upgrades, update all your secured clusters by using automatic upgrades. If you are not using automatic upgrades, complete the instructions in this section on all secured clusters. To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow these instructions. 9.3.2.1. Updating other images You must update the sensor, collector and compliance images on each secured cluster when not using automatic upgrades. Note If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure. 
Procedure Update the Sensor image: USD oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the Compliance image: USD oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the Collector image: USD oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.6.3 1 1 If you use Kubernetes, enter kubectl instead of oc . Note If you are using the collector slim image, run the following command instead: USD oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-slim-rhel8:{rhacs-version} Update the admission control image: USD oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 Important If you have installed RHACS on Red Hat OpenShift by using the roxctl CLI, you need to migrate the security context constraints (SCCs). For more information, see "Migrating SCCs during the manual upgrade" in the "Additional resources" section. Additional resources Authenticating by using the roxctl CLI 9.3.2.2. Migrating SCCs during the manual upgrade By migrating the security context constraints (SCCs) during the manual upgrade by using roxctl CLI, you can seamlessly transition the Red Hat Advanced Cluster Security for Kubernetes (RHACS) services to use the Red Hat OpenShift SCCs, ensuring compatibility and optimal security configurations across Central and all secured clusters. Procedure List all of the RHACS services that are deployed on all secured clusters: USD oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:' Example output Name: admission-control-6f4dcc6b4c-2phwd openshift.io/scc: stackrox-admission-control #... Name: central-575487bfcb-sjdx8 openshift.io/scc: stackrox-central Name: central-db-7c7885bb-6bgbd openshift.io/scc: stackrox-central-db Name: collector-56nkr openshift.io/scc: stackrox-collector #... Name: scanner-68fc55b599-f2wm6 openshift.io/scc: stackrox-scanner Name: scanner-68fc55b599-fztlh #... Name: sensor-84545f86b7-xgdwf openshift.io/scc: stackrox-sensor #... In this example, you can see that each pod has its own custom SCC, which is specified through the openshift.io/scc field. Add the required roles and role bindings to use the Red Hat OpenShift SCCs instead of the RHACS custom SCCs. To add the required roles and role bindings to use the Red Hat OpenShift SCCs for all secured clusters, complete the following steps: Create a file named upgrade-scs.yaml that defines the role and role binding resources by using the following content: Example 9.1. 
Example YAML file apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: use-privileged-scc 2 namespace: stackrox 3 rules: 4 - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: collector-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-privileged-scc subjects: 8 - kind: ServiceAccount name: collector namespace: stackrox - - - 1 The type of Kubernetes resource, in this example, Role . 2 The name of the role resource. 3 The namespace in which the role is created. 4 Describes the permissions granted by the role resource. 5 The type of Kubernetes resource, in this example, RoleBinding . 6 The name of the role binding resource. 7 Specifies the role to bind in the same namespace. 8 Specifies the subjects that are bound to the role. Create the role and role binding resources specified in the upgrade-scs.yaml file by running the following command: USD oc -n stackrox create -f ./update-scs.yaml Important You must run this command on each secured cluster to create the role and role bindings specified in the upgrade-scs.yaml file. Delete the SCCs that are specific to RHACS: To delete the SCCs that are specific to all secured clusters, run the following command: USD oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor Important You must run this command on each secured cluster to delete the SCCs that are specific to each secured cluster. Verification Ensure that all the pods are using the correct SCCs by running the following command: USD oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:' Compare the output with the following table: Component custom SCC New Red Hat OpenShift 4 SCC Central stackrox-central nonroot-v2 Central-db stackrox-central-db nonroot-v2 Scanner stackrox-scanner nonroot-v2 Scanner-db stackrox-scanner nonroot-v2 Admission Controller stackrox-admission-control restricted-v2 Collector stackrox-collector privileged Sensor stackrox-sensor restricted-v2 9.3.2.2.1. Editing the GOMEMLIMIT environment variable for the Sensor deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Sensor deployment: USD oc -n stackrox edit deploy/sensor 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 9.3.2.2.2. Editing the GOMEMLIMIT environment variable for the Collector deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. 
You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Collector deployment: USD oc -n stackrox edit deploy/collector 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 9.3.2.2.3. Editing the GOMEMLIMIT environment variable for the Admission Controller deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Admission Controller deployment: USD oc -n stackrox edit deploy/admission-control 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 9.3.2.2.4. Verifying secured cluster upgrade After you have upgraded secured clusters, verify that the updated pods are working. Procedure Check that the new pods have deployed: USD oc get deploy,ds -n stackrox -o wide 1 1 If you use Kubernetes, enter kubectl instead of oc . USD oc get pod -n stackrox --watch 1 1 If you use Kubernetes, enter kubectl instead of oc . 9.3.3. Enabling RHCOS node scanning with the StackRox Scanner If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . This procedure describes how to enable node scanning for the first time. If you are reconfiguring Red Hat Advanced Cluster Security for Kubernetes to use the StackRox Scanner instead of Scanner V4, follow the procedure in "Restoring RHCOS node scanning with the StackRox Scanner". Procedure Run one of the following commands to update the compliance container. 
For a default compliance container with metrics disabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' For a compliance container with Prometheus metrics enabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' Update the Collector DaemonSet (DS) by taking the following steps: Add new volume mounts to Collector DS by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}' Add the new NodeScanner container by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.6.3","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}' Additional resources Scanning RHCOS node hosts
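The GOMEMLIMIT procedures earlier in this chapter use an interactive oc edit for each deployment. As a non-interactive alternative, the variable can be renamed while preserving its value, as in the following sketch; it assumes the variable is defined on the first container of each workload and that, as in the commands above, the Collector is addressed as deploy/collector, so verify both points in your environment before using it.

for deploy in sensor collector admission-control; do
  val=$(oc -n stackrox get deploy/"$deploy" \
    -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="GOMEMLIMIT")].value}')
  if [ -n "$val" ]; then
    # Set ROX_MEMLIMIT to the old value and remove GOMEMLIMIT in one call
    oc -n stackrox set env deploy/"$deploy" ROX_MEMLIMIT="$val" GOMEMLIMIT-
  fi
done

If you use Kubernetes, enter kubectl instead of oc.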
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/rhacs_cloud_service/upgrading-rhacs-cloud-service
17.17. Applying QoS to Your Virtual Network
17.17. Applying QoS to Your Virtual Network Quality of Service (QoS) refers to the resource control systems that guarantee an optimal experience for all users on a network by minimizing delay, jitter, and packet loss. QoS can be application specific or user and group specific. See Section 23.17.8.14, "Quality of service (QoS)" for more information.
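The network-wide settings are applied in the libvirt network XML as described in the referenced section. As one hedged, per-interface illustration (the guest name rhel7-guest, the interface vnet0, and the numeric limits are placeholders), bandwidth limits can also be tuned on a running guest with virsh domiftune, which takes average and peak values in KiB/s and a burst value in KiB:
# Show the current bandwidth settings for the guest's vnet0 interface
virsh domiftune rhel7-guest vnet0
# Apply inbound and outbound limits (average,peak,burst) to the running guest only
virsh domiftune rhel7-guest vnet0 --live --inbound 1000,5000,1024 --outbound 1000,5000,1024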
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-applying_qos_to_your_virtual_network
21.2.4. Preserving ACLs
21.2.4. Preserving ACLs The Red Hat Enterprise Linux 4 kernel provides ACL support for the ext3 file system, including ext3 file systems accessed over the NFS or Samba protocols. Thus, if ACLs are enabled on an ext3 file system that is exported over NFS, and the NFS client is able to read ACLs, the ACLs are honored on the client as well. For more information about mounting NFS file systems with ACLs, refer to Chapter 14, Access Control Lists .
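A small sketch of the server-side setup this section assumes (the device, mount point, user name, and file are hypothetical): mount the ext3 file system with ACL support, set an ACL, and read it back before exporting the file system over NFS:
# Mount the ext3 file system with ACL support enabled
mount -t ext3 -o acl /dev/sdb1 /export/shared
# Grant the user alice read/write access to an existing file, then verify the ACL
setfacl -m u:alice:rw /export/shared/report.txt
getfacl /export/shared/report.txt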
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/mounting_nfs_file_systems-preserving_acls
3.2.3. Preparing to convert a virtual machine running Windows
3.2.3. Preparing to convert a virtual machine running Windows Important virt-v2v does not support conversion of the Windows Recovery Console. If a virtual machine has a recovery console installed and VirtIO was enabled during conversion, attempting to boot the recovery console will result in a stop error. Windows XP x86 does not support the Windows Recovery Console on VirtIO systems, so there is no resolution to this. However, on Windows XP AMD64 and Windows 2003 (x86 and AMD64), the recovery console can be reinstalled after conversion. The re-installation procedure is the same as the initial installation procedure. It is not necessary to remove the recovery console first. Following re-installation, the recovery console will work as intended. Before a virtual machine running Windows can be converted, ensure that the following steps are completed. Install the libguestfs-winsupport package on the host running virt-v2v . This package provides support for NTFS, which is used by many Windows systems. The libguestfs-winsupport package is provided by the RHEL V2VWIN (v. 6 for 64-bit x86_64) channel. Ensure your system is subscribed to this channel, then run the following command as root: If you attempt to convert a virtual machine using NTFS without the libguestfs-winsupport package installed, the conversion will fail. An error message similar to Example 3.2, "Error message when converting a Windows virtual machine without libguestfs-winsupport installed" will be shown. Example 3.2. Error message when converting a Windows virtual machine without libguestfs-winsupport installed Install the virtio-win package on the host running virt-v2v . This package provides paravirtualized block and network drivers for Windows guests. The virtio-win package is provided by the RHEL Server Supplementary (v. 6 64-bit x86_64) channel. Ensure your system is subscribed to this channel, then run the following command as root: If you attempt to convert a virtual machine running Windows without the virtio-win package installed, the conversion will fail. An error message similar to Example 3.3, "Error message when converting a Windows virtual machine without virtio-win installed" will be shown. Example 3.3. Error message when converting a Windows virtual machine without virtio-win installed Note When virtual machines running Windows are converted for output to Red Hat Enterprise Virtualization, post-processing of the virtual machine image will be performed by the Red Hat Enterprise Virtualization Manager to install updated drivers. See Section 7.2.2, "Configuration changes for Windows virtual machines" for details of the process. This step will be omitted when virtual machines running Windows are converted for output to libvirt.
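Assuming the conversion host is already subscribed to the two channels named above, both prerequisite packages can be installed in a single step; the package names are taken from this section, and no channel labels are shown here:
# Install NTFS support and the paravirtualized Windows drivers on the virt-v2v host
yum install libguestfs-winsupport virtio-win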
[ "install libguestfs-winsupport", "No operating system could be detected inside this disk image. This may be because the file is not a disk image, or is not a virtual machine image, or because the OS type is not understood by virt-inspector. If you feel this is an error, please file a bug report including as much information about the disk image as possible.", "install virtio-win", "virt-v2v: Installation failed because the following files referenced in the configuration file are required, but missing: /usr/share/virtio-win/drivers/i386/Win2008" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-V2V_Guide-Preparing_to_Convert_a_Virtual_Machine-Preparing_to_convert_a_virtual_machine_running_Windows
Cluster administration
Cluster administration Red Hat OpenShift Service on AWS 4 Configuring Red Hat OpenShift Service on AWS clusters Red Hat OpenShift Documentation Team
[ "oc run netcat-test --image=busybox -i -t --restart=Never --rm -- /bin/sh", "/ nc -zvv 192.168.1.1 8080 10.181.3.180 (10.181.3.180:8080) open sent 0, rcvd 0", "/ nc -zvv 192.168.1.2 8080 nc: 10.181.3.180 (10.181.3.180:8081): Connection refused sent 0, rcvd 0", "/ exit", "oc run netcat-test --image=busybox -i -t --restart=Never --rm -- /bin/sh", "/ nc -zvv 192.168.1.1 8080 10.181.3.180 (10.181.3.180:8080) open sent 0, rcvd 0", "/ nc -zvv 192.168.1.2 8080 nc: 10.181.3.180 (10.181.3.180:8081): Connection refused sent 0, rcvd 0", "/ exit", "You have new non-redundant VPN connections One or more of your vpn connections are not using both tunnels. This mode of operation is not highly available and we strongly recommend you configure your second tunnel. View your non-redundant VPN connections.", "rosa create cluster --cluster-name <cluster_name> --enable-autoscaling --interactive", "? Configure cluster-autoscaler (optional): [? for help] (y/N) y <enter>", "rosa create autoscaler --cluster=<mycluster> --interactive", "rosa create cluster --cluster-name <cluster_name> --enable-autoscaling", "rosa create cluster --cluster-name <cluster_name> --enable-autoscaling <parameter>", "rosa create autoscaler --cluster=<mycluster>", "rosa create autoscaler --cluster=<mycluster> <parameter>", "rosa edit autoscaler --cluster=<mycluster>", "rosa edit autoscaler --cluster=<mycluster> <parameter>", "rosa describe autoscaler --cluster=<mycluster>", "rosa delete autoscaler --cluster=<mycluster>", "rosa edit autoscaler -h --cluster=<mycluster>", "rosa edit autoscaler -h --cluster=<mycluster> <parameter>", "rosa describe autoscaler -h --cluster=<mycluster>", "rosa create machinepool --cluster=<cluster-name> --name=<machine_pool_id> --replicas=<replica_count> --instance-type=<instance_type> --labels=<key>=<value>,<key>=<value> --taints=<key>=<value>:<effect>,<key>=<value>:<effect> --use-spot-instances --spot-max-price=<price> --disk-size=<disk_size> --availability-zone=<availability_zone_name> --additional-security-group-ids <sec_group_id> --subnet <subnet_id>", "rosa describe cluster -c <cluster name>|grep \"Infra ID:\"", "Infra ID: mycluster-xqvj7", "rosa create machinepool --cluster=mycluster --name=mymachinepool --replicas=2 --instance-type=m5.xlarge --labels=app=db,tier=backend", "I: Machine pool 'mymachinepool' created successfully on cluster 'mycluster' I: To view all machine pools, run 'rosa list machinepools -c mycluster'", "rosa create machinepool --cluster=<cluster-name> --name=<machine_pool_id> --enable-autoscaling --min-replicas=<minimum_replica_count> --max-replicas=<maximum_replica_count> --instance-type=<instance_type> --labels=<key>=<value>,<key>=<value> --taints=<key>=<value>:<effect>,<key>=<value>:<effect> --availability-zone=<availability_zone_name> --use-spot-instances --spot-max-price=<price>", "rosa create machinepool --cluster=mycluster --name=mymachinepool --enable-autoscaling --min-replicas=3 --max-replicas=6 --instance-type=m5.xlarge --labels=app=db,tier=backend", "I: Machine pool 'mymachinepool' created successfully on cluster 'mycluster' I: To view all machine pools, run 'rosa list machinepools -c mycluster'", "rosa list machinepools --cluster=<cluster_name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES SPOT INSTANCES Default No 3 m5.xlarge us-east-1a, us-east-1b, us-east-1c N/A mymachinepool Yes 3-6 m5.xlarge app=db, tier=backend us-east-1a, us-east-1b, us-east-1c No", "rosa describe machinepool --cluster=<cluster_name> --machinepool=mymachinepool", "ID: 
mymachinepool Cluster ID: 27iimopsg1mge0m81l0sqivkne2qu6dr Autoscaling: Yes Replicas: 3-6 Instance type: m5.xlarge Labels: app=db, tier=backend Taints: Availability zones: us-east-1a, us-east-1b, us-east-1c Subnets: Spot instances: No Disk size: 300 GiB Security Group IDs:", "rosa create cluster --worker-disk-size=<disk_size>", "rosa create machinepool --cluster=<cluster_id> \\ 1 --disk-size=<disk_size> 2", "rosa delete machinepool -c=<cluster_name> <machine_pool_ID>", "? Are you sure you want to delete machine pool <machine_pool_ID> on cluster <cluster_name>? (y/N)", "rosa list machinepools --cluster=<cluster_name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES DISK SIZE SG IDs default No 2 m5.xlarge us-east-1a 300GiB sg-0e375ff0ec4a6cfa2 mp1 No 2 m5.xlarge us-east-1a 300GiB sg-0e375ff0ec4a6cfa2", "rosa edit machinepool --cluster=<cluster_name> --replicas=<replica_count> \\ 1 <machine_pool_id> 2", "rosa list machinepools --cluster=<cluster_name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES DISK SIZE SG IDs default No 2 m5.xlarge us-east-1a 300GiB sg-0e375ff0ec4a6cfa2 mp1 No 3 m5.xlarge us-east-1a 300GiB sg-0e375ff0ec4a6cfa2", "rosa list machinepools --cluster=<cluster_name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES SPOT INSTANCES Default No 2 m5.xlarge us-east-1a N/A db-nodes-mp No 2 m5.xlarge us-east-1a No", "rosa edit machinepool --cluster=<cluster_name> --replicas=<replica_count> \\ 1 --labels=<key>=<value>,<key>=<value> \\ 2 <machine_pool_id>", "rosa edit machinepool --cluster=mycluster --replicas=2 --labels=app=db,tier=backend db-nodes-mp", "I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'", "rosa edit machinepool --cluster=<cluster_name> --min-replicas=<minimum_replica_count> \\ 1 --max-replicas=<maximum_replica_count> \\ 2 --labels=<key>=<value>,<key>=<value> \\ 3 <machine_pool_id>", "rosa edit machinepool --cluster=mycluster --min-replicas=2 --max-replicas=3 --labels=app=db,tier=backend db-nodes-mp", "I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'", "rosa describe machinepool --cluster=<cluster_name> --machinepool=<machine-pool-name>", "ID: db-nodes-mp Cluster ID: <ID_of_cluster> Autoscaling: No Replicas: 2 Instance type: m5.xlarge Labels: app=db, tier=backend Taints: Availability zones: us-east-1a Subnets: Spot instances: No Disk size: 300 GiB Security Group IDs:", "rosa create machinepools --cluster=<name> --replicas=<replica_count> --name <mp_name> --tags='<key> <value>,<key> <value>' 1", "rosa create machinepools --cluster=mycluster --replicas 2 --tags='tagkey1 tagvalue1,tagkey2 tagvaluev2' I: Checking available instance types for machine pool 'mp-1' I: Machine pool 'mp-1' created successfully on cluster 'mycluster' I: To view the machine pool details, run 'rosa describe machinepool --cluster mycluster --machinepool mp-1' I: To view all machine pools, run 'rosa list machinepools --cluster mycluster'", "rosa describe machinepool --cluster=<cluster_name> --machinepool=<machinepool_name>", "ID: mp-1 Cluster ID: 2baiirqa2141oreotoivp4sipq84vp5g Autoscaling: No Replicas: 2 Instance type: m5.xlarge Labels: Taints: Availability zones: us-east-1a Subnets: Spot instances: No Disk size: 300 GiB Additional Security Group IDs: Tags: red-hat-clustertype=rosa, red-hat-managed=true, tagkey1=tagvalue1, tagkey2=tagvaluev2", "rosa list machinepools --cluster=<cluster_name>", "rosa edit machinepool --cluster=<cluster_name> --replicas=<replica_count> \\ 1 
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \\ 2 <machine_pool_id>", "rosa edit machinepool --cluster=mycluster --replicas 2 --taints=key1=value1:NoSchedule,key2=value2:NoExecute db-nodes-mp", "I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'", "rosa edit machinepool --cluster=<cluster_name> --min-replicas=<minimum_replica_count> \\ 1 --max-replicas=<maximum_replica_count> \\ 2 --taints=<key>=<value>:<effect>,<key>=<value>:<effect> \\ 3 <machine_pool_id>", "rosa edit machinepool --cluster=mycluster --min-replicas=2 --max-replicas=3 --taints=key1=value1:NoSchedule,key2=value2:NoExecute db-nodes-mp", "I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'", "rosa describe machinepool --cluster=<cluster_name> --machinepool=<machinepool_name>", "ID: db-nodes-mp Cluster ID: <ID_of_cluster> Autoscaling: No Replicas: 2 Instance type: m5.xlarge Labels: Taints: key1=value1:NoSchedule, key2=value2:NoExecute Availability zones: us-east-1a Subnets: Spot instances: No Disk size: 300 GiB Security Group IDs:", "rosa create machinepool -c <cluster-name> -i", "I: Enabling interactive mode 1 ? Machine pool name: xx-lz-xx 2 ? Create multi-AZ machine pool: No 3 ? Select subnet for a single AZ machine pool (optional): Yes 4 ? Subnet ID: subnet-<a> (region-info) 5 ? Enable autoscaling (optional): No 6 ? Replicas: 2 7 I: Fetching instance types 8 ? disk-size (optional): 9", "rosa list machinepools --cluster=<cluster_name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES SUBNETS SPOT INSTANCES DISK SIZE SG IDs worker No 2 m5.xlarge us-east-2a No 300 GiB mp1 No 2 m5.xlarge us-east-2a No 300 GiB", "rosa edit machinepool --cluster=<cluster_name> <machinepool_ID> --enable-autoscaling --min-replicas=<number> --max-replicas=<number>", "rosa edit machinepool --cluster=mycluster mp1 --enable-autoscaling --min-replicas=2 --max-replicas=5", "rosa edit machinepool --cluster=<cluster_name> <machinepool_ID> --enable-autoscaling=false --replicas=<number>", "rosa edit machinepool --cluster=mycluster default --enable-autoscaling=false --replicas=3", "-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.", "JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"", "apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file_name>.yaml", "oc rsh test", "env | grep MEMORY | sort", "MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184", "oc rsh test", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 0", "sed -e '' </dev/zero", "Killed", "echo USD?", "137", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 1", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: true 
restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m", "oc get pod test -o yaml", "status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted", "rosa create kubeletconfig -c <cluster_name> --name <kubeletconfig_name> --pod-pids-limit=<value>", "rosa create kubeletconfig -c my-cluster --name set-high-pids --pod-pids-limit=16384", "rosa edit kubeletconfig -c <cluster_name> --name <kubeletconfig_name> --pod-pids-limit=<value>", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... True False False 4 4 4 0 4h42m", "rosa describe kubeletconfig --cluster=<cluster_name>", "Pod Pids Limit: 16384", "rosa delete kubeletconfig --cluster <cluster_name> --name <kubeletconfig_name>", "rosa describe kubeletconfig --name <cluster_name>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/cluster_administration/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/monitoring_tools_configuration_guide/proc_providing-feedback-on-red-hat-documentation
Chapter 3. LVM Administration Overview
Chapter 3. LVM Administration Overview This chapter provides an overview of the administrative procedures you use to configure LVM logical volumes. This chapter is intended to provide a general understanding of the steps involved. For specific step-by-step examples of common LVM configuration procedures, see Chapter 5, LVM Configuration Examples . For descriptions of the CLI commands you can use to perform LVM administration, see Chapter 4, LVM Administration with CLI Commands . Alternately, you can use the LVM GUI, which is described in Chapter 7, LVM Administration with the LVM GUI . 3.1. Creating LVM Volumes in a Cluster In order to enable the LVM volumes you are creating in a cluster, the cluster infrastructure must be running and the cluster must be quorate. Creating clustered logical volumes also requires changes to the lvm.conf file for cluster-wide locking. Other than this setup, creating LVM logical volumes in a clustered environment is identical to creating LVM logical volumes on a single node. There is no difference in the LVM commands themselves, or in the LVM GUI interface. In order to enable cluster-wide locking, you can run the lvmconf command, as follows: Running the lvmconf command modifies the lvm.conf file to specify the appropriate locking type for clustered volumes. Note Shared storage for use in Red Hat Cluster Suite requires that you be running the cluster logical volume manager daemon ( clvmd ) or the High Availability Logical Volume Management agents (HA-LVM). If you are not able to use either the clvmd daemon or HA-LVM for operational reasons or because you do not have the correct entitlements, you must not use single-instance LVM on the shared disk as this may result in data corruption. If you have any concerns please contact your Red Hat service representative. For information on how to set up the cluster infrastructure, see Configuring and Managing a Red Hat Cluster . For an example of creating a mirrored logical volume in a cluster, see Section 5.5, "Creating a Mirrored LVM Logical Volume in a Cluster" .
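As a brief sketch of the cluster-locking change described above (the grep is only a convenience check; lvmconf performs the authoritative edit):
# Switch lvm.conf to cluster-wide locking
/usr/sbin/lvmconf --enable-cluster
# Confirm the change; locking_type = 3 indicates built-in clustered locking
grep "locking_type" /etc/lvm/lvm.conf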
[ "/usr/sbin/lvmconf --enable-cluster" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/LVM_administration
Chapter 23. Authentication and Interoperability
Chapter 23. Authentication and Interoperability yum no longer reports package conflicts after installing ipa-client After the user installed the ipa-client package, the yum utility unexpectedly reported package conflicts between the ipa and freeipa packages. These errors occurred after failed transactions or after using the yum check command. With this update, yum no longer reports errors about self-conflicting packages because such conflicts are allowed by RPM. As a result, yum no longer displays the described errors after installing ipa-client . (BZ# 1370134 ) In FIPS mode, the slapd_pk11_getInternalKeySlot() function is now used to retrieve the key slot for a token The Red Hat Directory Server previously tried to retrieve the key slot from a fixed token name, when FIPS mode was enabled on the security database. However, the token name can change. If the key slot is not found, Directory Server is unable to decode the replication manager's password and replication sessions fail. To fix the problem, the slapd_pk11_getInternalKeySlot() function now uses FIPS mode to retrieve the current key slot. As a result, replication sessions using SSL or STARTTLS no longer fail in the described situation. (BZ# 1378209 ) Certificate System no longer fails to install with a Thales HSM on systems in FIPS mode After installing the Certificate System (CS) with a Thales hardware security module (HSM), the SSL protocol did not work correctly if you generated all system keys on the HSM. Consequently, CS failed to install on systems with FIPS mode enabled, requiring you to manually modify the sslRangeCiphers parameter in the server.xml file. This bug has been fixed, and installation on FIPS-enabled systems with a Thales HSM works as expected. (BZ#1382066) The dependency list for pkispawn now correctly includes openssl Previously, when the openssl package was not installed, using the pkispawn utility failed with the following error: This problem occurred because the openssl package was not included as a runtime dependency of the pki-server package contained within the pki-core package. This bug has been fixed by adding the missing dependency, and pkispawn installations no longer fail due to missing openssl . (BZ#1376488) Error messages from the PKI Server profile framework are now passed through to the client Previously, PKI Server did not pass through certain error messages generated by the profile framework for certificate requests to the client. Consequently, the error messages displayed on the web UI or in the output of the pki command did not describe why a request failed. The code has been fixed and now passes through error messages. Now users can see the reason why an enrollment failed or was rejected. (BZ#1249400) Certificate System does not start a Lightweight CA key replication during installation Previously, Certificate System incorrectly started a Lightweight CA key replication during a two-step installation. As a consequence, the installation failed and an error was displayed. With this update, the two-step installation does not start the Lightweight CA key replication and the installation completes successfully. (BZ# 1378275 ) PKI Server now correctly compares subject DNs during startup Due to a bug in the routine that adds a Lightweight CA entry for the primary CA, PKI Server previously failed to compare subject distinguished names (DN) if they contained attributes using encodings other than UTF8String .
As a consequence, every time the primary CA started, an additional Lightweight CA entry was added. PKI Server now compares the subject DNs in canonical form. As a result, PKI Server no longer adds additional Lightweight CA entries in the mentioned scenario. (BZ# 1378277 ) KRA installation no longer fails when connecting to an intermediate CA with an incomplete certificate chain Previously, installing a Key Recovery Authority (KRA) subsystem failed with an UNKNOWN_ISSUER error if the KRA attempted to connect to an intermediate CA that had a trusted CA certificate but did not have the root CA certificate. With this update, KRA installation ignores the error and completes successfully. (BZ# 1381084 ) The startTime field in certificate profiles now uses long integer format Previously, Certificate System stored the value in the startTime field of a certificate profile as an integer . If you entered a larger number, Certificate System interpreted the value as a negative number. Consequently, the certificate authority issued certificates that contained a start date located in the past. With this update, the input format of the startTime field has been changed to a long integer. As a result, the issued certificates now have a correct start date. (BZ#1385208) Subordinate CA installation no longer fails with a PKCS#11 token is not logged in error Previously, subordinate Certificate Authority (sub-CA) installation failed due to a bug in the Network Security Services (NSS) library, which generated the SEC_ERROR_TOKEN_NOT_LOGGED_IN error. This update adds a workaround to the installer which allows the installation to proceed. If the error is still displayed, it can now be ignored. (BZ# 1395817 ) The pkispawn script now correctly sets the ECC key sizes Previously, when a user ran the pkispawn script with an Elliptic Curve Cryptography (ECC) key size parameter set to a different value than the default, which is nistp256 , the setting was ignored. Consequently, the created PKI Server instance issued system certificates, which incorrectly used the default ECC key curve. With this update, PKI Server uses the value set in the pkispawn configuration for the ECC key curve name. As a result, the PKI Server instance now uses the ECC key size set when setting up the instance. (BZ#1397200) CA clone installation in FIPS mode no longer fails Previously, installing a CA clone or a Key Recovery Authority (KRA) failed in FIPS mode due to an inconsistency in handling internal NSS token names. With this update, the code that handles the token name has been consolidated to ensure that all token names are handled consistently. This allows the KRA and CA clone installation to complete properly in FIPS mode. (BZ# 1411428 ) PKI Server no longer fails to start when an entryUSN attribute contains a value larger than 32-bit Previously, the LDAP Profile Monitor and the Lightweight CA Monitor parsed values in entryUSN attributes as a 32-bit integer. As a consequence, when the attribute contained a value larger than that, a NumberFormatException error was logged and the server failed to start. The problem has been fixed, and the server no longer fails to start in the mentioned scenario. (BZ# 1412681 ) Tomcat now works with IPv6 by default The IPv4 -specific 127.0.0.1 loopback address was previously used in the default server configuration file as the default AJP host name. This caused connections to fail on servers which run in IPv6 -only environments.
With this update, the default value is changed to localhost , which works with both IPv4 and IPv6 protocols. Additionally, an upgrade script is available to automatically change the AJP host name on existing server instances. (BZ# 1413136 ) pkispawn no longer generates invalid NSS database passwords Prior to this update, pkispawn generated a random password for the NSS database which in some cases contained a backslash ( \ ) character. This caused problems when NSS established SSL connections, which in turn caused the installation to fail with an ACCESS_SESSION_ESTABLISH_FAILURE error. This update ensures that the randomly generated password cannot contain the backslash character and a connection can always be established, allowing the installation to finish successfully. (BZ# 1447762 ) Certificate retrieval no longer fails when adding a user certificate with the --serial option Using the pki user-cert-add command with the --serial parameter previously used an improperly set up SSL connection to the certificate authority (CA), causing certificate retrieval to fail. With this update, the command uses a properly configured SSL connection to the CA, and the operation now completes successfully. (BZ#1246635) CA web interface no longer shows a blank certificate request page if there is only one entry Previously, when the certificate request page in the CA web user interface only contained one entry, it displayed an empty page instead of showing the single entry. This update fixes the web user interface, and the certificate request page now correctly shows entries in all circumstances. (BZ# 1372052 ) Installing PKI Server in a container environment no longer displays a warning Previously, when installing the pki-server RPM package in a container environment, the systemd daemon was reloaded. As a consequence, a warning was displayed. A patch has been applied to reload the daemon only during an RPM upgrade. As a result, the warning is no longer displayed in the mentioned scenario. (BZ# 1282504 ) Re-enrolling a token using a G&D smart card no longer fails Previously, when re-enrolling a token using a Giesecke & Devrient (G&D) smart card, the enrollment of the token could fail in certain situations. The problem has been fixed, and as a result, re-enrolling a token works as expected. (BZ#1404881) PKI Server provides more detailed information about certificate validation errors on startup Previously, PKI Server did not provide sufficient information if a certificate validation error occurred when the server was started. Consequently, troubleshooting the problem was difficult. PKI Server now uses the new Java security services (JSS) API which provides more detailed information about the cause of the error in the mentioned scenario. (BZ# 1330800 ) PKI Server no longer fails to re-initialize the LDAPProfileSubsystem profile Due to a race condition during re-initializing the LDAPProfileSubsystem profile, PKI Server previously could incorrectly report that the requested profile does not exist. Consequently, requests to use the profile could fail. The problem has been fixed, and requests to use the profile no longer fail. (BZ# 1376226 ) Extracting private keys generated on an HSM no longer fails Previously, when generating asymmetric keys on a Lunasa or Thales hardware security module (HSM) using the new Asymmetric Key Generation REST service on the key recovery agent (KRA), PKI Server set incorrect flags. As a consequence, users were unable to retrieve the generated private keys.
The code has been updated to set the correct flags for keys generated on these HSMs. As a result, users can now retrieve private keys in the mentioned scenario. (BZ# 1386303 ) pkispawn no longer generates passwords consisting only of digits Previously, pkispawn could generate a random password for the NSS database consisting of only digits. Such passwords are not FIPS-compliant. With this update, the installer has been modified to generate FIPS-compliant random passwords which consist of a mix of digits, lowercase letters, uppercase letters, and certain punctuation marks. (BZ# 1400149 ) CA certificates are now imported with correct trust flags Previously, the pki client-cert-import command imported CA certificates with CT,c, trust flags, which was insufficient and inconsistent with other PKI tools. With this update, the command has been fixed and now sets the trust flags for CA certificates to CT,C,C . (BZ# 1458429 ) Generating a symmetric key no longer fails when using the --usage verify option The pki utility checks a list of valid usages for the symmetric key to be generated. Previously, this list was missing the verify usage. As a consequence, using the key-generate --usage verify option returned an error message. The code has been fixed, and now the verify option works as expected. (BZ#1238684) Subsequent PKI installation no longer fails Previously, when installing multiple public key infrastructure (PKI) instances in batch mode, the installation script did not wait until the CA instance was restarted. As a consequence, the installation of subsequent PKI instances could fail. The script has been updated and now waits until the new subsystem is ready to handle requests before it continues. (BZ# 1446364 ) Two-step subordinate CA installation in FIPS mode no longer fails Previously, a bug in subordinate CA installation in FIPS mode caused two-step installations to fail because the installer required the instance to not exist in the second step. This update changes the workflow so that the first step (installation) requires the instance to not exist, and the second step (configuration) requires the instance to exist. Two new options, --skip-configuration and --skip-installation , have been added to the pkispawn command to replace the pki_skip_configuration and pki_skip_installation deployment parameters. This allows you to use the same deployment configuration file for both steps without modifications. (BZ#1454450) The audit log no longer records success when a certificate request was rejected or canceled Previously, when a certificate request was rejected or canceled, the server generated a CERT_REQUEST_PROCESSED audit log entry with Outcome=Success . This was incorrect because there was no certificate issued for the request. This bug has been fixed, and the CERT_REQUEST_PROCESSED audit log entry for a rejected or canceled request now reads Outcome=Failure . (BZ# 1452250 ) PKI subsystems which failed self tests are now automatically re-enabled on startup Previously, if a PKI subsystem failed to start due to self test failure, it was automatically disabled to prevent it from running in an inconsistent state. The administrator was expected to re-enable the subsystem manually using pki-server subsystem-enable after fixing the problem. However, this was not clearly communicated, potentially causing confusion among administrators who were not always aware of this requirement. To alleviate this problem, all PKI subsystems are now re-enabled automatically on startup by default.
If a self-test fails, the subsystem is disabled as before, but it will no longer require manual re-enabling. This behavior is controlled by a new boolean option in the /etc/pki/pki.conf file, PKI_SERVER_AUTO_ENABLE_SUBSYSTEMS . (BZ# 1454471 ) CERT_REQUEST_PROCESSED audit log entries now include certificate serial number instead of encoded data Previously, CERT_REQUEST_PROCESSED audit log entries included Base64-encoded certificate data. For example: This information was not very useful because the certificate data would have to be decoded separately. The code has been changed to include the certificate serial number directly into the log entry, as shown in the following example: (BZ# 1452344 ) Updating the LDAPProfileSubsystem profile now supports removing attributes Previously, when updating the LDAPProfileSubsystem profile on PKI Server, attributes could not be removed. As a result, PKI Server was unable to load the profile or issue certificates after updating the profile in certain situations. A patch has been applied, and now PKI Server clears the existing profile configuration before loading the new configuration. As a result, updates in the LDAPProfileSubsystem profile can now remove configuration attributes. (BZ# 1445088 )
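For the self-test note above, a hedged example of checking the new default and re-enabling a subsystem by hand (the instance name pki-tomcat and the subsystem ID ca are placeholders; adjust them to your deployment):
# Check whether automatic re-enabling of subsystems is turned on
grep PKI_SERVER_AUTO_ENABLE_SUBSYSTEMS /etc/pki/pki.conf
# Manually re-enable the CA subsystem if it was disabled by a failed self test
pki-server subsystem-enable -i pki-tomcat ca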
[ "Installation failed: [Errno 2] No such file or directory", "[AuditEvent=CERT_REQUEST_PROCESSED]...[InfoName=certificate][InfoValue=MIIDBD...]", "[AuditEvent=CERT_REQUEST_PROCESSED]...[CertSerialNum=7]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/bug_fixes_authentication_and_interoperability
Chapter 17. Using the Red Hat Quay v2 UI
Chapter 17. Using the Red Hat Quay v2 UI Use the following procedures to configure, and use, the Red Hat Quay v2 UI. 17.1. v2 user interface configuration With FEATURE_UI_V2 enabled, you can toggle between the current version of the user interface and the new version of the user interface. Important This UI is currently in beta and subject to change. In its current state, users can only create, view, and delete organizations, repositories, and image tags. When running Red Hat Quay in the old UI, timed-out sessions would require that the user input their password again in the pop-up window. With the new UI, users are returned to the main page and required to input their username and password credentials. This is a known issue and will be fixed in a future version of the new UI. There is a discrepancy in how image manifest sizes are reported between the legacy UI and the new UI. In the legacy UI, image manifests were reported in mebibytes. In the new UI, Red Hat Quay uses the standard definition of megabyte (MB) to report image manifest sizes. Procedure In your deployment's config.yaml file, add the FEATURE_UI_V2 parameter and set it to true , for example: --- FEATURE_TEAM_SYNCING: false FEATURE_UI_V2: true FEATURE_USER_CREATION: true --- Log in to your Red Hat Quay deployment. In the navigation pane of your Red Hat Quay deployment, you are given the option to toggle between Current UI and New UI . Click the toggle button to set it to new UI, and then click Use Beta Environment , for example: 17.1.1. Creating a new organization in the Red Hat Quay v2 UI Prerequisites You have toggled your Red Hat Quay deployment to use the v2 UI. Use the following procedure to create an organization using the Red Hat Quay v2 UI. Procedure Click Organization in the navigation pane. Click Create Organization . Enter an Organization Name , for example, testorg . Click Create . Now, your example organization should populate under the Organizations page. 17.1.2. Deleting an organization using the Red Hat Quay v2 UI Use the following procedure to delete an organization using the Red Hat Quay v2 UI. Procedure On the Organizations page, select the name of the organization you want to delete, for example, testorg . Click the More Actions drop down menu. Click Delete . Note On the Delete page, there is a Search input box. With this box, users can search for specific organizations to ensure that they are properly scheduled for deletion. For example, if a user is deleting 10 organizations and they want to ensure that a specific organization was deleted, they can use the Search input box to confirm said organization is marked for deletion. Confirm that you want to permanently delete the organization by typing confirm in the box. Click Delete . After deletion, you are returned to the Organizations page. Note You can delete more than one organization at a time by selecting multiple organizations, and then clicking More Actions Delete . 17.1.3. Creating a new repository using the Red Hat Quay v2 UI Use the following procedure to create a repository using the Red Hat Quay v2 UI. Procedure Click Repositories on the navigation pane. Click Create Repository . Select a namespace, for example, quayadmin , and then enter a Repository name , for example, testrepo . Click Create . Now, your example repository should populate under the Repositories page. 17.1.4. Deleting a repository using the Red Hat Quay v2 UI Prerequisites You have created a repository. 
Procedure On the Repositories page of the Red Hat Quay v2 UI, click the name of the image you want to delete, for example, quay/admin/busybox . Click the More Actions drop-down menu. Click Delete . Note If desired, you could click Make Public or Make Private . Type confirm in the box, and then click Delete . After deletion, you are returned to the Repositories page. 17.1.5. Pushing an image to the Red Hat Quay v2 UI Use the following procedure to push an image to the Red Hat Quay v2 UI. Procedure Pull a sample image from an external registry: $ podman pull busybox Tag the image: $ podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test Push the image to your Red Hat Quay registry: $ podman push quay-server.example.com/quayadmin/busybox:test Navigate to the Repositories page on the Red Hat Quay UI and ensure that your image has been properly pushed. You can check the security details by selecting your image tag, and then navigating to the Security Report page. 17.1.6. Deleting an image using the Red Hat Quay v2 UI Use the following procedure to delete an image using the Red Hat Quay v2 UI. Prerequisites You have pushed an image to your Red Hat Quay registry. Procedure On the Repositories page of the Red Hat Quay v2 UI, click the name of the image you want to delete, for example, quay/admin/busybox . Click the More Actions drop-down menu. Click Delete . Note If desired, you could click Make Public or Make Private . Type confirm in the box, and then click Delete . After deletion, you are returned to the Repositories page. 17.1.7. Creating a robot account using the Red Hat Quay v2 UI Use the following procedure to create a robot account using the Red Hat Quay v2 UI. Procedure On the Red Hat Quay v2 UI, click Organizations . Click the name of the organization that you will create the robot account for, for example, test-org . Click the Robot accounts tab Create robot account . In the Provide a name for your robot account box, enter a name, for example, robot1 . Optional. The following options are available if desired: Add the robot to a team. Add the robot to a repository. Adjust the robot's permissions. On the Review and finish page, review the information you have provided, then click Review and finish . Optional. You can click Expand or Collapse to reveal descriptive information about the robot account. Optional. You can change permissions of the robot account by clicking the kebab menu Set repository permissions . Optional. To delete your robot account, check the box of the robot account and click the trash can icon. A popup box appears. Type confirm in the text box, then click Delete . Alternatively, you can click the kebab menu Delete . 17.1.8. Organization settings for the Red Hat Quay v2 UI Use the following procedure to alter your organization settings using the Red Hat Quay v2 UI. Procedure On the Red Hat Quay v2 UI, click Organizations . Click the name of the organization whose settings you want to change, for example, test-org . Click the Settings tab. Optional. Enter the email address associated with the organization. Optional. Set the allotted time for the Time Machine feature to one of the following: 1 week 1 month 1 year Never Click Save . 17.1.9. Viewing image tag information using the Red Hat Quay v2 UI Use the following procedure to view image tag information using the Red Hat Quay v2 UI. Procedure On the Red Hat Quay v2 UI, click Repositories . Click the name of a repository, for example, quayadmin/busybox .
Click the name of the tag, for example, test . You are taken to the Details page of the tag. The page reveals the following information: Name Repository Digest Vulnerabilities Creation Modified Size Labels How to fetch the image tag Optional. Click Security Report to view the tag's vulnerabilities. You can expand an advisory column to open up CVE data. Optional. Click Packages to view the tag's packages. Click the name of the repository, for example, busybox , to return to the Tags page. Optional. Hover over the Pull icon to reveal the ways to fetch the tag. Check the box of the tag, or multiple tags, click the Actions drop down menu, and then Delete to delete the tag. Confirm deletion by clicking Delete in the popup box. 17.1.10. Adjusting repository settings using the Red Hat Quay v2 UI Use the following procedure to adjust various settings for a repository using the Red Hat Quay v2 UI. Procedure On the Red Hat Quay v2 UI, click Repositories . Click the name of a repository, for example, quayadmin/busybox . Click the Settings tab. Optional. Click User and robot permissions . You can adjust the settings for a user or robot account by clicking the dropdown menu option under Permissions . You can change the settings to Read , Write , or Admin . Optional. Click Events and notifications . You can create an event and notification by clicking Create Notification . The following event options are available: Push to Repository Package Vulnerability Found Image build failed Image build queued Image build started Image build success Image build cancelled Then, issue a notification. The following options are available: Email Notification Flowdock Team Notification HipChat Room Notification Slack Notification Webhook POST After selecting an event option and the method of notification, include a Room ID # , a Room Notification Token , then, click Submit . Optional. Click Repository visibility . You can make the repository private, or public, by clicking Make Public . Optional. Click Delete repository . You can delete the repository by clicking Delete Repository . 17.2. Enabling the Red Hat Quay legacy UI In the navigation pane of your Red Hat Quay deployment, you are given the option to toggle between Current UI and New UI . Click the toggle button to set it to Current UI .
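One optional way to double-check a push such as the busybox example above from the command line; the registry host name follows the example in this chapter, and skopeo is assumed to be installed:
# Inspect the pushed tag directly in the registry without pulling it
# (add --tls-verify=false if the registry uses a self-signed certificate)
skopeo inspect docker://quay-server.example.com/quayadmin/busybox:test
# Or pull the tag back with podman to confirm it is retrievable
podman pull quay-server.example.com/quayadmin/busybox:test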
[ "--- FEATURE_TEAM_SYNCING: false FEATURE_UI_V2: true FEATURE_USER_CREATION: true ---", "podman pull busybox", "podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test", "podman push quay-server.example.com/quayadmin/busybox:test" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/use_red_hat_quay/using-v2-ui
Chapter 3. ClusterRole [authorization.openshift.io/v1]
Chapter 3. ClusterRole [authorization.openshift.io/v1] Description ClusterRole is a logical grouping of PolicyRules that can be referenced as a unit by ClusterRoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required rules 3.1. Specification Property Type Description aggregationRule AggregationRule AggregationRule is an optional field that describes how to build the Rules for this ClusterRole. If AggregationRule is set, then the Rules are controller managed and direct changes to Rules will be stomped by the controller. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata rules array Rules holds all the PolicyRules for this ClusterRole rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 3.1.1. .rules Description Rules holds all the PolicyRules for this ClusterRole Type array 3.1.2. .rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs resources Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If this field is empty, then both kubernetes and origin API groups are assumed. That means that if an action is requested against one of the enumerated resources in either the kubernetes or the origin API group, the request will be allowed attributeRestrictions RawExtension AttributeRestrictions will vary depending on what the Authorizer/AuthorizationAttributeBuilder pair supports. If the Authorizer does not recognize how to handle the AttributeRestrictions, the Authorizer should report an error. nonResourceURLs array (string) NonResourceURLsSlice is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path This name is intentionally different than the internal type so that the DefaultConvert works nicely and because the ordering may be different. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. ResourceAll represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds and AttributeRestrictions contained in this rule. VerbAll represents all kinds. 3.2. 
API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/clusterroles GET : list objects of kind ClusterRole POST : create a ClusterRole /apis/authorization.openshift.io/v1/clusterroles/{name} DELETE : delete a ClusterRole GET : read the specified ClusterRole PATCH : partially update the specified ClusterRole PUT : replace the specified ClusterRole 3.2.1. /apis/authorization.openshift.io/v1/clusterroles HTTP method GET Description list objects of kind ClusterRole Table 3.1. HTTP responses HTTP code Response body 200 - OK ClusterRoleList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRole Table 3.2. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.3. Body parameters Parameter Type Description body ClusterRole schema Table 3.4. HTTP responses HTTP code Response body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 202 - Accepted ClusterRole schema 401 - Unauthorized Empty 3.2.2. /apis/authorization.openshift.io/v1/clusterroles/{name} Table 3.5. Global path parameters Parameter Type Description name string name of the ClusterRole HTTP method DELETE Description delete a ClusterRole Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.7. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRole Table 3.8. HTTP responses HTTP code Response body 200 - OK ClusterRole schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRole Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. HTTP responses HTTP code Response body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRole Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. Body parameters Parameter Type Description body ClusterRole schema Table 3.13. HTTP responses HTTP code Response body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty
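For reference, the endpoints can be exercised directly with curl. The following is an illustrative sketch only (not part of the API reference); it assumes you are logged in with the oc client, that your token grants permission to list and read cluster roles, and it uses -k to skip TLS verification against a test cluster:

# Obtain the API server URL and a bearer token from the current oc login session
API_SERVER=$(oc whoami --show-server)
TOKEN=$(oc whoami -t)

# GET /apis/authorization.openshift.io/v1/clusterroles - list objects of kind ClusterRole
curl -k -H "Authorization: Bearer $TOKEN" "$API_SERVER/apis/authorization.openshift.io/v1/clusterroles"

# GET /apis/authorization.openshift.io/v1/clusterroles/{name} - read a single ClusterRole (cluster-admin used as an example name)
curl -k -H "Authorization: Bearer $TOKEN" "$API_SERVER/apis/authorization.openshift.io/v1/clusterroles/cluster-admin"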
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/role_apis/clusterrole-authorization-openshift-io-v1
Chapter 4. action
Chapter 4. action This chapter describes the commands under the action command. 4.1. action definition create Create new action. Usage: Table 4.1. Positional arguments Value Summary definition Action definition file Table 4.2. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. Default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. Can be repeated. --public With this flag action will be marked as "public". Table 4.3. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 4.4. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.2. action definition definition show Show action definition. Usage: Table 4.7. Positional arguments Value Summary name Action name Table 4.8. Command arguments Value Summary -h, --help Show this help message and exit 4.3. action definition delete Delete action. Usage: Table 4.9. Positional arguments Value Summary action Name or id of action(s). Table 4.10. Command arguments Value Summary -h, --help Show this help message and exit 4.4. action definition list List all actions. Usage: Table 4.11. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. Default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. Can be repeated. Table 4.12. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 4.13. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.15. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.5. action definition show Show specific action. Usage: Table 4.16. Positional arguments Value Summary action Action (name or id) Table 4.17. Command arguments Value Summary -h, --help Show this help message and exit Table 4.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 4.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.6. action definition update Update action. Usage: Table 4.22. Positional arguments Value Summary definition Action definition file Table 4.23. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. Default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. Can be repeated. --id ID Action id. --public With this flag action will be marked as "public". Table 4.24. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 4.25. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.26. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.7. action execution delete Delete action execution. Usage: Table 4.28. Positional arguments Value Summary action_execution Id of action execution identifier(s). Table 4.29. Command arguments Value Summary -h, --help Show this help message and exit 4.8. action execution input show Show Action execution input data. Usage: Table 4.30. Positional arguments Value Summary id Action execution id. Table 4.31. Command arguments Value Summary -h, --help Show this help message and exit 4.9. action execution list List all Action executions. Usage: Table 4.32. Positional arguments Value Summary task_execution_id Task execution id. Table 4.33. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. Default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. Can be repeated. --oldest Display the executions starting from the oldest entries instead of the newest Table 4.34. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 4.35. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.36. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.37. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.10. action execution output show Show Action execution output data. Usage: Table 4.38. Positional arguments Value Summary id Action execution id. Table 4.39. Command arguments Value Summary -h, --help Show this help message and exit 4.11. action execution run Create new Action execution or just run specific action. Usage: Table 4.40. Positional arguments Value Summary name Action name to execute. input Action input. Table 4.41. Command arguments Value Summary -h, --help Show this help message and exit -s, --save-result Save the result into db. --run-sync Run the action synchronously. -t TARGET, --target TARGET Action will be executed on <target> executor. Table 4.42. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 4.43. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.44. 
Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.45. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.12. action execution show Show specific Action execution. Usage: Table 4.46. Positional arguments Value Summary action_execution Action execution id. Table 4.47. Command arguments Value Summary -h, --help Show this help message and exit Table 4.48. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 4.49. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.50. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.51. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.13. action execution update Update specific Action execution. Usage: Table 4.52. Positional arguments Value Summary id Action execution id. Table 4.53. Command arguments Value Summary -h, --help Show this help message and exit --state {PAUSED,RUNNING,SUCCESS,ERROR,CANCELLED} Action execution state --output OUTPUT Action execution output Table 4.54. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 4.55. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.56. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.57. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack action definition create [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--public] definition", "openstack action definition definition show [-h] name", "openstack action definition delete [-h] action [action ...]", "openstack action definition list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack action definition show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] action", "openstack action definition update [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--id ID] [--public] definition", "openstack action execution delete [-h] action_execution [action_execution ...]", "openstack action execution input show [-h] id", "openstack action execution list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--oldest] [task_execution_id]", "openstack action execution output show [-h] id", "openstack action execution run [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-s] [--run-sync] [-t TARGET] name [input]", "openstack action execution show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] action_execution", "openstack action execution update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--state {PAUSED,RUNNING,SUCCESS,ERROR,CANCELLED}] [--output OUTPUT] id" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/action
Chapter 6. Resolved issues
Chapter 6. Resolved issues There are no resolved issues for this release. For details of any security fixes in this release, see the errata links in Advisories related to this release.
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_4_release_notes/resolved_issues
Chapter 17. Scaling clusters by adding or removing brokers
Chapter 17. Scaling clusters by adding or removing brokers Scaling Kafka clusters by adding brokers can increase the performance and reliability of the cluster. Adding more brokers increases available resources, allowing the cluster to handle larger workloads and process more messages. It can also improve fault tolerance by providing more replicas and backups. Conversely, removing underutilized brokers can reduce resource consumption and improve efficiency. Scaling must be done carefully to avoid disruption or data loss. By redistributing partitions across all brokers in the cluster, the resource utilization of each broker is reduced, which can increase the overall throughput of the cluster. Note To increase the throughput of a Kafka topic, you can increase the number of partitions for that topic. This allows the load of the topic to be shared between different brokers in the cluster. However, if every broker is constrained by a specific resource (such as I/O), adding more partitions will not increase the throughput. In this case, you need to add more brokers to the cluster. Adjusting the Kafka.spec.kafka.replicas configuration affects the number of brokers in the cluster that act as replicas. The actual replication factor for topics is determined by the default.replication.factor setting and the number of available brokers; the min.insync.replicas setting controls how many of those replicas must be in sync for writes to succeed. For example, a replication factor of 3 means that each partition of a topic is replicated across three brokers, ensuring fault tolerance in the event of a broker failure. Example replica configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 # ... config: # ... default.replication.factor: 3 min.insync.replicas: 2 # ... When adding brokers through the Kafka configuration, node IDs start at 0 (zero) and the Cluster Operator assigns the lowest ID to a new node. The broker removal process starts from the broker pod with the highest ID in the cluster. If you are managing nodes in the cluster using the preview of the node pools feature, you adjust the KafkaNodePool.spec.replicas configuration to change the number of nodes in the node pool. Additionally, when scaling existing clusters with node pools, you can assign node IDs for the scaling operations. When you add or remove brokers, Kafka does not automatically reassign partitions. The best way to do this is to use Cruise Control. You can use Cruise Control's add-brokers and remove-brokers modes when scaling a cluster up or down. Use the add-brokers mode after scaling up a Kafka cluster to move partition replicas from existing brokers to the newly added brokers. Use the remove-brokers mode before scaling down a Kafka cluster to move partition replicas off the brokers that are going to be removed.
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 # config: # default.replication.factor: 3 min.insync.replicas: 2 #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/con-scaling-kafka-clusters-str
probe::vm.mmap
probe::vm.mmap Name probe::vm.mmap - Fires when an mmap is requested Synopsis vm.mmap Values name name of the probe point length the length of the memory segment address the requested address Context The process calling mmap.
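A short illustrative one-liner, not part of the reference itself, that traces mmap requests using this probe point and the values listed above; it assumes SystemTap is installed with matching kernel debuginfo and is run as root:

# Print the process name, PID, requested address, and length for each mmap request
stap -e 'probe vm.mmap { printf("%s(%d) requested mmap: address=0x%x length=%d\n", execname(), pid(), address, length) }'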
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-vm-mmap