Chapter 11. How to...
Chapter 11. How to... The following CLI commands and operations provide basic examples of how to accomplish certain tasks. For detailed instructions, see the appropriate section of the Configuration Guide , Configuring Messaging , or other JBoss EAP documentation . Unless specified otherwise, the examples apply when running as a standalone server. Use the --help argument on a command to get usage for that command. Use the read-operation-description operation to get information on a particular operation for a resource. 11.1. Add a Datasource 11.2. Add an Extension Example: Add a New Extension to a Configuration 11.3. Add a Jakarta Messaging Queue 11.4. Add a Jakarta Messaging Topic 11.5. Add a Module Additional resources For more information, see modules and dependencies . Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. For more information, see the Create a Custom Module Manually and Remove a Custom Module Manually sections of the JBoss EAP Configuration Guide . Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. 11.6. Add a Server Example: Add a New Server to a Host in a Managed Domain 11.7. Add a Server Group Example: Add a New Server Group in a Managed Domain 11.8. Add a System Property 11.9. Clone a Profile Example: Clone a Profile in a Managed Domain 11.10. Create a Hierarchical Profile Example: Create a New Profile That Inherits from Other Profiles 11.11. Deploy an Application to a Managed Domain Example: Deploy an Application to All Server Groups Example: Deploy an Application to One or More Server Groups 11.12. Deploy an Application to a Standalone Server 11.13. Disable All Applications You can use the deployment disable-all command to disable all the deployments. 11.14. Display the Active User Example: Command to Display the Current User Example: Output for the Current User 11.15. Display the Contents of an Attachment You can use the attachment display command to display the contents of an attachment returned from a management operation. This applies to any management operation that returns the attached-streams response header. For example, the following operation returns the server.log file attached as a stream. You can use the attachment display command to display the contents of the stream returned from this operation to the console. This outputs the contents of the server.log file to the console. 11.16. Display Schema Information To show the schema information for the :product-info command: To display the schema version, execute an ls command at the management CLI root and look for the management-*-version values: 11.17. Display System and Server Information Example: Command to Display the System and Server Information Example: Output for the System and Server Information Similarly, for a managed domain, you can display the information for a particular JBoss EAP host or server:
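As a consolidated illustration of several of the preceding how-tos, the following hypothetical standalone-server CLI session adds an H2 datasource, adds a Jakarta Messaging queue, and deploys an application. The resource names, JNDI entries, connection URL, and file path are placeholders, not values taken from this guide.

data-source add --name=ExampleDS2 --jndi-name=java:jboss/datasources/ExampleDS2 --driver-name=h2 --connection-url=jdbc:h2:mem:exampledb
jms-queue add --queue-address=OrdersQueue --entries=java:/jms/queue/OrdersQueue
deployment deploy-file /path/to/example-app.war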
11.18. Enable All Disabled Deployments You can use the deployment enable-all command to enable all the deployments. 11.19. Get the Command Timeout Value Example: Display the CLI Command Timeout Value The value returned is in seconds. A value of 0 means no timeout. 11.20. Reload a Host Controller 11.21. Reload a Host Controller in Admin-only Mode 11.22. Reload All Servers in a Server Group Example: Reload All Servers in a Certain Server Group in a Managed Domain Note To reload the servers in a suspended state, pass in the start-mode=suspend argument. 11.23. Reload a Server Example: Reload a Server in a Managed Domain Note To reload the server in a suspended state, pass in the start-mode=suspend argument. 11.24. Reload a Standalone Server Note To reload the server in admin-only mode, pass in the --start-mode=admin-only argument. To reload the server in a suspended state, pass in the --start-mode=suspend argument. 11.25. Remove an Extension Example: Remove an Existing Extension 11.26. Remove a Module Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. For more information, see the Create a Custom Module Manually and Remove a Custom Module Manually sections of the JBoss EAP Configuration Guide . Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. 11.27. Reset the Command Timeout Value Example: Reset the Command Timeout to the Default Value Example: Reset the Command Timeout to the Value Provided by the CLI Configuration Note The value provided by the CLI configuration can be set either in the EAP_HOME /bin/jboss-cli.xml file or passed in with the --command-timeout argument when starting the management CLI. 11.28. Restart All Servers in a Server Group Example: Restart All Servers in a Certain Server Group in a Managed Domain Note To restart the servers in a suspended state, pass in the start-mode=suspend argument. 11.29. Restart a Server Example: Restart a Server in a Managed Domain Note To restart the server in a suspended state, pass in the start-mode=suspend argument. 11.30. Save the Contents of an Attachment You can use the attachment save command to save the contents of an attachment returned from a management operation to a file. This applies to any management operation that returns the attached-streams response header. For example, the following operation returns the server.log file attached as a stream. You can use the attachment save command to save the contents of the stream returned from this operation to a file. This saves the contents of the server.log file to EAP_HOME /bin/log-output.txt . 11.31. Set the Command Timeout Value Example: Set the Maximum Time to Wait for a CLI Command to Complete The value is set in seconds. A value of 0 means no timeout. 11.32. Shut Down a Host Controller Example: Shut Down a Host Controller in a Managed Domain
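As a small sketch that combines the timeout and reload examples above, the following standalone-server session raises the CLI command timeout and then reloads the server into a suspended state; the timeout value of 120 seconds is an arbitrary placeholder.

command-timeout set 120
reload --start-mode=suspend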
11.33. Shut Down the Server Example: Shut Down a Standalone Server 11.34. Start All Servers in a Server Group Example: Start All Servers in a Certain Server Group in a Managed Domain Note To start the servers in a suspended state, pass in the start-mode=suspend argument. 11.35. Start a Server Example: Start a Server in a Managed Domain Note To start the server in a suspended state, pass in the start-mode=suspend argument. 11.36. Stop All Servers in a Server Group Example: Stop All Servers in a Certain Server Group in a Managed Domain 11.37. Stop a Server Example: Stop a Server in a Managed Domain 11.38. Take a Configuration Snapshot Example: Take a Snapshot of the Current Configurations 11.39. Undeploy All Applications Example: Undeploy All Applications from a Managed Domain Example: Undeploy All Applications from a Standalone Server 11.40. Undeploy an Application from a Managed Domain Example: Undeploy an Application from All Server Groups with That Deployment Example: Undeploy an Application from a Specific Server Group 11.41. Undeploy an Application from a Standalone Server 11.42. Update a Host Name Example: Update the Name of a Host in a Managed Domain The host must be reloaded in order for the changes to take effect. 11.43. Upload an Attachment You can upload a local file as an attachment to management operations that accept file streams. For example, the following management CLI command uses the input-stream-index option to upload the contents of a local file to an exploded deployment. For more details on uploading files to a deployment, see the Add Content to an Exploded Deployment section of the Configuration Guide . 11.44. View a Server Log 11.45. Assigning a generic type command to a specific node You can assign a generic type command to a specific node type by using the --node-type argument. You can use the generic type command to edit properties or call the operations of a specific node type in a management model. The --node-type argument is available both on standalone servers and on servers in a managed domain. After you call a command, the server configuration file is updated. Issue the help command in your terminal to view a description of the generic type command and any of its arguments. You can display a description of a command you created by issuing help [COMMAND_NAME] , where [COMMAND_NAME] is the name of your command. The procedure uses examples that demonstrate assigning a generic type command to a specific node type on a standalone server. You can also assign a generic type command to a specific node type in a managed domain. You must add a profile to each command. The following examples set default as the profile. Procedure Append the --node-type argument to the command command. The following example specifies /subsystem=datasources/data-source as the node type, with data-source specified as the generic type command : Identify the target child node of a specified node type by completing one of the following methods: Specify a read-only property of the child node as an identifying property. The following example calls the flush-all-connection-in-pool operation in the myds resource. This resource is identified by the jndi-name property. Specify a value for the child node with the --name argument. The following example calls the flush-all-connection-in-pool operation in the myds resource. This resource is identified by the --name property. Use the add argument to add a new resource. Added properties are prefixed with -- .
The following example shows new-ds as the new resource with --driver-name , --connection-url , and --pool-name properties defined for the resource: You can now edit writable properties by identifying a resource with the --jndi-name argument. The following example displays myds as the identified resource, with the writable properties of min-pool-size and max-pool-size modified. 11.46. Assigning a generic type command to a child node You can assign a generic type command to an existing child node by using the --node-child argument. You can use the generic type command to edit properties or call the operations of a child node in the management model. The --node-child argument is available both on standalone servers and on servers in a managed domain. After you call a command, the server configuration file is updated. The procedure uses examples that demonstrate assigning a generic type command to a child node on a standalone server. You can also assign a generic type command to a child node in a managed domain. You must add a profile to each command. The following examples set default as the profile. Note If you need to add the child node to the management model, you must specify the child node with the generic type command applied to a node type. Procedure Append the --node-child argument to the command command. The following example shows /core-service=management/access=authorization as the child node, with authorization specified as the generic type command : You can now write properties to an existing resource. The following example demonstrates writing properties to existing resources in the authorization child node. Additionally, you can send operations to the existing resource in the authorization child node. The following example sends the read-attribute operation to retrieve a value from the provider property.
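As a brief follow-on to the generic type command procedures above, and assuming the data-source command created earlier in this section, you can inspect the generated command and its arguments with the CLI help mechanism described at the start of section 11.45:

help data-source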
[ "data-source add --name= DATASOURCE_NAME --jndi-name= JNDI_NAME --driver-name= DRIVER_NAME --connection-url= CONNECTION_URL", "/extension= EXTENSION_NAME :add", "jms-queue add --queue-address= QUEUE_NAME --entries= JNDI_NAME", "jms-topic add --topic-address= TOPIC_NAME --entries= JNDI_NAME", "module add --name= MODULE_NAME --resources= PATH_TO_RESOURCE --dependencies= DEPENDENCIES", "/host= HOST_NAME /server-config= SERVER_NAME :add(group= SERVER_GROUP_NAME )", "/server-group= SERVER_GROUP_NAME :add(profile= PROFILE_NAME , socket-binding-group= SOCKET_BINDING_GROUP_NAME )", "/system-property= PROPERTY_NAME :add(value= PROPERTY_VALUE )", "/profile= PROFILE_TO_CLONE :clone(to-profile= NEW_PROFILE_NAME )", "/profile= NEW_PROFILE_NAME :add(includes=[ PROFILE_1 , PROFILE_2 ])", "deployment deploy-file /path/to /DEPLOYMENT.war --all-server-groups", "deployment deploy-file /path/to/DEPLOYMENT.war --server-groups= SERVER_GROUP_1 , SERVER_GROUP_2", "deployment deploy-file /path/to/DEPLOYMENT.war", "deployment disable /path/to/DEPLOYMENT.war", "deployment disable-all", ":whoami", "{ \"outcome\" => \"success\", \"result\" => {\"identity\" => { \"username\" => \"USDlocal\", \"realm\" => \"ManagementRealm\" }} }", "/subsystem=logging/log-file=server.log:read-attribute(name=stream) { \"outcome\" => \"success\", \"result\" => \"f61a27c4-c5a7-43ac-af1f-29e90c9acb3e\", \"response-headers\" => {\"attached-streams\" => [{ \"uuid\" => \"f61a27c4-c5a7-43ac-af1f-29e90c9acb3e\", \"mime-type\" => \"text/plain\" }]} }", "attachment display --operation=/subsystem=logging/log-file=server.log:read-attribute(name=stream)", "ATTACHMENT 3480a327-31dd-4412-bdf3-f36c94ac4a09: 2019-10-18 09:19:37,082 INFO [org.jboss.modules] (main) JBoss Modules version 1.8.6.Final-redhat-00001 2019-10-18 09:19:37,366 INFO [org.jboss.msc] (main) JBoss MSC version 1.4.5.Final-redhat-00001 2019-10-18 09:19:37,380 INFO [org.jboss.threads] (main) JBoss Threads version 2.3.2.Final-redhat-1 2019-10-18 09:19:37,510 INFO [org.jboss.as] (MSC service thread 1-1) WFLYSRV0049: JBoss EAP 7.4.0 (WildFly Core 10.0.0.Final-redhat-20190924) starting", ":read-operation-description(name=product-info)", "management-major-version=4 management-micro-version=0 management-minor-version=1", ":product-info", "{ \"outcome\" => \"success\", \"result\" => [{\"summary\" => { \"host-name\" => \"__HOST_NAME__\", \"instance-identifier\" => \"__INSTANCE_ID__\", \"product-name\" => \"JBoss EAP\", \"product-version\" => \"EAP 7.4.0\", \"product-community-identifier\" => \"Product\", \"product-home\" => \"__EAP_HOME__\", \"standalone-or-domain-identifier\" => \"__OPERATING_MODE__\", \"host-operating-system\" => \"__OS_NAME__\", \"host-cpu\" => { \"host-cpu-arch\" => \"__CPU_ARCH__\", \"host-core-count\" => __CORE_COUNT__ }, \"jvm\" => { \"name\" => \"__JAVA_VM_NAME__\", \"java-version\" => \"__JAVA_VERSION__\", \"jvm-version\" => \"__JAVA_VM_VERSION__\", \"jvm-vendor\" => \"__JAVA_VM_VENDOR__\", \"java-home\" => \"__JAVA_HOME__\" } }}] }", "/host= HOST_NAME :product-info", "/host= HOST_NAME /server= SERVER_NAME :product-info", "deployment enable DEPLOYMENT.war", "deployment enable-all --server-groups=other-server-group", "command-timeout get", "reload --host= HOST_NAME", "reload --host= HOST_NAME --admin-only=true", "/server-group= SERVER_GROUP_NAME :reload-servers", "/host= HOST_NAME /server= SERVER_NAME :reload", "reload", "/extension= EXTENSION_NAME :remove", "module remove --name= MODULE_NAME", "command-timeout reset default", "command-timeout reset config", 
"/server-group= SERVER_GROUP_NAME :restart-servers", "/host= HOST_NAME /server= SERVER_NAME :restart", "/subsystem=logging/log-file=server.log:read-attribute(name=stream) { \"outcome\" => \"success\", \"result\" => \"f61a27c4-c5a7-43ac-af1f-29e90c9acb3e\", \"response-headers\" => {\"attached-streams\" => [{ \"uuid\" => \"f61a27c4-c5a7-43ac-af1f-29e90c9acb3e\", \"mime-type\" => \"text/plain\" }]} }", "attachment save --operation=/subsystem=logging/log-file=server.log:read-attribute(name=stream) --file=log-output.txt", "command-timeout set TIMEOUT_VALUE", "shutdown --host= HOST_NAME", "shutdown", "/server-group= SERVER_GROUP_NAME :start-servers", "/host= HOST_NAME /server= SERVER_NAME :start", "/server-group= SERVER_GROUP_NAME :stop-servers", "/host= HOST_NAME /server= SERVER_NAME :stop", ":take-snapshot", "deployment undeploy * --all-relevant-server-groups", "deployment undeploy *", "deployment undeploy DEPLOYMENT.war --all-relevant-server-groups", "deployment undeploy DEPLOYMENT.war --server-groups= SERVER_GROUP_NAME", "deployment undeploy DEPLOYMENT.war", "/host= EXISTING_HOST_NAME :write-attribute(name=name,value= NEW_HOST_NAME ) reload --host= EXISTING_HOST_NAME", "/deployment= DEPLOYMENT_NAME .war:add-content(content=[{target-path= /path/to/FILE_IN_DEPLOYMENT , input-stream-index= /path/to/LOCAL_FILE_TO_UPLOAD }]", "/subsystem=logging/log-file= SERVER_LOG_NAME :read-log-file", "[domain@localhost:9999 /] data-source --profile=default --jndi-name=myds --min-pool-size=11 --max-pool-size=22", "[standalone@localhost:9999 /] command add --node-type=/subsystem=datasources/data-source --command-name=data-source", "[standalone@localhost:9999 /] data-source flush-all-connection-in-pool --jndi-name=myds", "[standalone@localhost:9999 /] data-source flush-all-connection-in-pool --name=myds", "[standalone@localhost:9999 /] data-source add --jndi-name=my-new-ds --driver-name=h2 \\ --connection-url=db:url --pool-name=my-ds-pool", "[standalone@localhost:9999 /] data-source --jndi-name=myds --min-pool-size=11 --max-pool-size=22", "[domain@localhost:9999 /] authorization --profile=default --provider=rbac --permission-combination-policy=permissive --use-identity-roles=false", "[standalone@localhost:9999 /] command add --node-child=/core-service=management/access=authorization --command-name=authorization", "[standalone@localhost:9999 /] authorization --provider=rbac --permission-combination-policy=permissive --use-identity-roles=false", "[standalone@localhost:9999 /] authorization read-attribute --name=provider" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/management_cli_guide/how_to_cli
Chapter 124. KafkaUserQuotas schema reference
Chapter 124. KafkaUserQuotas schema reference Used in: KafkaUserSpec Full list of KafkaUserQuotas schema properties Configure clients to use quotas so that a user does not overload Kafka brokers. Example Kafka user quota configuration spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55 controllerMutationRate: 10 For more information about Kafka user quotas, refer to the Apache Kafka documentation . 124.1. KafkaUserQuotas schema properties Property Property type Description producerByteRate integer A quota on the maximum bytes per-second that each client group can publish to a broker before the clients in the group are throttled. Defined on a per-broker basis. consumerByteRate integer A quota on the maximum bytes per-second that each client group can fetch from a broker before the clients in the group are throttled. Defined on a per-broker basis. requestPercentage integer A quota on the maximum CPU utilization of each client group as a percentage of network and I/O threads. controllerMutationRate number A quota on the rate at which mutations are accepted for the create topics request, the create partitions request and the delete topics request. The rate is accumulated by the number of partitions created or deleted.
[ "spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55 controllerMutationRate: 10" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaUserQuotas-reference
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/monitoring_openshift_data_foundation/providing-feedback-on-red-hat-documentation_rhodf
Chapter 141. KafkaBridgeStatus schema reference
Chapter 141. KafkaBridgeStatus schema reference Used in: KafkaBridge Property Property type Description conditions Condition array List of status conditions. observedGeneration integer The generation of the CRD that was last reconciled by the operator. url string The URL at which external client applications can access the Kafka Bridge. replicas integer The current number of pods being used to provide this resource. labelSelector string Label selector for pods providing this resource.
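As a minimal, hypothetical sketch of how these status fields can be read from a cluster, assuming a KafkaBridge resource named my-bridge in a namespace named kafka (both names are placeholders, not values from this reference), a JSONPath query against the status might look like this:

oc get kafkabridge my-bridge -n kafka -o jsonpath='{.status.url}{"\n"}{.status.replicas}{"\n"}'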
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaBridgeStatus-reference
Chapter 60. Public key certificates in Identity Management
Chapter 60. Public key certificates in Identity Management X.509 public key certificates are used to authenticate users, hosts and services in Identity Management (IdM). In addition to authentication, X.509 certificates also enable digital signing and encryption to provide privacy, integrity and non-repudiation. A certificate contains the following information: The subject that the certificate authenticates. The issuer, that is the CA that has signed the certificate. The start and end date of the validity of the certificate. The valid uses of the certificate. The public key of the subject. A message encrypted by the public key can only be decrypted by a corresponding private key. While a certificate and the public key it includes can be made publicly available, the user, host or service must keep their private key secret. 60.1. Certificate authorities in IdM Certificate authorities operate in a hierarchy of trust. In an IdM environment with an internal Certificate Authority (CA), all the IdM hosts, users and services trust certificates that have been signed by the CA. Apart from this root CA, IdM supports sub-CAs to which the root CA has granted the ability to sign certificates in their turn. Frequently, the certificates that such sub-CAs are able to sign are certificates of a specific kind, for example VPN certificates. Finally, IdM supports using external CAs. The table below presents the specifics of using the individual types of CA in IdM. Table 60.1. Comparison of using integrated and external CAs in IdM Name of CA Description Use Useful links The ipa CA An integrated CA based on the Dogtag upstream project Integrated CAs can create, revoke, and issue certificates for users, hosts, and services. Using the ipa CA to request a new user certificate and exporting it to the client IdM sub-CAs An integrated CA that is subordinate to the ipa CA IdM sub-CAs are CAs to which the ipa CA has granted the ability to sign certificates. Frequently, these certificates are of a specific kind, for example VPN certificates. Restricting an application to trust only a subset of certificates External CAs An external CA is a CA other than the integrated IdM CA or its sub-CAs. Using IdM tools, you add certificates issued by these CAs to users, services, or hosts as well as remove them. Managing externally signed certificates for IdM users, hosts, and services From the certificate point of view, there is no difference between being signed by a self-signed IdM CA and being signed externally. The role of the CA includes the following purposes: It issues digital certificates. By signing a certificate, it certifies that the subject named in the certificate owns a public key. The subject can be a user, host or service. It can revoke certificates, and provides revocation status via Certificate Revocation Lists (CRLs) and Online Certificate Status Protocol (OCSP). Additional resources Planning your CA services 60.2. Comparison of certificates and Kerberos Certificates perform a similar function to that performed by Kerberos tickets. Kerberos is a computer network authentication protocol that works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner. The following table shows a comparison of Kerberos and X.509 certificates: Table 60.2. 
Comparison of certificates and Kerberos Characteristic Kerberos X.509 Authentication Yes Yes Privacy Optional Yes Integrity Optional Yes Type of cryptography involved Symmetrical Asymmetrical Default validity Short (1 day) Long (2 years) By default, Kerberos in Identity Management only ensures the identity of the communicating parties. 60.3. The pros and cons of using certificates to authenticate users in IdM The advantages of using certificates to authenticate users in IdM include the following points: A PIN that protects the private key on a smart card is typically less complex and easier to remember than a regular password. Depending on the device, a private key stored on a smart card cannot be exported. This provides additional security. Smart cards can make logout automatic: IdM can be configured to log out users when they remove the smart card from the reader. Stealing the private key requires actual physical access to a smart card, making smart cards secure against hacking attacks. Smart card authentication is an example of two-factor authentication: it requires both something you have (the card) and something you know (the PIN). Smart cards are more flexible than passwords because they provide the keys that can be used for other purposes, such as encrypting email. Using smart cards on shared machines that are IdM clients does not typically pose additional configuration problems for system administrators. In fact, smart card authentication is an ideal choice for shared machines. The disadvantages of using certificates to authenticate users in IdM include the following points: Users might lose or forget to bring their smart card or certificate and be effectively locked out. Mistyping a PIN multiple times might result in a card becoming locked. There is generally an intermediate step between request and authorization by some sort of security officer or approver. In IdM, the security officer or administrator must run the ipa cert-request command. Smart cards and readers tend to be vendor and driver specific: although a lot of readers can be used for different cards, a smart card of a specific vendor might not work in the reader of another vendor or in the type of a reader for which it was not designed. Certificates and smart cards have a steep learning curve for administrators.
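To make the intermediate request-and-approval step mentioned above concrete, the following is a hypothetical sketch of an administrator submitting a previously generated certificate signing request for a user with the ipa cert-request command; the CSR file name and principal are placeholders, not values from this chapter:

ipa cert-request user.csr --principal=idm_user@IDM.EXAMPLE.COM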
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/cert-intro_configuring-and-managing-idm
Chapter 1. IdM integration with Red Hat products
Chapter 1. IdM integration with Red Hat products Find documentation for other Red Hat products that integrate with IdM. You can configure these products to allow your IdM users to access their services. Ansible Automation Platform OpenShift Container Platform Red Hat OpenStack Platform Red Hat Satellite Red Hat Single Sign-On Red Hat Virtualization
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_external_red_hat_utilities_with_identity_management/ref_idm-integration-with-other-red-hat-products_using-external-red-hat-utilities-with-idm
Chapter 5. Setting up your development environment
Chapter 5. Setting up your development environment You can follow the procedures in this section to set up your development environment to create automation execution environments. 5.1. Installing Ansible Builder Prerequisites You have installed the Podman container runtime. You have valid subscriptions attached on the host. Doing so allows you to access the subscription-only resources needed to install ansible-builder , and ensures that the necessary repository for ansible-builder is automatically enabled. See Attaching your Red Hat Ansible Automation Platform subscription for more information. Procedure In your terminal, run the following command to activate your Ansible Automation Platform repo: # dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-builder 5.2. Installing automation content navigator on RHEL from an RPM You can install automation content navigator on Red Hat Enterprise Linux (RHEL) from an RPM. Prerequisites You have installed Python 3.10 or later. You have installed RHEL 8.6 or later. You registered your system with Red Hat Subscription Manager. Note Ensure that you only install the navigator matching your current Red Hat Ansible Automation Platform environment. Procedure Attach the Red Hat Ansible Automation Platform SKU: $ subscription-manager attach --pool=<sku-pool-id> Install automation content navigator with the following command: v.2.4 for RHEL 8 for x86_64 $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-navigator v.2.4 for RHEL 9 for x86-64 $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-navigator Verification Verify your automation content navigator installation: $ ansible-navigator --help The following example demonstrates a successful installation: 5.3. Downloading base automation execution environments Base images that ship with Ansible Automation Platform 2.0 are hosted on the Red Hat Ecosystem Catalog (registry.redhat.io). Prerequisites You have a valid Red Hat Ansible Automation Platform subscription. Procedure Log in to registry.redhat.io $ podman login registry.redhat.io Pull the base images from the registry $ podman pull registry.redhat.io/aap/<image name>
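Once ansible-builder and a base execution environment image are available, a typical next step is to build a custom execution environment. The following is a hedged sketch rather than a command from this chapter; the definition file name and image tag are placeholders:

$ ansible-builder build -f execution-environment.yml -t my_custom_ee:latest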
[ "dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-builder", "subscription-manager attach --pool=<sku-pool-id>", "sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-navigator", "sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-navigator", "ansible-navigator --help", "podman login registry.redhat.io", "podman pull registry.redhat.io/aap/<image name>" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_creator_guide/setting-up-dev-environment
Chapter 12. PCI Device Assignment
Chapter 12. PCI Device Assignment Red Hat Enterprise Linux 6 exposes three classes of device to its virtual machines: Emulated devices are purely virtual devices that mimic real hardware, allowing unmodified guest operating systems to work with them using their standard in-box drivers. Virtio devices are purely virtual devices designed to work optimally in a virtual machine. Virtio devices are similar to emulated devices, however, non-Linux virtual machines do not include the drivers they require by default. Virtualization management software like the Virtual Machine Manager ( virt-manager ) and the Red Hat Enterprise Virtualization Hypervisor install these drivers automatically for supported non-Linux guest operating systems. Assigned devices are physical devices that are exposed to the virtual machine. This method is also known as 'passthrough'. Device assignment allows virtual machines exclusive access to PCI devices for a range of tasks, and allows PCI devices to appear and behave as if they were physically attached to the guest operating system. Device assignment is supported on PCI Express devices, except graphics cards. Parallel PCI devices may be supported as assigned devices, but they have severe limitations due to security and system configuration conflicts. Red Hat Enterprise Linux 6 supports 32 PCI device slots per virtual machine, and 8 PCI functions per device slot. This gives a theoretical maximum of 256 configurable PCI functions per guest. However, this theoretical maximum is subject to the following limitations: Each virtual machine supports a maximum of 8 assigned device functions. 4 PCI device slots are configured with 5 emulated devices (two devices are in slot 1) by default. However, users can explicitly remove 2 of the emulated devices that are configured by default if the guest operating system does not require them for operation (the video adapter device in slot 2; and the memory balloon driver device in the lowest available slot, usually slot 3). This gives users a supported functional maximum of 30 PCI device slots per virtual machine. Red Hat Enterprise Linux 6.0 and newer supports hot plugging assigned PCI devices into virtual machines. However, PCI device hot plugging operates at the slot level and therefore does not support multi-function PCI devices. Multi-function PCI devices are recommended for static device configuration only. Note Red Hat Enterprise Linux 6.0 limited guest operating system driver access to a device's standard and extended configuration space. Limitations that were present in Red Hat Enterprise Linux 6.0 were significantly reduced in Red Hat Enterprise Linux 6.1, and enable a much larger set of PCI Express devices to be successfully assigned to KVM guests. Secure device assignment also requires interrupt remapping support. If a platform does not support interrupt remapping, device assignment will fail. To use device assignment without interrupt remapping support in a development environment, set the allow_unsafe_assigned_interrupts KVM module parameter to 1 . PCI device assignment is only available on hardware platforms supporting either Intel VT-d or AMD IOMMU. These Intel VT-d or AMD IOMMU specifications must be enabled in BIOS for PCI device assignment to function. Procedure 12.1. Preparing an Intel system for PCI device assignment Enable the Intel VT-d specifications The Intel VT-d specifications provide hardware support for directly assigning a physical device to a virtual machine. 
These specifications are required to use PCI device assignment with Red Hat Enterprise Linux. The Intel VT-d specifications must be enabled in the BIOS. Some system manufacturers disable these specifications by default. The terms used to refer to these specifications can differ between manufacturers; consult your system manufacturer's documentation for the appropriate terms. Activate Intel VT-d in the kernel Activate Intel VT-d in the kernel by adding the intel_iommu=on parameter to the kernel line in the /boot/grub/grub.conf file. The example below is a modified grub.conf file with Intel VT-d activated. Ready to use Reboot the system to enable the changes. Your system is now capable of PCI device assignment. Procedure 12.2. Preparing an AMD system for PCI device assignment Enable the AMD IOMMU specifications The AMD IOMMU specifications are required to use PCI device assignment in Red Hat Enterprise Linux. These specifications must be enabled in the BIOS. Some system manufacturers disable these specifications by default. Enable IOMMU kernel support Append amd_iommu=on to the kernel command line in /boot/grub/grub.conf so that AMD IOMMU specifications are enabled at boot. Ready to use Reboot the system to enable the changes. Your system is now capable of PCI device assignment. 12.1. Assigning a PCI Device with virsh These steps cover assigning a PCI device to a virtual machine on a KVM hypervisor. This example uses a PCIe network controller with the PCI identifier code, pci_0000_01_00_0 , and a fully virtualized guest machine named guest1-rhel6-64 . Procedure 12.3. Assigning a PCI device to a guest virtual machine with virsh Identify the device First, identify the PCI device designated for device assignment to the virtual machine. Use the lspci command to list the available PCI devices. You can refine the output of lspci with grep . This example uses the Ethernet controller highlighted in the following output: This Ethernet controller is shown with the short identifier 00:19.0 . We need to find out the full identifier used by virsh in order to assign this PCI device to a virtual machine. To do so, combine the virsh nodedev-list command with the grep command to list all devices of a particular type ( pci ) that are attached to the host machine. Then look at the output for the string that maps to the short identifier of the device you wish to use. This example highlights the string that maps to the Ethernet controller with the short identifier 00:19.0 . Note that the : and . characters are replaced with underscores in the full identifier. Record the PCI device number that maps to the device you want to use; this is required in other steps. Review device information Information on the domain, bus, and function are available from output of the virsh nodedev-dumpxml command: Determine required configuration details Refer to the output from the virsh nodedev-dumpxml pci_0000_00_19_0 command for the values required for the configuration file. Optionally, convert slot and function values to hexadecimal values (from decimal) to get the PCI bus addresses. Append "0x" to the beginning of the output to tell the computer that the value is a hexadecimal number. The example device has the following values: bus = 0, slot = 25 and function = 0. 
The decimal configuration uses those three values: If you want to convert to hexadecimal values, you can use the printf utility to convert from decimal values, as shown in the following example: The example device would use the following hexadecimal values in the configuration file: Add configuration details Run virsh edit , specifying the virtual machine name, and add a device entry in the <source> section to assign the PCI device to the guest virtual machine. Alternately, run virsh attach-device , specifying the virtual machine name and the guest's XML file: Allow device management Set an SELinux boolean to allow the management of the PCI device from the virtual machine: Start the virtual machine The PCI device should now be successfully assigned to the virtual machine, and accessible to the guest operating system.
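As an optional verification step that is not part of the procedure above, you can confirm from within the guest operating system that the assigned device is visible; for example, list the PCI devices in the guest and filter for the Ethernet controller used in this example:

lspci | grep Ethernet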
[ "default=0 timeout=5 splashimage=(hd0,0)/grub/splash.xpm.gz hiddenmenu title Red Hat Enterprise Linux Server (2.6.32-330.x86_645) root (hd0,0) kernel /vmlinuz-2.6.32-330.x86_64 ro root=/dev/VolGroup00/LogVol00 rhgb quiet intel_iommu=on initrd /initrd-2.6.32-330.x86_64.img", "lspci | grep Ethernet 00:19.0 Ethernet controller: Intel Corporation 82567LM-2 Gigabit Network Connection 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)", "virsh nodedev-list --cap pci pci_0000_00_00_0 pci_0000_00_01_0 pci_0000_00_03_0 pci_0000_00_07_0 pci_0000_00_10_0 pci_0000_00_10_1 pci_0000_00_14_0 pci_0000_00_14_1 pci_0000_00_14_2 pci_0000_00_14_3 pci_0000_ 00_19_0 pci_0000_00_1a_0 pci_0000_00_1a_1 pci_0000_00_1a_2 pci_0000_00_1a_7 pci_0000_00_1b_0 pci_0000_00_1c_0 pci_0000_00_1c_1 pci_0000_00_1c_4 pci_0000_00_1d_0 pci_0000_00_1d_1 pci_0000_00_1d_2 pci_0000_00_1d_7 pci_0000_00_1e_0 pci_0000_00_1f_0 pci_0000_00_1f_2 pci_0000_00_1f_3 pci_0000_01_00_0 pci_0000_01_00_1 pci_0000_02_00_0 pci_0000_02_00_1 pci_0000_06_00_0 pci_0000_07_02_0 pci_0000_07_03_0", "virsh nodedev-dumpxml pci_0000_00_19_0 <device> <name>pci_0000_00_19_0</name> <parent>computer</parent> <driver> <name>e1000e</name> </driver> <capability type='pci'> <domain>0</domain> <bus>0</bus> <slot>25</slot> <function>0</function> <product id='0x1502'>82579LM Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <capability type='virt_functions'> </capability> </capability> </device>", "bus='0' slot='25' function='0'", "printf %x 0 0 printf %x 25 19 printf %x 0 0", "bus='0x0' slot='0x19' function='0x0'", "virsh edit guest1-rhel6-64 <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address domain='0x0' bus='0x0' slot='0x19' function='0x0'/> </source> </hostdev>", "virsh attach-device guest1-rhel6-64 file.xml", "setsebool -P virt_use_sysfs 1", "virsh start guest1-rhel6-64" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-Virtualization_Host_Configuration_and_Guest_Installation_Guide-PCI_Device_Config
Chapter 4. Converting using the command-line
Chapter 4. Converting using the command-line You can perform the conversion from Alma Linux, CentOS Linux, Oracle Linux, or Rocky Linux to RHEL by using the command-line interface. 4.1. Preparing for a RHEL conversion This procedure describes the steps that are necessary before performing the conversion from Alma Linux, CentOS Linux, Oracle Linux, or Rocky Linux to Red Hat Enterprise Linux (RHEL). Prerequisites You have verified that your system is supported for conversion to RHEL. See Supported conversion paths for more information. You have stopped important applications, database services, and any other services that store data to reduce the risk of data integrity issues. You have temporarily disabled antivirus software to prevent the conversion from failing. You have disabled or adequately reconfigured any configuration management system, such as Salt, Chef, Puppet, Ansible, to not attempt to restore the original system. The sos package is installed. You must use this package to generate an sosreport that is required when opening a support case for the Red Hat Support team. You have created an activation key in Satellite or RHSM. For more information, see Managing activation keys in Satellite documentation and Getting started with activation keys on the Hybrid Cloud Console in RHSM documentation. You have enabled Simple Content Access (SCA). Red Hat accounts created after July 15, 2022 have SCA enabled by default. Procedure Back up your system and verify that it can be restored if needed. Check Known issues and limitations and verify that your system is supported for conversion. Apply workarounds where applicable. If converting from CentOS Linux 8, remove any CentOS Stream packages from your system. CentOS Stream is not currently supported for conversion, and the conversion might fail if any packages are present on the system. If you are converting with a firewall, using Red Hat Satellite, or through a proxy server, ensure that you have access to the following connections: https://cdn.redhat.com https://cdn-public.redhat.com https://subscription.rhsm.redhat.com - required only for systems with firewalls https://*.akamaiedge.net - required only for systems with firewalls https://cert.console.redhat.com If converting from CentOS Linux, update the CentOS repository URLs: Important CentOS Linux 7 and CentOS Linux 8 have reached end of life. For more information, see CentOS Linux EOL . Install Convert2RHEL : Download the Red Hat GPG key: Install the Convert2RHEL repository file. For conversions to RHEL 7, enter the following command: For conversions to RHEL 8, enter the following command: For conversions to RHEL 9, enter the following command: Note You must perform the conversion with the latest version of the Convert2RHEL repository file. If you had previously installed an earlier version of the repository file, remove the earlier version and install the current version. Install the Convert2RHEL utility: Ensure you have access to RHEL packages through one of the following methods: Red Hat Content Delivery Network (CDN) through Red Hat Subscription Manager (RHSM). You must have a Red Hat account and an appropriate RHEL subscription to access RHSM. Note that the OS will be converted to the corresponding minor version of RHEL per Table 1.1. Red Hat Satellite in a version that has Full or Maintenance support. For more information, see Red Hat Satellite Product Life Cycle . 
Note Ensure that the Satellite server meets the following conditions: Satellite has a subscription manifest with RHEL repositories imported. For more information, see the Managing Red Hat Subscriptions chapter in the Managing Content guide for the particular version of Red Hat Satellite , for example, for version 6.14 . All required repositories are enabled and synchronized with the latest target OS updates and published on Satellite. Enable at minimum the following repositories for the appropriate major version of the OS: Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server Red Hat Enterprise Linux 8 for x86_64 - AppStream RPMs < target_os > Red Hat Enterprise Linux 8 for x86_64 - BaseOS RPMs < target_os > Replace target_os with 8.5 for CentOS Linux conversions and 8.9 , 8.8 , or 8.6 for Alma Linux, Oracle Linux, or Rocky Linux conversions. Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) Custom repositories configured in the /etc/yum.repos.d/ directory and pointing to a mirror of the target OS repositories. Use custom repositories for systems that have access to only local networks or portable media and therefore cannot access Red Hat CDN through RHSM. Make sure that the repositories contain the latest content available for that RHEL minor version to prevent downgrading and potential conversion failures. For more information, see Creating a Local Repository and Sharing With Disconnected/Offline/Air-gapped Systems . Note RHEL 8 and RHEL 9 content is distributed through two default repositories, BaseOS and AppStream. If you are accessing RHEL packages through custom repositories, you must configure both default repositories for a successful conversion. When running the Convert2RHEL utility, make sure to enable both repositories using the --enablerepo option. For more information about these repositories, see Considerations in adopting RHEL 8 and Considerations in adopting RHEL 9 . If you are accessing RHEL packages through a Red Hat Satellite server, register your system with Red Hat Satellite. For more information, see Registering Hosts and Setting Up Host Integration . If you are converting by using RHSM and have not yet registered the system, update the /etc/convert2rhel.ini file to include the following data: Replace organization_id and activation_key with the organization ID and activation key from the Red Hat Customer Portal if you are using Red Hat CDN. Temporarily disable antivirus software to prevent the conversion from failing. If you are accessing RHEL packages by using custom repositories, disable these repositories. The Convert2RHEL utility enables the custom repositories during the conversion process. Update the original OS to the minor version supported for conversion as specified in Table 1.1 and then reboot the system. You must perform the conversion with the latest packages from the minor version of the OS that is supported for conversion to use the rollback feature in case the conversion fails. For more information, see Conversion rollback . 4.2. Reviewing the pre-conversion analysis report To assess whether your systems can be converted to RHEL, run the RHEL pre-conversion analysis. The pre-conversion analysis generates a report that summarizes potential problems and suggests recommended solutions. The report also helps you decide whether it is possible or advisable to proceed with the conversion to RHEL. 
Always review the entire pre-conversion analysis report, even when the report finds no inhibitors to the conversion. The pre-conversion analysis report contains recommended actions to complete before the conversion to ensure that the converted RHEL system functions correctly. Important The pre-conversion analysis report cannot identify all inhibiting problems with your system. As a result, issues might still occur during the conversion even after you have reviewed and remediated all problems in the report. Prerequisites You have completed the steps listed in Preparing for a RHEL conversion . Procedure On your Alma Linux, CentOS Linux, Oracle Linux, or Rocky Linux system, run the pre-conversion analysis: If you are converting to RHEL 8.8 and have an Extended Upgrade Support (EUS) , add the --eus option. This option ensures that your system receives important security updates delivered to EUS repositories only. If you are converting to RHEL 7 and have an Extended Life Cycle Support (ELS) add-on, add the --els option. It is recommended to purchase an ELS add-on if you plan to stay on RHEL 7 to continue receiving support. The pre-conversion analysis runs a series of tests to determine whether your system can be converted to RHEL. After the analysis is complete, review the status and details of each completed test in the pre-conversion report in the terminal. Non-successful tests contain a description of the issue, a diagnosis of the possible cause of the issue, and, if applicable, a recommended remediation. Each test results in one of the following statuses: Success - The test was successful and there are no issues for this component. Error - The test encountered an issue that would cause the conversion to fail because it is very likely to result in a deteriorated system state. This issue must be resolved before converting. Overridable - The test encountered an issue that would cause the conversion to fail because it is very likely to result in a deteriorated system state. This issue must be either resolved or manually overridden before converting. Warning - The test encountered an issue that might cause system and application issues after the conversion. However, this issue would not cause the conversion to fail. Skip - Could not run this test because of a prerequisite test failing. Could cause the conversion to fail. Info - Informational with no expected impact to the system or applications. For example: After reviewing the report and resolving all reported issues, repeat steps 1-2 to rerun the analysis and confirm that there are no issues outstanding. 4.3. Converting to a RHEL system This procedure describes the steps necessary to convert your system from Alma Linux, CentOS Linux, Oracle Linux, or Rocky Linux to Red Hat Enterprise Linux (RHEL). Procedure Start the Convert2RHEL utility: To display all available options, use the --help ( -h ) option. If you are converting by using custom repositories instead of RHSM, add the --no-rhsm and the --enablerepo <RHEL_RepoID1> --enablerepo <RHEL_RepoID2> options. Replace RHEL_RepoID with your custom repository configured in the /etc/yum.repos.d/ directory, for example, rhel-7-server-rpms or rhel-8-baseos and rhel-8-appstream . If you are converting to RHEL 7, you can manually enable RHEL 7 Extras or Optional repositories by using the --enablerepo option to replace additional packages with their RHEL counterparts. Note that packages in the Optional repository are unsupported. 
For more information, see Support policy of the optional and supplementary channels in Red Hat Enterprise Linux . If you are converting to RHEL 8.8 and have an Extended Upgrade Support (EUS) , add the --eus option. This option ensures that your system receives important security updates delivered to EUS repositories only. If you are converting to RHEL 7 and have an Extended Life Cycle Support (ELS) add-on, add the --els option. It is recommended to purchase an ELS add-on if you plan to stay on RHEL 7 to continue receiving support. Before Convert2RHEL starts replacing packages from the original distribution with RHEL packages, the following warning message is displayed: Changes made by Convert2RHEL up to this point can be automatically reverted. Confirm that you wish to proceed with the conversion process. Wait until Convert2RHEL installs the RHEL packages and finishes successfully. Recommended: If you used custom repositories for the conversion, register and subscribe your RHEL system. For more information, see How to register and subscribe a system offline to the Red Hat Customer Portal? . At this point, the system still runs with the original distribution kernel loaded in RAM. Reboot the system to boot the newly installed RHEL kernel. Optional: Remove any remaining Convert2RHEL packages, files, and repositories: Remove the Convert2RHEL package: Remove Convert2RHEL files and repositories: Review the list of the third-party packages and remove unnecessary packages from the original OS that remained unchanged. These are typically packages that do not have a RHEL counterpart. To get a list of these packages, use: Replace RHEL_RepoID with your repository. If you have converted a system in Amazon Web Services (AWS) or Microsoft Azure with the Red Hat Enterprise Linux for Third Party Linux Migration with ELS offering, enable host metering on the system. For more information, see Enabling metering for Red Hat Enterprise Linux with Extended Lifecycle Support in your cloud environment . Optional: If you converted to RHEL 7 or RHEL 8, perform an in-place upgrade to RHEL 9 to ensure your system is updated with the latest enhancements, security features, and bug fixes. For more information, see the Upgrading from RHEL 7 to RHEL 8 and Upgrading from RHEL 8 to RHEL 9 guides. Note that if you have converted to RHEL 7, you must first perform the in-place upgrade from RHEL 7 to RHEL 8, and then from RHEL 8 to RHEL 9. Verification Verify that your system operates as expected. If necessary, reconfigure system services after the conversion and fix dependency errors. For more information, see Fixing dependency errors .
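For orientation, the following is a hedged sketch of the analysis followed by a conversion that uses custom repositories instead of RHSM; the repository IDs are the illustrative ones mentioned above and must match repositories you have configured in the /etc/yum.repos.d/ directory:

convert2rhel analyze
convert2rhel --no-rhsm --enablerepo rhel-8-baseos --enablerepo rhel-8-appstream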
[ "sed -i 's/^mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-* sed -i 's|#baseurl=http://mirror.centos.org|baseurl=https://vault.centos.org|g' /etc/yum.repos.d/CentOS-*", "curl -o /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release https://security.access.redhat.com/data/fd431d51.txt", "curl -o /etc/yum.repos.d/convert2rhel.repo https://cdn-public.redhat.com/content/public/repofiles/convert2rhel-for-rhel-7-x86_64.repo", "curl -o /etc/yum.repos.d/convert2rhel.repo https://cdn-public.redhat.com/content/public/repofiles/convert2rhel-for-rhel-8-x86_64.repo", "curl -o /etc/yum.repos.d/convert2rhel.repo https://cdn-public.redhat.com/content/public/repofiles/convert2rhel-for-rhel-9-x86_64.repo", "yum -y install convert2rhel", "[subscription_manager] org = <organization_ID> activation_key = <activation_key>", "convert2rhel analyze", "========== Warning (Review and fix if needed) ========== (WARNING) PACKAGE_UPDATES::PACKAGE_NOT_UP_TO_DATE_MESSAGE - Outdated packages detected Description: Please refer to the diagnosis for further information Diagnosis: The system has 4 package(s) not updated based on the enabled system repositories. List of packages to update: openssh-server openssh openssh-clients. Not updating the packages may cause the conversion to fail. Consider updating the packages before proceeding with the conversion. Remediation: [No further information given]", "convert2rhel", "The tool allows rollback of any action until this point. By continuing, all further changes on the system will need to be reverted manually by the user, if necessary.", "reboot", "yum remove -y convert2rhel", "rm -f /etc/convert2rhel.ini.rpmsave rm -f /etc/yum.repos.d/convert2rhel.repo", "yum list extras --disablerepo=\"*\" --enablerepo= <RHEL_RepoID>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/converting_from_a_linux_distribution_to_rhel_using_the_convert2rhel_utility/converting-using-the-command-line_converting-from-a-linux-distribution-to-rhel
8.7. Configuring Snapshot Behavior
8.7. Configuring Snapshot Behavior The configurable parameters for snapshot are: snap-max-hard-limit : If the snapshot count in a volume reaches this limit then no further snapshot creation is allowed. The range is from 1 to 256. Once this limit is reached, you have to remove snapshots to create further snapshots. This limit can be set for the system or per volume. If both the system limit and the volume limit are configured, then the effective maximum limit is the lower of the two values. snap-max-soft-limit : This is a percentage value. The default value is 90%. This configuration works along with the auto-delete feature. If auto-delete is enabled, the oldest snapshot is deleted when the snapshot count in a volume crosses this limit. When auto-delete is disabled, no snapshot is deleted, but a warning message is displayed to the user. auto-delete : This enables or disables the auto-delete feature. By default auto-delete is disabled. When enabled, it deletes the oldest snapshot when the snapshot count in a volume crosses the snap-max-soft-limit. When disabled, it does not delete any snapshot, but it displays a warning message to the user. activate-on-create : Snapshots are not activated at creation time by default. If you want created snapshots to immediately be activated after creation, set the activate-on-create parameter to enabled . Note that all volumes are affected by this setting. Displaying the Configuration Values To display the existing configuration values for a volume or the entire cluster, run the following command: where: VOLNAME : This is an optional field. The name of the volume for which the configuration values are to be displayed. If the volume name is not provided, the configuration values of all volumes are displayed. System configuration details are displayed irrespective of whether the volume name is specified or not. For Example: Changing the Configuration Values To change the existing configuration values, run the following command: where: VOLNAME : This is an optional field. The name of the volume for which the configuration values are to be changed. If the volume name is not provided, then running the command will set or change the system limit. snap-max-hard-limit : Maximum hard limit for the system or the specified volume. snap-max-soft-limit : Soft limit mark for the system. auto-delete : This enables or disables the auto-delete feature. By default auto-delete is disabled. activate-on-create : This enables or disables the activate-on-create feature for all volumes. By default activate-on-create is disabled. For Example:
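As a short illustrative sketch based on the syntax above, the following commands enable the auto-delete and activate-on-create features at the system level; they are derived from the command syntax in this section rather than copied from it:

gluster snapshot config auto-delete enable
gluster snapshot config activate-on-create enable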
[ "gluster snapshot config [ VOLNAME ]", "gluster snapshot config Snapshot System Configuration: snap-max-hard-limit : 256 snap-max-soft-limit : 90% auto-delete : disable activate-on-create : disable Snapshot Volume Configuration: Volume : test_vol snap-max-hard-limit : 256 Effective snap-max-hard-limit : 256 Effective snap-max-soft-limit : 230 (90%) Volume : test_vol1 snap-max-hard-limit : 256 Effective snap-max-hard-limit : 256 Effective snap-max-soft-limit : 230 (90%)", "gluster snapshot config [ VOLNAME ] ([ snap-max-hard-limit <count>] [ snap-max-soft-limit <percent>]) | ([ auto-delete <enable|disable>]) | ([activate-on-create <enable|disable>])", "gluster snapshot config test_vol snap-max-hard-limit 100 Changing snapshot-max-hard-limit will lead to deletion of snapshots if they exceed the new limit. Do you want to continue? (y/n) y snapshot config: snap-max-hard-limit for test_vol set successfully" ]
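In addition to the examples above, the auto-delete and activate-on-create options described in this section can be toggled with the same gluster snapshot config syntax. A minimal sketch of the cluster-wide settings (confirmation prompts may vary by Red Hat Gluster Storage version):
gluster snapshot config auto-delete enable
gluster snapshot config activate-on-create enable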
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/ch08s07
Chapter 10. Configuring TLS security profiles
Chapter 10. Configuring TLS security profiles TLS security profiles provide a way for servers to regulate which ciphers a client can use when connecting to the server. This ensures that OpenShift Container Platform components use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms. Cluster administrators can choose which TLS security profile to use for each of the following components: the Ingress Controller the control plane This includes the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, OpenShift API server, OpenShift OAuth API server, OpenShift OAuth server, and etcd. the kubelet, when it acts as an HTTP server for the Kubernetes API server 10.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 10.1. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 10.2. Viewing TLS security profile details You can view the minimum TLS version and ciphers for the predefined TLS security profiles for each of the following components: Ingress Controller, control plane, and kubelet. Important The effective configuration of minimum TLS version and list of ciphers for a profile might differ between components. Procedure View details for a specific TLS security profile: USD oc explain <component>.spec.tlsSecurityProfile.<profile> 1 1 For <component> , specify ingresscontroller , apiserver , or kubeletconfig . For <profile> , specify old , intermediate , or custom . 
For example, to check the ciphers included for the intermediate profile for the control plane: USD oc explain apiserver.spec.tlsSecurityProfile.intermediate Example output KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2 View all details for the tlsSecurityProfile field of a component: USD oc explain <component>.spec.tlsSecurityProfile 1 1 For <component> , specify ingresscontroller , apiserver , or kubeletconfig . For example, to check all details for the tlsSecurityProfile field for the Ingress Controller: USD oc explain ingresscontroller.spec.tlsSecurityProfile Example output KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: ... FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string> ... 1 Lists ciphers and minimum version for the intermediate profile here. 2 Lists ciphers and minimum version for the modern profile here. 3 Lists ciphers and minimum version for the old profile here. 10.3. Configuring the TLS security profile for the Ingress Controller To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server. Sample IngressController CR that configures the Old TLS security profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers. You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters. Note The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile. The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 . 
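If you prefer a non-interactive change over the editor-based procedure that follows, a merge patch such as the following would apply the same Old profile to the default Ingress Controller. This is a sketch rather than the documented procedure, so review it before running it against a cluster:
oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge -p '{"spec":{"tlsSecurityProfile":{"type":"Old","old":{}}}}'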
Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.tlsSecurityProfile field: Sample IngressController CR for a Custom profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 ... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the profile is set in the IngressController CR: USD oc describe IngressController default -n openshift-ingress-operator Example output Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController ... Spec: ... Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... 10.4. Configuring the TLS security profile for the control plane To configure a TLS security profile for the control plane, edit the APIServer custom resource (CR) to specify a predefined or custom TLS security profile. Setting the TLS security profile in the APIServer CR propagates the setting to the following control plane components: Kubernetes API server Kubernetes controller manager Kubernetes scheduler OpenShift API server OpenShift OAuth API server OpenShift OAuth server etcd If a TLS security profile is not configured, the default TLS security profile is Intermediate . Note The default TLS security profile for the Ingress Controller is based on the TLS security profile set for the API server. Sample APIServer CR that configures the Old TLS security profile apiVersion: config.openshift.io/v1 kind: APIServer ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers required to communicate with the control plane components. You can see the configured TLS security profile in the APIServer custom resource (CR) under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed. Note The control plane does not support TLS 1.3 as the minimum TLS version; the Modern profile is not supported because it requires TLS 1.3 . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the default APIServer CR to configure the TLS security profile: USD oc edit APIServer cluster Add the spec.tlsSecurityProfile field: Sample APIServer CR for a Custom profile apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 
2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the TLS security profile is set in the APIServer CR: USD oc describe apiserver cluster Example output Name: cluster Namespace: ... API Version: config.openshift.io/v1 Kind: APIServer ... Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... Verify that the TLS security profile is set in the etcd CR: USD oc describe etcd cluster Example output Name: cluster Namespace: ... API Version: operator.openshift.io/v1 Kind: Etcd ... Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12 ... 10.5. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and runs exec commands on pods through the kubelet. Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" #... You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes to which you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one.
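One way to watch the resulting rollout (a general-purpose check, not part of the documented procedure) is to monitor the affected machine config pool until the UPDATED column returns to True:
oc get machineconfigpool worker -w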
Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #...
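Beyond the per-component verification steps above, you can also read the configured profiles back with jsonpath queries. This is a sketch; the field names are inferred from the oc describe output shown earlier in this chapter:
oc get apiserver cluster -o jsonpath='{.spec.tlsSecurityProfile}'
oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.status.tlsProfile}'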
[ "oc explain <component>.spec.tlsSecurityProfile.<profile> 1", "oc explain apiserver.spec.tlsSecurityProfile.intermediate", "KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2", "oc explain <component>.spec.tlsSecurityProfile 1", "oc explain ingresscontroller.spec.tlsSecurityProfile", "KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 
3 type <string>", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old", "oc edit IngressController default -n openshift-ingress-operator", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe IngressController default -n openshift-ingress-operator", "Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "apiVersion: config.openshift.io/v1 kind: APIServer spec: tlsSecurityProfile: old: {} type: Old", "oc edit APIServer cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe apiserver cluster", "Name: cluster Namespace: API Version: config.openshift.io/v1 Kind: APIServer Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "oc describe etcd cluster", "Name: cluster Namespace: API Version: operator.openshift.io/v1 Kind: Etcd Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" #", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f <filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/security_and_compliance/tls-security-profiles
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/performance_tuning_for_red_hat_jboss_enterprise_application_platform/open-source-inclusivity-statement_performance-tuning-guide
2.6.2.2.2. Access Control
2.6.2.2.2. Access Control Option fields also allow administrators to explicitly allow or deny hosts in a single rule by adding the allow or deny directive as the final option. For example, the following two rules allow SSH connections from client-1.example.com , but deny connections from client-2.example.com : Because the option field permits access control on a per-rule basis, administrators can consolidate all access rules into a single file: either hosts.allow or hosts.deny . Some administrators consider this an easier way of organizing access rules.
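For example, a consolidated /etc/hosts.allow using the same rule syntax might grant access to one client and deny all other hosts with a final catch-all rule (a sketch; adapt the daemon and host names to your environment):
sshd : client-1.example.com : allow
sshd : ALL : deny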
[ "sshd : client-1.example.com : allow sshd : client-2.example.com : deny" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-option_fields-access_control
Chapter 2. Configuring an IBM Cloud account
Chapter 2. Configuring an IBM Cloud account Before you can install OpenShift Container Platform, you must configure an IBM Cloud(R) account. 2.1. Prerequisites You have an IBM Cloud(R) account with a subscription. You cannot install OpenShift Container Platform on a free or trial IBM Cloud(R) account. 2.2. Quotas and limits on IBM Cloud The OpenShift Container Platform cluster uses a number of IBM Cloud(R) components, and the default quotas and limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain regions, or run multiple clusters from your account, you might need to request additional resources for your IBM Cloud(R) account. For a comprehensive list of the default IBM Cloud(R) quotas and service limits, see IBM Cloud(R)'s documentation for Quotas and service limits . Virtual Private Cloud (VPC) Each OpenShift Container Platform cluster creates its own VPC. The default quota of VPCs per region is 10 and will allow 10 clusters. To have more than 10 clusters in a single region, you must increase this quota. Application load balancer By default, each cluster creates three application load balancers (ALBs): Internal load balancer for the master API server External load balancer for the master API server Load balancer for the router You can create additional LoadBalancer service objects to create additional ALBs. The default quota of VPC ALBs are 50 per region. To have more than 50 ALBs, you must increase this quota. VPC ALBs are supported. Classic ALBs are not supported for IBM Cloud(R). Floating IP address By default, the installation program distributes control plane and compute machines across all availability zones within a region to provision the cluster in a highly available configuration. In each availability zone, a public gateway is created and requires a separate floating IP address. The default quota for a floating IP address is 20 addresses per availability zone. The default cluster configuration yields three floating IP addresses: Two floating IP addresses in the us-east-1 primary zone. The IP address associated with the bootstrap node is removed after installation. One floating IP address in the us-east-2 secondary zone. One floating IP address in the us-east-3 secondary zone. IBM Cloud(R) can support up to 19 clusters per region in an account. If you plan to have more than 19 default clusters, you must increase this quota. Virtual Server Instances (VSI) By default, a cluster creates VSIs using bx2-4x16 profiles, which includes the following resources by default: 4 vCPUs 16 GB RAM The following nodes are created: One bx2-4x16 bootstrap machine, which is removed after the installation is complete Three bx2-4x16 control plane nodes Three bx2-4x16 compute nodes For more information, see IBM Cloud(R)'s documentation on supported profiles . Table 2.1. VSI component quotas and limits VSI component Default IBM Cloud(R) quota Default cluster configuration Maximum number of clusters vCPU 200 vCPUs per region 28 vCPUs, or 24 vCPUs after bootstrap removal 8 per region RAM 1600 GB per region 112 GB, or 96 GB after bootstrap removal 16 per region Storage 18 TB per region 1050 GB, or 900 GB after bootstrap removal 19 per region If you plan to exceed the resources stated in the table, you must increase your IBM Cloud(R) account quota. Block Storage Volumes For each VPC machine, a block storage device is attached for its boot volume. 
The default cluster configuration creates seven VPC machines, resulting in seven block storage volumes. Additional Kubernetes persistent volume claims (PVCs) of the IBM Cloud(R) storage class create additional block storage volumes. The default quota of VPC block storage volumes is 300 per region. To have more than 300 volumes, you must increase this quota. 2.3. Configuring DNS resolution How you configure DNS resolution depends on the type of OpenShift Container Platform cluster you are installing: If you are installing a public cluster, you use IBM Cloud Internet Services (CIS). If you are installing a private cluster, you use IBM Cloud(R) DNS Services (DNS Services). 2.3.1. Using IBM Cloud Internet Services for DNS resolution The installation program uses IBM Cloud(R) Internet Services (CIS) to configure cluster DNS resolution and provide name lookup for a public cluster. Note This offering does not support IPv6, so dual stack or IPv6 environments are not possible. You must create a domain zone in CIS in the same account as your cluster. You must also ensure the zone is authoritative for the domain. You can do this using a root domain or subdomain. Prerequisites You have installed the IBM Cloud(R) CLI . You have an existing domain and registrar. For more information, see the IBM(R) documentation . Procedure Create a CIS instance to use with your cluster: Install the CIS plugin: USD ibmcloud plugin install cis Create the CIS instance: USD ibmcloud cis instance-create <instance_name> standard-next 1 1 At a minimum, you require a Standard plan for CIS to manage the cluster subdomain and its DNS records. Note After you have configured your registrar or DNS provider, it can take up to 24 hours for the changes to take effect. Connect an existing domain to your CIS instance: Set the context instance for CIS: USD ibmcloud cis instance-set <instance_name> 1 1 The instance cloud resource name. Add the domain for CIS: USD ibmcloud cis domain-add <domain_name> 1 1 The fully qualified domain name. You can use either the root domain or subdomain value as the domain name, depending on which you plan to configure. Note A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Open the CIS web console , navigate to the Overview page, and note your CIS name servers. These name servers will be used in the next step. Configure the name servers for your domains or subdomains at the domain's registrar or DNS provider. For more information, see the IBM Cloud(R) documentation . 2.3.2. Using IBM Cloud DNS Services for DNS resolution The installation program uses IBM Cloud(R) DNS Services to configure cluster DNS resolution and provide name lookup for a private cluster. You configure DNS resolution by creating a DNS services instance for the cluster, and then adding a DNS zone to the DNS Services instance. Ensure that the zone is authoritative for the domain. You can do this using a root domain or subdomain. Note IBM Cloud(R) does not support IPv6, so dual stack or IPv6 environments are not possible. Prerequisites You have installed the IBM Cloud(R) CLI . You have an existing domain and registrar. For more information, see the IBM(R) documentation .
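Relating to the CIS name server step in section 2.3.1 above: once the registrar has been updated, you can confirm that delegation has propagated by querying the authoritative name servers for the domain and checking that the CIS name servers you noted are returned. This is a generic check with standard tooling, not part of the documented procedure:
dig +short NS <domain_name>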
Procedure Create a DNS Services instance to use with your cluster: Install the DNS Services plugin by running the following command: USD ibmcloud plugin install cloud-dns-services Create the DNS Services instance by running the following command: USD ibmcloud dns instance-create <instance-name> standard-dns 1 1 At a minimum, you require a Standard DNS plan for DNS Services to manage the cluster subdomain and its DNS records. Note After you have configured your registrar or DNS provider, it can take up to 24 hours for the changes to take effect. Create a DNS zone for the DNS Services instance: Set the target operating DNS Services instance by running the following command: USD ibmcloud dns instance-target <instance-name> Add the DNS zone to the DNS Services instance by running the following command: USD ibmcloud dns zone-create <zone-name> 1 1 The fully qualified zone name. You can use either the root domain or subdomain value as the zone name, depending on which you plan to configure. A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Record the name of the DNS zone you have created. As part of the installation process, you must update the install-config.yaml file before deploying the cluster. Use the name of the DNS zone as the value for the baseDomain parameter. Note You do not have to manage permitted networks or configure an "A" DNS resource record. As required, the installation program configures these resources automatically. 2.4. IBM Cloud IAM Policies and API Key To install OpenShift Container Platform into your IBM Cloud(R) account, the installation program requires an IAM API key, which provides authentication and authorization to access IBM Cloud(R) service APIs. You can use an existing IAM API key that contains the required policies or create a new one. For an IBM Cloud(R) IAM overview, see the IBM Cloud(R) documentation . 2.4.1. Required access policies You must assign the required access policies to your IBM Cloud(R) account. Table 2.2. Required access policies Service type Service Access policy scope Platform access Service access Account management IAM Identity Service All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Service ID creator Account management [2] Identity and Access Management All resources Editor, Operator, Viewer, Administrator Account management Resource group only All resource groups in the account Administrator IAM services Cloud Object Storage All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager, Content Reader, Object Reader, Object Writer IAM services Internet Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager IAM services DNS Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager IAM services VPC Infrastructure Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager The policy access scope should be set based on how granular you want to assign access. The scope can be set to All resources or Resources based on selected attributes . Optional: This access policy is only required if you want the installation program to create a resource group. For more information about resource groups, see the IBM(R) documentation . 2.4.2. 
Access policy assignment In IBM Cloud(R) IAM, access policies can be attached to different subjects: Access group (Recommended) Service ID User The recommended method is to define IAM access policies in an access group . This helps organize all the access required for OpenShift Container Platform and enables you to onboard users and service IDs to this group. You can also assign access to users and service IDs directly, if desired. 2.4.3. Creating an API key You must create a user API key or a service ID API key for your IBM Cloud(R) account. Prerequisites You have assigned the required access policies to your IBM Cloud(R) account. You have attached your IAM access policies to an access group, or other appropriate resource. Procedure Create an API key, depending on how you defined your IAM access policies. For example, if you assigned your access policies to a user, you must create a user API key . If you assigned your access policies to a service ID, you must create a service ID API key . If your access policies are assigned to an access group, you can use either API key type. For more information on IBM Cloud(R) API keys, see Understanding API keys . 2.5. Supported IBM Cloud regions You can deploy an OpenShift Container Platform cluster to the following regions: au-syd (Sydney, Australia) br-sao (Sao Paulo, Brazil) ca-tor (Toronto, Canada) eu-de (Frankfurt, Germany) eu-gb (London, United Kingdom) eu-es (Madrid, Spain) jp-osa (Osaka, Japan) jp-tok (Tokyo, Japan) us-east (Washington DC, United States) us-south (Dallas, United States) Note Deploying your cluster in the eu-es (Madrid, Spain) region is not supported for OpenShift Container Platform 4.14.6 and earlier versions. 2.6. Next steps Configuring IAM for IBM Cloud(R)
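To illustrate the service ID option described in section 2.4.3 above, the IBM Cloud(R) CLI can create the service ID and its API key. The names below are placeholders, and the exact flags can vary between CLI versions, so treat this as a sketch:
ibmcloud iam service-id-create ocp-install-id --description "Service ID for OpenShift Container Platform installation"
ibmcloud iam service-api-key-create ocp-install-key ocp-install-id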
[ "ibmcloud plugin install cis", "ibmcloud cis instance-create <instance_name> standard-next 1", "ibmcloud cis instance-set <instance_name> 1", "ibmcloud cis domain-add <domain_name> 1", "ibmcloud plugin install cloud-dns-services", "ibmcloud dns instance-create <instance-name> standard-dns 1", "ibmcloud dns instance-target <instance-name>", "ibmcloud dns zone-create <zone-name> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_cloud/installing-ibm-cloud-account
Chapter 4. Managing namespace buckets
Chapter 4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Red Hat OpenShift Data Foundation 4.6 onwards supports the following namespace bucket operations: ListObjectVersions ListObjects PutObject CopyObject ListParts CreateMultipartUpload CompleteMultipartUpload UploadPart UploadPartCopy AbortMultipartUpload GetObjectAcl GetObject HeadObject DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. 
A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . 
Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. 
This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage Data Foundation . Click the Namespace Store tab to create a namespacestore resources to be used in the namespace bucket. Click Create namespace store . Enter a namespacestore name. Choose a provider. Choose a region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Choose a target bucket. Click Create . Verify that the namespacestore is in the Ready state. Repeat these steps until you have the desired amount of resources. Click the Bucket Class tab Create a new Bucket Class . Select the Namespace radio button. Enter a Bucket Class name. (Optional) Add description. Click . Choose a namespace policy type for your namespace bucket, and then click . Select the target resources. If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Click . Review your new bucket class, and then click Create Bucketclass . On the BucketClass page, verify that your newly created resource is in the Created phase. In the OpenShift Web Console, click Storage Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click Multicloud Object Gateway Buckets Namespace Buckets tab . Click Create Namespace Bucket . On the Choose Name tab, specify a name for the namespace bucket and click . 
On the Set Placement tab: Under Read Policy , select the checkbox for each namespace resource created in the earlier step that the namespace bucket should read data from. If the namespace policy type you are using is Multi , then Under Write Policy , specify which namespace resource the namespace bucket should write data to. Click . Click Create . Verification steps Verify that the namespace bucket is listed with a green check mark in the State column, the expected number of read resources, and the expected write resource name. 4.4. Sharing legacy application data with cloud native application using S3 protocol Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using the S3 operations. To share data you need to do the following: Export the pre-existing file system datasets, that is, RWX volume such as Ceph FileSystem (CephFS) or create a new file system datasets using the S3 protocol. Access file system datasets from both file system and S3 protocol. Configure S3 accounts and map them to the existing or a new file system unique identifiers (UIDs) and group identifiers (GIDs). 4.4.1. Creating a NamespaceStore to use a file system Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage Data Foundation . Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket. Click Create namespacestore . Enter a name for the NamespaceStore. Choose Filesystem as the provider. Choose the Persistent volume claim. Enter a folder name. If the folder name exists, then that folder is used to create the NamespaceStore or else a folder with that name is created. Click Create . Verify the NamespaceStore is in the Ready state. 4.4.2. Creating accounts with NamespaceStore filesystem configuration You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML. Note You cannot remove a NamespaceStore filesystem configuration from an account. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Procedure Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface. For example: allow_bucket_create Indicates whether the account is allowed to create new buckets. Supported values are true or false . Default value is true . default_resource The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC). new_buckets_path The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes. nsfs_account_config A mandatory field that indicates if the account is used for NamespaceStore filesystem. nsfs_only Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false . Default value is false . If it is set to 'true', it limits you from accessing other types of buckets. 
uid The user ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem gid The group ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem The MCG system sends a response with the account configuration and its S3 credentials: You can list all the custom resource definition (CRD) based accounts by using the following command: If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name: 4.4.3. Accessing legacy application data from the openshift-storage namespace When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses. In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses. Procedure Display the application namespace with scc : <application_namespace> Specify the name of the application namespace. For example: Navigate into the application namespace: For example: Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature: Check the mount point of the Persistent Volume (PV) inside your pod. Get the volume name of the PV from the pod: <pod_name> Specify the name of the pod. For example: In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim . List all the mounts in the pod, and check for the mount point of the volume that you identified in the step: For example: Confirm the mount point of the RWX PV in your pod: <mount_path> Specify the path to the mount point that you identified in the step. For example: Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses: For example: Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace: <pv_name> Specify the name of the PV. For example: Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC. Find the values of the subvolumePath and volumeHandle from the volumeAttributes . You can get these values from the YAML description of the legacy application PV: For example: Use the subvolumePath and volumeHandle values that you identified in the step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV: Example YAML file : 1 The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV. 2 The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle. 3 The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC. Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the step: <YAML_file> Specify the name of the YAML file. 
For example: Ensure that the PVC is available in the openshift-storage namespace: Navigate into the openshift-storage project: Create the NSFS namespacestore: <nsfs_namespacestore> Specify the name of the NSFS namespacestore. <cephfs_pvc_name> Specify the name of the CephFS PVC in the openshift-storage namespace. For example: Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, /nsfs/legacy-namespace mountpoint: <noobaa_endpoint_pod_name> Specify the name of the noobaa-endpoint pod. For example: Create a MCG user account: <user_account> Specify the name of the MCG user account. <gid_number> Specify the GID number. <uid_number> Specify the UID number. Important Use the same UID and GID as that of the legacy application. You can find it from the output. For example: Create a MCG bucket. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod: For example: Create the MCG bucket using the nsfs/ path: For example: Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces: For example: For example: In these examples, you can see that the SELinux labels are not the same which results in permission denied or access issues. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways: Section 4.4.3.1, "Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project" . Section 4.4.3.2, "Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC" . Delete the NSFS namespacestore: Delete the MCG bucket: For example: Delete the MCG user account: For example: Delete the NSFS namespacestore: For example: Delete the PV and PVC: Important Before you delete the PV and PVC, ensure that the PV has a retain policy configured. <cephfs_pv_name> Specify the CephFS PV name of the legacy application. <cephfs_pvc_name> Specify the CephFS PVC name of the legacy application. For example: 4.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project Display the current openshift-storage namespace with sa.scc.mcs : Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace: For example: For example: Restart the legacy application pod. A relabel of all the files take place and now the SELinux labels match with the openshift-storage deployment. 4.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses. Example YAML file: Create a service account for the deployment and add it to the newly created scc . Create a service account: <service_account_name>` Specify the name of the service account. For example: Add the service account to the newly created scc : For example: Patch the legacy application deployment so that it uses the newly created service account. 
This allows you to specify the SELinux label in the deployment: For example: Edit the deployment to specify the security context to use as the SELinux label in the deployment configuration: Add the following lines: <security_context_value> Specify the MCS level that the openshift-storage project uses, for example, s0:c26,c0 . You can find this value in the SELinux labels that you checked earlier on the folders residing in the PVCs. For example: Ensure that the security context to be used as the SELinux label in the deployment configuration is specified correctly: For example: The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace.
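The procedure above creates the MCG account and the NSFS-backed bucket, but it does not show a final end-to-end read of the legacy data over S3. The following is a minimal verification sketch, not part of the documented procedure: it assumes the legacy-bucket and leguser examples used above, that the NooBaa S3 route in the openshift-storage namespace has the default name s3, and that the AWS CLI is available; substitute the access key and secret key that the noobaa account create command printed.
# Look up the external S3 endpoint exposed by NooBaa (assumes the default route named "s3").
S3_HOST=$(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}')
# Use the credentials returned when the MCG account (leguser in the example) was created.
export AWS_ACCESS_KEY_ID=<aws-access-key-id>
export AWS_SECRET_ACCESS_KEY=<aws-secret-access-key>
# List the legacy files through the NSFS bucket; --no-verify-ssl is only needed with self-signed certificates.
aws --endpoint-url "https://$S3_HOST" --no-verify-ssl s3 ls s3://legacy-bucket/
If the listing shows the files that reside on the legacy CephFS PVC, the UID, GID, and SELinux configuration described above is working.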
[ "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM 
COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "noobaa account create <noobaa-account-name> [flags]", "noobaa account create testaccount --nsfs_account_config --gid 10001 --uid 10001 -default_resource fs_namespacestore", "NooBaaAccount spec: allow_bucket_creation: true default_resource: noobaa-default-namespace-store Nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001 INFO[0006] ✅ Exists: Secret \"noobaa-account-testaccount\" Connection info: AWS_ACCESS_KEY_ID : <aws-access-key-id> AWS_SECRET_ACCESS_KEY : <aws-secret-access-key>", "noobaa account list NAME DEFAULT_RESOURCE PHASE AGE testaccount noobaa-default-backing-store Ready 1m17s", "oc get noobaaaccount/testaccount -o yaml spec: allow_bucket_creation: true default_resource: noobaa-default-namespace-store nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001", "oc get ns <application_namespace> -o yaml | grep scc", "oc get ns testnamespace -o yaml | grep scc openshift.io/sa.scc.mcs: s0:c26,c5 openshift.io/sa.scc.supplemental-groups: 1000660000/10000 openshift.io/sa.scc.uid-range: 1000660000/10000", "oc project <application_namespace>", "oc project testnamespace", "oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-write-workload-generator-no-cache-pv-claim Bound pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX ocs-storagecluster-cephfs 12s", "oc get pod NAME READY STATUS RESTARTS AGE cephfs-write-workload-generator-no-cache-1-cv892 1/1 Running 0 11s", "oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}' {\"name\":\"app-persistent-storage\",\"persistentVolumeClaim\":{\"claimName\":\"cephfs-write-workload-generator-no-cache-pv-claim\"}}", "oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}' [{\"mountPath\":\"/mnt/pv\",\"name\":\"app-persistent-storage\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"kube-api-access-8tnc5\",\"readOnly\":true}]", "oc exec -it <pod_name> -- df <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv main Filesystem 1K-blocks Used Available Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10485760 0 10485760 0% /mnt/pv", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 
3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "oc get pv | grep <pv_name>", "oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX Delete Bound testnamespace/cephfs-write-workload-generator-no-cache-pv-claim ocs-storagecluster-cephfs 47s", "oc get pv <pv_name> -o yaml", "oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: \"2022-05-25T06:27:49Z\" finalizers: - kubernetes.io/pv-protection name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a resourceVersion: \"177458\" uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500 spec: accessModes: - ReadWriteMany capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: cephfs-write-workload-generator-no-cache-pv-claim namespace: testnamespace resourceVersion: \"177453\" uid: aa58fb91-c3d2-475b-bbee-68452a613e1a csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213 subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213 persistentVolumeReclaimPolicy: Delete storageClassName: ocs-storagecluster-cephfs volumeMode: Filesystem status: phase: Bound", "cat << EOF >> pv-openshift-storage.yaml apiVersion: v1 kind: PersistentVolume metadata: name: cephfs-pv-legacy-openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany capacity: storage: 10Gi 1 csi: driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: # Volume Attributes can be copied from the Source testnamespace PV \"clusterID\": \"openshift-storage\" \"fsName\": \"ocs-storagecluster-cephfilesystem\" \"staticVolume\": \"true\" # rootpath is the subvolumePath: you copied from the Source testnamespace PV \"rootPath\": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone 2 persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-legacy namespace: openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany resources: requests: storage: 10Gi 3 volumeMode: Filesystem # volumeName should be same as PV name volumeName: cephfs-pv-legacy-openshift-storage EOF", "oc create -f <YAML_file>", "oc create -f pv-openshift-storage.yaml persistentvolume/cephfs-pv-legacy-openshift-storage created persistentvolumeclaim/cephfs-pvc-legacy created", "oc get pvc -n openshift-storage NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc-legacy Bound cephfs-pv-legacy-openshift-storage 10Gi RWX 14s", "oc project openshift-storage Now using project \"openshift-storage\" on server \"https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443\".", "noobaa namespacestore create nsfs 
<nsfs_namespacestore> --pvc-name=' <cephfs_pvc_name> ' --fs-backend='CEPH_FS'", "noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'", "oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/ <nsfs_namespacestore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace Filesystem Size Used Avail Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10G 0 10G 0% /nsfs/legacy-namespace", "noobaa account create <user_account> --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'", "noobaa account create leguser --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'", "oc exec -it <pod_name> -- mkdir <mount_path> /nsfs", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs", "noobaa api bucket_api create_bucket '{ \"name\": \" <bucket_name> \", \"namespace\":{ \"write_resource\": { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }] } }'", "noobaa api bucket_api create_bucket '{ \"name\": \"legacy-bucket\", \"namespace\":{ \"write_resource\": { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }] } }'", "oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/ <nsfs_namespacstore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 30 May 25 06:35 ..", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 
3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "noobaa bucket delete <bucket_name>", "noobaa bucket delete legacy-bucket", "noobaa account delete <user_account>", "noobaa account delete leguser", "noobaa namespacestore delete <nsfs_namespacestore>", "noobaa namespacestore delete legacy-namespace", "oc delete pv <cephfs_pv_name>", "oc delete pvc <cephfs_pvc_name>", "oc delete pv cephfs-pv-legacy-openshift-storage", "oc delete pvc cephfs-pvc-legacy", "oc get ns openshift-storage -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "oc edit ns <appplication_namespace>", "oc edit ns testnamespace", "oc get ns <application_namespace> -o yaml | grep sa.scc.mcs", "oc get ns testnamespace -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "cat << EOF >> scc.yaml allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:authenticated kind: SecurityContextConstraints metadata: annotations: name: restricted-pvselinux priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - MKNOD - SETUID - SETGID runAsUser: type: MustRunAsRange seLinuxContext: seLinuxOptions: level: s0:c26,c0 type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret EOF", "oc create -f scc.yaml", "oc create serviceaccount <service_account_name>", "oc create serviceaccount testnamespacesa", "oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>", "oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa", "oc patch dc/ <pod_name> '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \" <service_account_name> \"}}}}'", "oc patch dc/cephfs-write-workload-generator-no-cache --patch '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"testnamespacesa\"}}}}'", "oc edit dc <pod_name> -n <application_namespace>", "spec: template: metadata: securityContext: seLinuxOptions: Level: <security_context_value>", "oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace", "spec: template: metadata: securityContext: seLinuxOptions: level: s0:c26,c0", "oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext", "oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext securityContext: seLinuxOptions: level: s0:c26,c0" ]
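For the step above that locates the subvolumePath and volumeHandle in the PV YAML, a shorter query can pull just those two fields. This is a convenience sketch rather than part of the documented procedure; the PV name is the example one used earlier:
# Print only the CephFS subvolume path and the CSI volume handle of the legacy application PV.
oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o jsonpath='{.spec.csi.volumeAttributes.subvolumePath}{"\n"}{.spec.csi.volumeHandle}{"\n"}'
Copy the two printed values into the rootPath and volumeHandle fields of the PV that you create in the openshift-storage namespace, remembering to append -clone to the volume handle.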
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_hybrid_and_multicloud_resources/Managing-namespace-buckets_rhodf
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.
https://docs.redhat.com/en/documentation/red_hat_openshift_api_management/1/html/adding_users_in_red_hat_openshift_api_management/making-open-source-more-inclusive
Chapter 4. API index
Chapter 4. API index API API group AdminNetworkPolicy policy.networking.k8s.io/v1alpha1 AdminPolicyBasedExternalRoute k8s.ovn.org/v1 AlertingRule monitoring.openshift.io/v1 Alertmanager monitoring.coreos.com/v1 AlertmanagerConfig monitoring.coreos.com/v1beta1 AlertRelabelConfig monitoring.openshift.io/v1 APIRequestCount apiserver.openshift.io/v1 APIServer config.openshift.io/v1 APIService apiregistration.k8s.io/v1 AppliedClusterResourceQuota quota.openshift.io/v1 Authentication config.openshift.io/v1 Authentication operator.openshift.io/v1 BareMetalHost metal3.io/v1alpha1 BaselineAdminNetworkPolicy policy.networking.k8s.io/v1alpha1 Binding v1 BMCEventSubscription metal3.io/v1alpha1 BrokerTemplateInstance template.openshift.io/v1 Build build.openshift.io/v1 Build config.openshift.io/v1 BuildConfig build.openshift.io/v1 BuildLog build.openshift.io/v1 BuildRequest build.openshift.io/v1 CatalogSource operators.coreos.com/v1alpha1 CertificateSigningRequest certificates.k8s.io/v1 CloudCredential operator.openshift.io/v1 CloudPrivateIPConfig cloud.network.openshift.io/v1 ClusterAutoscaler autoscaling.openshift.io/v1 ClusterCSIDriver operator.openshift.io/v1 ClusterOperator config.openshift.io/v1 ClusterResourceQuota quota.openshift.io/v1 ClusterRole authorization.openshift.io/v1 ClusterRole rbac.authorization.k8s.io/v1 ClusterRoleBinding authorization.openshift.io/v1 ClusterRoleBinding rbac.authorization.k8s.io/v1 ClusterServiceVersion operators.coreos.com/v1alpha1 ClusterVersion config.openshift.io/v1 ComponentStatus v1 Config imageregistry.operator.openshift.io/v1 Config operator.openshift.io/v1 Config samples.operator.openshift.io/v1 ConfigMap v1 Console config.openshift.io/v1 Console operator.openshift.io/v1 ConsoleCLIDownload console.openshift.io/v1 ConsoleExternalLogLink console.openshift.io/v1 ConsoleLink console.openshift.io/v1 ConsoleNotification console.openshift.io/v1 ConsolePlugin console.openshift.io/v1 ConsoleQuickStart console.openshift.io/v1 ConsoleSample console.openshift.io/v1 ConsoleYAMLSample console.openshift.io/v1 ContainerRuntimeConfig machineconfiguration.openshift.io/v1 ControllerConfig machineconfiguration.openshift.io/v1 ControllerRevision apps/v1 ControlPlaneMachineSet machine.openshift.io/v1 CredentialsRequest cloudcredential.openshift.io/v1 CronJob batch/v1 CSIDriver storage.k8s.io/v1 CSINode storage.k8s.io/v1 CSISnapshotController operator.openshift.io/v1 CSIStorageCapacity storage.k8s.io/v1 CustomResourceDefinition apiextensions.k8s.io/v1 DaemonSet apps/v1 DataImage metal3.io/v1alpha1 Deployment apps/v1 DeploymentConfig apps.openshift.io/v1 DeploymentConfigRollback apps.openshift.io/v1 DeploymentLog apps.openshift.io/v1 DeploymentRequest apps.openshift.io/v1 DNS config.openshift.io/v1 DNS operator.openshift.io/v1 DNSRecord ingress.operator.openshift.io/v1 EgressFirewall k8s.ovn.org/v1 EgressIP k8s.ovn.org/v1 EgressQoS k8s.ovn.org/v1 EgressRouter network.operator.openshift.io/v1 EgressService k8s.ovn.org/v1 Endpoints v1 EndpointSlice discovery.k8s.io/v1 Etcd operator.openshift.io/v1 Event v1 Event events.k8s.io/v1 Eviction policy/v1 FeatureGate config.openshift.io/v1 FirmwareSchema metal3.io/v1alpha1 FlowSchema flowcontrol.apiserver.k8s.io/v1 Group user.openshift.io/v1 HardwareData metal3.io/v1alpha1 HelmChartRepository helm.openshift.io/v1beta1 HorizontalPodAutoscaler autoscaling/v2 HostFirmwareComponents metal3.io/v1alpha1 HostFirmwareSettings metal3.io/v1alpha1 Identity user.openshift.io/v1 Image config.openshift.io/v1 Image image.openshift.io/v1 
ImageContentPolicy config.openshift.io/v1 ImageContentSourcePolicy operator.openshift.io/v1alpha1 ImageDigestMirrorSet config.openshift.io/v1 ImagePruner imageregistry.operator.openshift.io/v1 ImageSignature image.openshift.io/v1 ImageStream image.openshift.io/v1 ImageStreamImage image.openshift.io/v1 ImageStreamImport image.openshift.io/v1 ImageStreamLayers image.openshift.io/v1 ImageStreamMapping image.openshift.io/v1 ImageStreamTag image.openshift.io/v1 ImageTag image.openshift.io/v1 ImageTagMirrorSet config.openshift.io/v1 Infrastructure config.openshift.io/v1 Ingress config.openshift.io/v1 Ingress networking.k8s.io/v1 IngressClass networking.k8s.io/v1 IngressController operator.openshift.io/v1 InsightsOperator operator.openshift.io/v1 InstallPlan operators.coreos.com/v1alpha1 IPAddress ipam.cluster.x-k8s.io/v1beta1 IPAddressClaim ipam.cluster.x-k8s.io/v1beta1 IPPool whereabouts.cni.cncf.io/v1alpha1 Job batch/v1 KubeAPIServer operator.openshift.io/v1 KubeControllerManager operator.openshift.io/v1 KubeletConfig machineconfiguration.openshift.io/v1 KubeScheduler operator.openshift.io/v1 KubeStorageVersionMigrator operator.openshift.io/v1 Lease coordination.k8s.io/v1 LimitRange v1 LocalResourceAccessReview authorization.openshift.io/v1 LocalSubjectAccessReview authorization.k8s.io/v1 LocalSubjectAccessReview authorization.openshift.io/v1 Machine machine.openshift.io/v1beta1 MachineAutoscaler autoscaling.openshift.io/v1beta1 MachineConfig machineconfiguration.openshift.io/v1 MachineConfigPool machineconfiguration.openshift.io/v1 MachineConfiguration operator.openshift.io/v1 MachineHealthCheck machine.openshift.io/v1beta1 MachineSet machine.openshift.io/v1beta1 Metal3Remediation infrastructure.cluster.x-k8s.io/v1beta1 Metal3RemediationTemplate infrastructure.cluster.x-k8s.io/v1beta1 MutatingWebhookConfiguration admissionregistration.k8s.io/v1 Namespace v1 Network config.openshift.io/v1 Network operator.openshift.io/v1 NetworkAttachmentDefinition k8s.cni.cncf.io/v1 NetworkPolicy networking.k8s.io/v1 Node v1 Node config.openshift.io/v1 OAuth config.openshift.io/v1 OAuthAccessToken oauth.openshift.io/v1 OAuthAuthorizeToken oauth.openshift.io/v1 OAuthClient oauth.openshift.io/v1 OAuthClientAuthorization oauth.openshift.io/v1 OLMConfig operators.coreos.com/v1 OpenShiftAPIServer operator.openshift.io/v1 OpenShiftControllerManager operator.openshift.io/v1 Operator operators.coreos.com/v1 OperatorCondition operators.coreos.com/v2 OperatorGroup operators.coreos.com/v1 OperatorHub config.openshift.io/v1 OperatorPKI network.operator.openshift.io/v1 OverlappingRangeIPReservation whereabouts.cni.cncf.io/v1alpha1 PackageManifest packages.operators.coreos.com/v1 PerformanceProfile performance.openshift.io/v2 PersistentVolume v1 PersistentVolumeClaim v1 Pod v1 PodDisruptionBudget policy/v1 PodMonitor monitoring.coreos.com/v1 PodNetworkConnectivityCheck controlplane.operator.openshift.io/v1alpha1 PodSecurityPolicyReview security.openshift.io/v1 PodSecurityPolicySelfSubjectReview security.openshift.io/v1 PodSecurityPolicySubjectReview security.openshift.io/v1 PodTemplate v1 PreprovisioningImage metal3.io/v1alpha1 PriorityClass scheduling.k8s.io/v1 PriorityLevelConfiguration flowcontrol.apiserver.k8s.io/v1 Probe monitoring.coreos.com/v1 Profile tuned.openshift.io/v1 Project config.openshift.io/v1 Project project.openshift.io/v1 ProjectHelmChartRepository helm.openshift.io/v1beta1 ProjectRequest project.openshift.io/v1 Prometheus monitoring.coreos.com/v1 PrometheusRule monitoring.coreos.com/v1 Provisioning 
metal3.io/v1alpha1 Proxy config.openshift.io/v1 RangeAllocation security.openshift.io/v1 ReplicaSet apps/v1 ReplicationController v1 ResourceAccessReview authorization.openshift.io/v1 ResourceQuota v1 Role authorization.openshift.io/v1 Role rbac.authorization.k8s.io/v1 RoleBinding authorization.openshift.io/v1 RoleBinding rbac.authorization.k8s.io/v1 RoleBindingRestriction authorization.openshift.io/v1 Route route.openshift.io/v1 RuntimeClass node.k8s.io/v1 Scale autoscaling/v1 Scheduler config.openshift.io/v1 Secret v1 SecretList image.openshift.io/v1 SecurityContextConstraints security.openshift.io/v1 SelfSubjectAccessReview authorization.k8s.io/v1 SelfSubjectReview authentication.k8s.io/v1 SelfSubjectRulesReview authorization.k8s.io/v1 SelfSubjectRulesReview authorization.openshift.io/v1 Service v1 ServiceAccount v1 ServiceCA operator.openshift.io/v1 ServiceMonitor monitoring.coreos.com/v1 StatefulSet apps/v1 Storage operator.openshift.io/v1 StorageClass storage.k8s.io/v1 StorageState migration.k8s.io/v1alpha1 StorageVersionMigration migration.k8s.io/v1alpha1 SubjectAccessReview authorization.k8s.io/v1 SubjectAccessReview authorization.openshift.io/v1 SubjectRulesReview authorization.openshift.io/v1 Subscription operators.coreos.com/v1alpha1 Template template.openshift.io/v1 TemplateInstance template.openshift.io/v1 ThanosRuler monitoring.coreos.com/v1 TokenRequest authentication.k8s.io/v1 TokenReview authentication.k8s.io/v1 Tuned tuned.openshift.io/v1 User user.openshift.io/v1 UserIdentityMapping user.openshift.io/v1 UserOAuthAccessToken oauth.openshift.io/v1 ValidatingWebhookConfiguration admissionregistration.k8s.io/v1 VolumeAttachment storage.k8s.io/v1 VolumeSnapshot snapshot.storage.k8s.io/v1 VolumeSnapshotClass snapshot.storage.k8s.io/v1 VolumeSnapshotContent snapshot.storage.k8s.io/v1
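The index above pairs each API kind with its group and version. To confirm what a particular cluster actually serves, standard oc discovery commands can be used; the following is a minimal sketch (the group and kind are taken from the table, and cluster access with oc is assumed):
# List the resources served under one of the groups from the index, including kinds, versions, and verbs.
oc api-resources --api-group=k8s.ovn.org -o wide
# Show the schema of a specific API at the version listed in the index.
oc explain egressfirewall --api-version=k8s.ovn.org/v1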
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/api_overview/api-index
Chapter 50. metric
Chapter 50. metric This chapter describes the commands under the metric command. 50.1. metric aggregates Get measurements of aggregated metrics. Usage: Table 50.1. Positional arguments Value Summary operations Operations to apply to time series search A query to filter resource. the syntax is a combination of attribute, operator and value. For example: id=90d58eea-70d7-4294-a49a-170dcdf44c3c would filter resource with a certain id. More complex queries can be built, e.g.: not (flavor_id!="1" and memory>=24). Use "" to force data to be interpreted as string. Supported operators are: not, and, ∧ or, ∨, >=, ⇐, !=, >, <, =, ==, eq, ne, lt, gt, ge, le, in, like, !=, >=, <=, like, in. Table 50.2. Command arguments Value Summary -h, --help Show this help message and exit --resource-type RESOURCE_TYPE Resource type to query --start START Beginning of the period --stop STOP End of the period --granularity GRANULARITY Granularity to retrieve --needed-overlap NEEDED_OVERLAP Percentage of overlap across datapoints --groupby GROUPBY Attribute to use to group resources --fill FILL Value to use when backfilling timestamps with missing values in a subset of series. Value should be a float or null . Table 50.3. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.4. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.2. metric archive-policy create Create an archive policy. Usage: Table 50.7. Positional arguments Value Summary name Name of the archive policy Table 50.8. Command arguments Value Summary -h, --help Show this help message and exit -d <DEFINITION>, --definition <DEFINITION> Two attributes (separated by , ) of an archive policy definition with its name and value separated with a : -b BACK_WINDOW, --back-window BACK_WINDOW Back window of the archive policy -m AGGREGATION_METHODS, --aggregation-method AGGREGATION_METHODS Aggregation method of the archive policy Table 50.9. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.10. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.11. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.3. metric archive-policy delete Delete an archive policy. Usage: Table 50.13. Positional arguments Value Summary name Name of the archive policy Table 50.14. Command arguments Value Summary -h, --help Show this help message and exit 50.4. metric archive-policy list List archive policies. Usage: Table 50.15. Command arguments Value Summary -h, --help Show this help message and exit Table 50.16. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.17. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.18. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.5. metric archive-policy-rule create Create an archive policy rule. Usage: Table 50.20. Positional arguments Value Summary name Rule name Table 50.21. Command arguments Value Summary -h, --help Show this help message and exit -a ARCHIVE_POLICY_NAME, --archive-policy-name ARCHIVE_POLICY_NAME Archive policy name -m METRIC_PATTERN, --metric-pattern METRIC_PATTERN Wildcard of metric name to match Table 50.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.6. metric archive-policy-rule delete Delete an archive policy rule. Usage: Table 50.26. Positional arguments Value Summary name Name of the archive policy rule Table 50.27. Command arguments Value Summary -h, --help Show this help message and exit 50.7. metric archive-policy-rule list List archive policy rules. Usage: Table 50.28. Command arguments Value Summary -h, --help Show this help message and exit Table 50.29. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.30. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.32. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.8. metric archive-policy-rule show Show an archive policy rule. Usage: Table 50.33. Positional arguments Value Summary name Name of the archive policy rule Table 50.34. Command arguments Value Summary -h, --help Show this help message and exit Table 50.35. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.36. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.37. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.38. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.9. metric archive-policy show Show an archive policy. Usage: Table 50.39. Positional arguments Value Summary name Name of the archive policy Table 50.40. Command arguments Value Summary -h, --help Show this help message and exit Table 50.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.10. metric archive-policy update Update an archive policy. Usage: Table 50.45. Positional arguments Value Summary name Name of the archive policy Table 50.46. 
Command arguments Value Summary -h, --help Show this help message and exit -d <DEFINITION>, --definition <DEFINITION> Two attributes (separated by , ) of an archive policy definition with its name and value separated with a : Table 50.47. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.48. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.49. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.50. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.11. metric benchmark measures add Do benchmark testing of adding measurements. Usage: Table 50.51. Positional arguments Value Summary metric Id or name of the metric Table 50.52. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --workers WORKERS, -w WORKERS Number of workers to use --count COUNT, -n COUNT Number of total measures to send --batch BATCH, -b BATCH Number of measures to send in each batch --timestamp-start TIMESTAMP_START, -s TIMESTAMP_START First timestamp to use --timestamp-end TIMESTAMP_END, -e TIMESTAMP_END Last timestamp to use --wait Wait for all measures to be processed Table 50.53. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.54. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.55. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.56. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.12. metric benchmark measures show Do benchmark testing of measurements show. Usage: Table 50.57. Positional arguments Value Summary metric Id or name of the metric Table 50.58. Command arguments Value Summary -h, --help Show this help message and exit --utc Return timestamps as utc --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --aggregation AGGREGATION Aggregation to retrieve --start START Beginning of the period --stop STOP End of the period --granularity GRANULARITY Granularity to retrieve --refresh Force aggregation of all known measures --resample RESAMPLE Granularity to resample time-series to (in seconds) --workers WORKERS, -w WORKERS Number of workers to use --count COUNT, -n COUNT Number of total measures to send Table 50.59. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.60. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.61. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.62. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.13. metric benchmark metric create Do benchmark testing of metric creation. Usage: Table 50.63. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --archive-policy-name ARCHIVE_POLICY_NAME, -a ARCHIVE_POLICY_NAME Name of the archive policy --workers WORKERS, -w WORKERS Number of workers to use --count COUNT, -n COUNT Number of metrics to create --keep, -k Keep created metrics Table 50.64. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.65. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.66. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.67. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.14. metric benchmark metric show Do benchmark testing of metric show. Usage: Table 50.68. Positional arguments Value Summary metric Id or name of the metrics Table 50.69. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --workers WORKERS, -w WORKERS Number of workers to use --count COUNT, -n COUNT Number of metrics to get Table 50.70. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.71. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.72. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.73. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.15. metric capabilities list List capabilities. Usage: Table 50.74. Command arguments Value Summary -h, --help Show this help message and exit Table 50.75. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.76. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.77. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.78. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.16. metric create Create a metric. Usage: Table 50.79. Positional arguments Value Summary METRIC_NAME Name of the metric Table 50.80. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --archive-policy-name ARCHIVE_POLICY_NAME, -a ARCHIVE_POLICY_NAME Name of the archive policy --unit UNIT, -u UNIT Unit of the metric Table 50.81. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.82. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.83. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.84. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.17. metric delete Delete a metric. Usage: Table 50.85. Positional arguments Value Summary metric Ids or names of the metric Table 50.86. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource 50.18. metric list List metrics. Usage: Table 50.87. Command arguments Value Summary -h, --help Show this help message and exit --limit <LIMIT> Number of metrics to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value --sort <SORT> Sort of metric attribute (example: user_id:desc- nullslast Table 50.88. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.89. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.90. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.91. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.19. metric measures add Add measurements to a metric. Usage: Table 50.92. Positional arguments Value Summary metric Id or name of the metric Table 50.93. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource -m MEASURE, --measure MEASURE Timestamp and value of a measure separated with a @ 50.20. metric measures aggregation Get measurements of aggregated metrics. Usage: Table 50.94. Command arguments Value Summary -h, --help Show this help message and exit --utc Return timestamps as utc -m METRIC [METRIC ... ], --metric METRIC [METRIC ... ] Metrics ids or metric name --aggregation AGGREGATION Granularity aggregation function to retrieve --reaggregation REAGGREGATION Groupby aggregation function to retrieve --start START Beginning of the period --stop STOP End of the period --granularity GRANULARITY Granularity to retrieve --needed-overlap NEEDED_OVERLAP Percent of datapoints in each metrics required --query QUERY A query to filter resource. the syntax is a combination of attribute, operator and value. For example: id=90d58eea-70d7-4294-a49a-170dcdf44c3c would filter resource with a certain id. More complex queries can be built, e.g.: not (flavor_id!="1" and memory>=24). Use "" to force data to be interpreted as string. Supported operators are: not, and, ∧ or, ∨, >=, ⇐, !=, >, <, =, ==, eq, ne, lt, gt, ge, le, in, like, !=, >=, <=, like, in. --resource-type RESOURCE_TYPE Resource type to query --groupby GROUPBY Attribute to use to group resources --refresh Force aggregation of all known measures --resample RESAMPLE Granularity to resample time-series to (in seconds) --fill FILL Value to use when backfilling timestamps with missing values in a subset of series. Value should be a float or null . Table 50.95. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.96. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.97. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.98. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.21. 
metric measures batch-metrics Usage: Table 50.99. Positional arguments Value Summary file File containing measurements to batch or - for stdin (see Gnocchi REST API docs for the format Table 50.100. Command arguments Value Summary -h, --help Show this help message and exit 50.22. metric measures batch-resources-metrics Usage: Table 50.101. Positional arguments Value Summary file File containing measurements to batch or - for stdin (see Gnocchi REST API docs for the format Table 50.102. Command arguments Value Summary -h, --help Show this help message and exit --create-metrics Create unknown metrics 50.23. metric measures show Get measurements of a metric. Usage: Table 50.103. Positional arguments Value Summary metric Id or name of the metric Table 50.104. Command arguments Value Summary -h, --help Show this help message and exit --utc Return timestamps as utc --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --aggregation AGGREGATION Aggregation to retrieve --start START Beginning of the period --stop STOP End of the period --granularity GRANULARITY Granularity to retrieve --refresh Force aggregation of all known measures --resample RESAMPLE Granularity to resample time-series to (in seconds) Table 50.105. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.106. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.107. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.108. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.24. metric metric create Deprecated: Create a metric. Usage: Table 50.109. Positional arguments Value Summary METRIC_NAME Name of the metric Table 50.110. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --archive-policy-name ARCHIVE_POLICY_NAME, -a ARCHIVE_POLICY_NAME Name of the archive policy --unit UNIT, -u UNIT Unit of the metric Table 50.111. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.112. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.113. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.114. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.25. metric metric delete Deprecated: Delete a metric. Usage: Table 50.115. Positional arguments Value Summary metric Ids or names of the metric Table 50.116. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource 50.26. metric metric list Deprecated: List metrics. Usage: Table 50.117. Command arguments Value Summary -h, --help Show this help message and exit --limit <LIMIT> Number of metrics to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value --sort <SORT> Sort of metric attribute (example: user_id:desc- nullslast Table 50.118. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.119. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.120. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.121. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.27. metric metric show Deprecated: Show a metric. Usage: Table 50.122. Positional arguments Value Summary metric Id or name of the metric Table 50.123. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource Table 50.124. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.125. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.126. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.127. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.28. metric resource batch delete Delete a batch of resources based on attribute values. Usage: Table 50.128. Positional arguments Value Summary query A query to filter resource. the syntax is a combination of attribute, operator and value. For example: id=90d58eea-70d7-4294-a49a-170dcdf44c3c would filter resource with a certain id. More complex queries can be built, e.g.: not (flavor_id!="1" and memory>=24). Use "" to force data to be interpreted as string. 
Supported operators are: not, and, ∧ or, ∨, >=, ⇐, !=, >, <, =, ==, eq, ne, lt, gt, ge, le, in, like, !=, >=, <=, like, in. Table 50.129. Command arguments Value Summary -h, --help Show this help message and exit --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource Table 50.130. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.131. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.132. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.133. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.29. metric resource create Create a resource. Usage: Table 50.134. Positional arguments Value Summary resource_id Id of the resource Table 50.135. Command arguments Value Summary -h, --help Show this help message and exit --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource -a ATTRIBUTE, --attribute ATTRIBUTE Name and value of an attribute separated with a : -m ADD_METRIC, --add-metric ADD_METRIC Name:id of a metric to add -n CREATE_METRIC, --create-metric CREATE_METRIC Name:archive_policy_name of a metric to create Table 50.136. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.137. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.138. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.139. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.30. metric resource delete Delete a resource. Usage: Table 50.140. Positional arguments Value Summary resource_id Id of the resource Table 50.141. Command arguments Value Summary -h, --help Show this help message and exit 50.31. metric resource history Show the history of a resource. Usage: Table 50.142. Positional arguments Value Summary resource_id Id of a resource Table 50.143. Command arguments Value Summary -h, --help Show this help message and exit --details Show all attributes of generic resources --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value --sort <SORT> Sort of resource attribute (example: user_id:desc- nullslast --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource Table 50.144. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.145. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.146. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.147. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.32. metric resource list List resources. Usage: Table 50.148. Command arguments Value Summary -h, --help Show this help message and exit --details Show all attributes of generic resources --history Show history of the resources --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value --sort <SORT> Sort of resource attribute (example: user_id:desc- nullslast --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource Table 50.149. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.150. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.151. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.152. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.33. metric resource search Search resources with specified query rules. Usage: Table 50.153. Positional arguments Value Summary query A query to filter resource. the syntax is a combination of attribute, operator and value. For example: id=90d58eea-70d7-4294-a49a-170dcdf44c3c would filter resource with a certain id. More complex queries can be built, e.g.: not (flavor_id!="1" and memory>=24). Use "" to force data to be interpreted as string. Supported operators are: not, and, ∧ or, ∨, >=, ⇐, !=, >, <, =, ==, eq, ne, lt, gt, ge, le, in, like, !=, >=, <=, like, in. Table 50.154. Command arguments Value Summary -h, --help Show this help message and exit --details Show all attributes of generic resources --history Show history of the resources --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. 
return the results after this value --sort <SORT> Sort of resource attribute (example: user_id:desc- nullslast --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource Table 50.155. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.156. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.157. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.158. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.34. metric resource show Show a resource. Usage: Table 50.159. Positional arguments Value Summary resource_id Id of a resource Table 50.160. Command arguments Value Summary -h, --help Show this help message and exit --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource Table 50.161. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.162. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.163. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.164. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.35. metric resource-type create Create a resource type. Usage: Table 50.165. Positional arguments Value Summary name Name of the resource type Table 50.166. Command arguments Value Summary -h, --help Show this help message and exit -a ATTRIBUTE, --attribute ATTRIBUTE Attribute definition, attribute_name:attribute_type:at tribute_is_required:attribute_type_option_name=attribu te_type_option_value:... For example: display_name:string:true:max_length=255 Table 50.167. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.168. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.169. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.170. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.36. metric resource-type delete Delete a resource type. Usage: Table 50.171. Positional arguments Value Summary name Name of the resource type Table 50.172. Command arguments Value Summary -h, --help Show this help message and exit 50.37. metric resource-type list List resource types. Usage: Table 50.173. Command arguments Value Summary -h, --help Show this help message and exit Table 50.174. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.175. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.176. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.177. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.38. metric resource-type show Show a resource type. Usage: Table 50.178. Positional arguments Value Summary name Name of the resource type Table 50.179. Command arguments Value Summary -h, --help Show this help message and exit Table 50.180. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.181. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.182. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.183. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.39. metric resource-type update Create a resource type. Usage: Table 50.184. Positional arguments Value Summary name Name of the resource type Table 50.185. Command arguments Value Summary -h, --help Show this help message and exit -a ATTRIBUTE, --attribute ATTRIBUTE Attribute definition, attribute_name:attribute_type:at tribute_is_required:attribute_type_option_name=attribu te_type_option_value:... For example: display_name:string:true:max_length=255 -r REMOVE_ATTRIBUTE, --remove-attribute REMOVE_ATTRIBUTE Attribute name Table 50.186. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.187. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.188. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.189. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.40. metric resource update Update a resource. Usage: Table 50.190. Positional arguments Value Summary resource_id Id of the resource Table 50.191. Command arguments Value Summary -h, --help Show this help message and exit --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource -a ATTRIBUTE, --attribute ATTRIBUTE Name and value of an attribute separated with a : -m ADD_METRIC, --add-metric ADD_METRIC Name:id of a metric to add -n CREATE_METRIC, --create-metric CREATE_METRIC Name:archive_policy_name of a metric to create -d DELETE_METRIC, --delete-metric DELETE_METRIC Name of a metric to delete Table 50.192. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.193. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.194. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.195. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.41. metric server version Show the version of Gnocchi server. Usage: Table 50.196. Command arguments Value Summary -h, --help Show this help message and exit Table 50.197. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.198. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.199. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.200. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.42. metric show Show a metric. Usage: Table 50.201. Positional arguments Value Summary metric Id or name of the metric Table 50.202. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource Table 50.203. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.204. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.205. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.206. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.43. metric status Show the status of measurements processing. Usage: Table 50.207. Command arguments Value Summary -h, --help Show this help message and exit Table 50.208. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.209. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 50.210. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.211. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack metric aggregates [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--resource-type RESOURCE_TYPE] [--start START] [--stop STOP] [--granularity GRANULARITY] [--needed-overlap NEEDED_OVERLAP] [--groupby GROUPBY] [--fill FILL] operations [search]", "openstack metric archive-policy create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] -d <DEFINITION> [-b BACK_WINDOW] [-m AGGREGATION_METHODS] name", "openstack metric archive-policy delete [-h] name", "openstack metric archive-policy list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]", "openstack metric archive-policy-rule create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] -a ARCHIVE_POLICY_NAME -m METRIC_PATTERN name", "openstack metric archive-policy-rule delete [-h] name", "openstack metric archive-policy-rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]", "openstack metric archive-policy-rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] name", "openstack metric archive-policy show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] name", "openstack metric archive-policy update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] -d <DEFINITION> name", "openstack metric benchmark measures add [-h] [--resource-id RESOURCE_ID] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--workers WORKERS] --count COUNT [--batch BATCH] [--timestamp-start TIMESTAMP_START] [--timestamp-end TIMESTAMP_END] [--wait] metric", "openstack metric benchmark measures show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--utc] [--resource-id RESOURCE_ID] [--aggregation AGGREGATION] [--start START] [--stop STOP] [--granularity GRANULARITY] [--refresh] [--resample RESAMPLE] [--workers WORKERS] --count COUNT metric", "openstack metric benchmark metric create [-h] [--resource-id RESOURCE_ID] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--archive-policy-name ARCHIVE_POLICY_NAME] [--workers WORKERS] --count COUNT [--keep]", "openstack metric benchmark metric show [-h] [--resource-id RESOURCE_ID] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--workers WORKERS] --count COUNT metric [metric ...]", "openstack metric capabilities list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]", "openstack metric create [-h] [--resource-id RESOURCE_ID] [-f 
{json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--archive-policy-name ARCHIVE_POLICY_NAME] [--unit UNIT] [METRIC_NAME]", "openstack metric delete [-h] [--resource-id RESOURCE_ID] metric [metric ...]", "openstack metric list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT>]", "openstack metric measures add [-h] [--resource-id RESOURCE_ID] -m MEASURE metric", "openstack metric measures aggregation [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--utc] -m METRIC [METRIC ...] [--aggregation AGGREGATION] [--reaggregation REAGGREGATION] [--start START] [--stop STOP] [--granularity GRANULARITY] [--needed-overlap NEEDED_OVERLAP] [--query QUERY] [--resource-type RESOURCE_TYPE] [--groupby GROUPBY] [--refresh] [--resample RESAMPLE] [--fill FILL]", "openstack metric measures batch-metrics [-h] file", "openstack metric measures batch-resources-metrics [-h] [--create-metrics] file", "openstack metric measures show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--utc] [--resource-id RESOURCE_ID] [--aggregation AGGREGATION] [--start START] [--stop STOP] [--granularity GRANULARITY] [--refresh] [--resample RESAMPLE] metric", "openstack metric metric create [-h] [--resource-id RESOURCE_ID] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--archive-policy-name ARCHIVE_POLICY_NAME] [--unit UNIT] [METRIC_NAME]", "openstack metric metric delete [-h] [--resource-id RESOURCE_ID] metric [metric ...]", "openstack metric metric list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT>]", "openstack metric metric show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--resource-id RESOURCE_ID] metric", "openstack metric resource batch delete [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--type RESOURCE_TYPE] query", "openstack metric resource create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--type RESOURCE_TYPE] [-a ATTRIBUTE] [-m ADD_METRIC] [-n CREATE_METRIC] resource_id", "openstack metric resource delete [-h] resource_id", "openstack metric resource history [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--details] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT>] [--type RESOURCE_TYPE] resource_id", "openstack metric resource list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column 
SORT_COLUMN] [--details] [--history] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT>] [--type RESOURCE_TYPE]", "openstack metric resource search [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--details] [--history] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT>] [--type RESOURCE_TYPE] query", "openstack metric resource show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--type RESOURCE_TYPE] resource_id", "openstack metric resource-type create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-a ATTRIBUTE] name", "openstack metric resource-type delete [-h] name", "openstack metric resource-type list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]", "openstack metric resource-type show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] name", "openstack metric resource-type update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-a ATTRIBUTE] [-r REMOVE_ATTRIBUTE] name", "openstack metric resource update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--type RESOURCE_TYPE] [-a ATTRIBUTE] [-m ADD_METRIC] [-n CREATE_METRIC] [-d DELETE_METRIC] resource_id", "openstack metric server version [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]", "openstack metric show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--resource-id RESOURCE_ID] metric", "openstack metric status [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/metric
Chapter 1. Installing an on-premise cluster using the Assisted Installer
Chapter 1. Installing an on-premise cluster using the Assisted Installer You can install OpenShift Container Platform on on-premise hardware or on-premise VMs by using the Assisted Installer. Installing OpenShift Container Platform by using the Assisted Installer supports x86_64 , AArch64 , ppc64le , and s390x CPU architectures. 1.1. Using the Assisted Installer The Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console . The Assisted Installer supports the various deployment platforms with a focus on bare metal, Nutanix, and vSphere infrastructures. The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following advantages: Web user interface: The web user interface performs cluster installation without the user having to create the installation configuration files manually. No bootstrap node: A bootstrap node is not required when installing with the Assisted Installer. The bootstrapping process executes on a node within the cluster. Hosting: The Assisted Installer hosts: Ignition files The installation configuration A discovery ISO The installer Streamlined installation workflow: Deployment does not require in-depth knowledge of OpenShift Container Platform. The Assisted Installer provides reasonable defaults and provides the installer as a service, which: Eliminates the need to install and run the OpenShift Container Platform installer locally. Ensures the latest version of the installer up to the latest tested z-stream releases. Older versions remain available, if needed. Enables building automation by using the API without the need to run the OpenShift Container Platform installer locally. Advanced networking: The Assisted Installer supports IPv4 and IPv6 networking, as well as dual-stack networking with the OVN-Kubernetes network plugin, NMState-based static IP addressing, and an HTTP/S proxy. OVN-Kubernetes is the default Container Network Interface (CNI) for OpenShift Container Platform 4.12 and later releases. OpenShift SDN is supported up to OpenShift Container Platform 4.14, but is not supported for OpenShift Container Platform 4.15 and later releases. Preinstallation validation: The Assisted Installer validates the configuration before installation to ensure a high probability of success. The validation process includes the following checks: Ensuring network connectivity Ensuring sufficient network bandwidth Ensuring connectivity to the registry Ensuring time synchronization between cluster nodes Verifying that the cluster nodes meet the minimum hardware requirements Validating the installation configuration parameters REST API: The Assisted Installer has a REST API, enabling automation. The Assisted Installer supports installing OpenShift Container Platform on premises in a connected environment, including with an optional HTTP/S proxy. It can install the following: Highly available OpenShift Container Platform or single-node OpenShift (SNO) OpenShift Container Platform on bare metal, Nutanix, or vSphere with full platform integration, or other virtualization platforms without integration Optional: OpenShift Virtualization, multicluster engine, Logical Volume Manager (LVM) Storage, and OpenShift Data Foundation Note Currently, OpenShift Virtualization and LVM Storage are not supported on IBM Z(R) ( s390x ) architecture. The user interface provides an intuitive interactive workflow where automation does not exist or is not required. 
Users may also automate installations using the REST API. See the Assisted Installer for OpenShift Container Platform documentation for details. 1.2. API support for the Assisted Installer Supported APIs for the Assisted Installer are stable for a minimum of three months from the announcement of deprecation.
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on-premise_with_assisted_installer/installing-on-prem-assisted
8.212. sysstat
8.212. sysstat 8.212.1. RHBA-2013:1663 - sysstat bug fix and enhancement update Updated sysstat packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The sysstat packages provide a set of utilities which enable system monitoring of disks, network, and other I/O activity. Bug Fixes BZ#804534 Previously, the sysstat package did not support dynamically attributed major device numbers. Consequently, devices with these numbers were not listed in sar reports under their real names. With this update, support for dynamically attributed major device numbers has been added to sysstat. As a result, all devices now appear with their correct names in sar reports. BZ#967386 A sysstat update changed binary data files in a backward-incompatible way, but the version number of these binary data files remained the same. Consequently, using a later sysstat version to read binary data files created by an earlier version of sysstat could have produced invalid results. The version number of sysstat binary data files has been updated, thus fixing this bug. As a result, the current sysstat version will not read binary data files created by earlier versions. For more information, refer to the description of the "--legacy" option in the sar(1) manual page. BZ#996134 Prior to this update, the umask command was executed too late in the sa1 script. Under certain circumstances, this could have caused incorrect file permissions on newly created files. With this update, executing umask has been moved to the appropriate place in the sa1 script. As a result, newly created files have correct permissions. Enhancements BZ#826399 Kernel device names, such as sda or sdb, might point at different devices on every boot. To prevent possible confusion, support for persistent device names has been added to the iostat and sar programs. Persistent names can be enabled with the new "-j" command-line option for both iostat and sar. BZ#838914 The sysstat package has been modified to store the collected statistics longer. The original period of 7 days has been extended to 28 days, thus allowing for better analysis of more complex performance issues. BZ#850810 With this update, a new "-y" option has been added to the iostat program. This option makes it possible to skip the first "since boot" statistics in the report, so there is no longer any need to post-process the iostat output for this purpose. Users of sysstat are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/sysstat
7.136. mailman
7.136. mailman 7.136.1. RHBA-2012:1474 - mailman bug fix update Updated mailman packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. Mailman is a program used to help manage e-mail discussion lists. Bug Fixes BZ# 772998 The reset_pw.py script contained a typo, which could cause the mailman utility to fail with a traceback. The typo has been corrected, and mailman now works as expected. BZ# 799323 The "urlhost" argument was not handled in the newlist script. When running the "newlist" command with the "--urlhost" argument specified, the contents of the index archive page were not created using proper URLs; the hostname was used instead. With this update, "urlhost" is now handled in the newlist script. If the "--urlhost" argument is specified on the command line, the host URL is used when creating the index archive page instead of the hostname. BZ# 832920 Previously, long lines in e-mails were not wrapped in the web archive, sometimes requiring excessive horizontal scrolling. The "white-space: pre-wrap;" CSS style has been added to all templates, so that long lines are now wrapped in browsers that support that style. BZ# 834023 The "From" string in the e-mail body was not escaped properly. A message containing the "From" string at the beginning of a line was split and displayed in the web archive as two or more messages. The "From" string is now correctly escaped, and messages are no longer split in the described scenario. All users of mailman are advised to upgrade to these updated packages, which fix these bugs.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/mailman
4.41. cvs
4.41. cvs 4.41.1. RHSA-2012:0321 - Moderate: cvs security update Updated cvs packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Concurrent Version System (CVS) is a version control system that can record the history of your files. Security Fix CVE-2012-0804 A heap-based buffer overflow flaw was found in the way the CVS client handled responses from HTTP proxies. A malicious HTTP proxy could use this flaw to cause the CVS client to crash or, possibly, execute arbitrary code with the privileges of the user running the CVS client. All users of cvs are advised to upgrade to these updated packages, which contain a patch to correct this issue.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/cvs
probe::nfsd.create
probe::nfsd.create Name probe::nfsd.create - NFS server creating a file (regular, dir, device, fifo) for a client Synopsis nfsd.create Values fh file handle (the first part is the length of the file handle) iap_valid Attribute flags filelen the length of the file name type file type (regular, dir, device, fifo, ...) filename file name iap_mode file access mode client_ip the IP address of the client Description Sometimes nfsd will call nfsd_create_v3 instead of this probe point.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfsd-create
Chapter 42. Working with Contexts
Chapter 42. Working with Contexts Abstract JAX-WS uses contexts to pass metadata along the messaging chain. This metadata, depending on its scope, is accessible to implementation level code. It is also accessible to JAX-WS handlers that operate on the message below the implementation level. 42.1. Understanding Contexts Overview In many instances it is necessary to pass information about a message to other parts of an application. Apache CXF does this using a context mechanism. Contexts are maps that hold properties relating to an outgoing or an incoming message. The properties stored in the context are typically metadata about the message, and the underlying transport used to communicate the message. For example, the transport specific headers used in transmitting the message, such as the HTTP response code or the JMS correlation ID, are stored in the JAX-WS contexts. The contexts are available at all levels of a JAX-WS application. However, they differ in subtle ways depending upon where in the message processing stack you are accessing the context. JAX-WS Handler implementations have direct access to the contexts and can access all properties that are set in them. Service implementations access contexts by having them injected, and can only access properties that are set in the APPLICATION scope. Consumer implementations can only access properties that are set in the APPLICATION scope. Figure 42.1, "Message Contexts and Message Processing Path" shows how the context properties pass through Apache CXF. As a message passes through the messaging chain, its associated message context passes along with it. Figure 42.1. Message Contexts and Message Processing Path How properties are stored in a context The message contexts are all implementations of the javax.xml.ws.handler.MessageContext interface. The MessageContext interface extends the java.util.Map<String key, Object value> interface. Map objects store information as key value pairs. In a message context, properties are stored as name/value pairs. A property's key is a String that identifies the property. The value of a property can be any value stored in any Java object. When the value is returned from a message context, the application must know the type to expect and cast accordingly. For example, if a property's value is stored in a UserInfo object it is still returned from a message context as an Object object that must be cast back into a UserInfo object. Properties in a message context also have a scope. The scope determines where a property can be accessed in the message processing chain. Property scopes Properties in a message context are scoped. A property can be in one of the following scopes: APPLICATION Properties scoped as APPLICATION are available to JAX-WS Handler implementations, consumer implementation code, and service provider implementation code. If a handler needs to pass a property to the service provider implementation, it sets the property's scope to APPLICATION . All properties set from either the consumer implementation or the service provider implementation contexts are automatically scoped as APPLICATION . HANDLER Properties scoped as HANDLER are only available to JAX-WS Handler implementations. Properties stored in a message context from a Handler implementation are scoped as HANDLER by default. You can change a property's scope using the message context's setScope() method. Example 42.1, "The MessageContext.setScope() Method" shows the method's signature. Example 42.1. 
The MessageContext.setScope() Method setScope String key MessageContext.Scope scope java.lang.IllegalArgumentException The first parameter specifies the property's key. The second parameter specifies the new scope for the property. The scope can be either: MessageContext.Scope.APPLICATION MessageContext.Scope.HANDLER Overview of contexts in handlers Classes that implement the JAX-WS Handler interface have direct access to a message's context information. The message's context information is passed into the Handler implementation's handleMessage() , handleFault() , and close() methods. Handler implementations have access to all of the properties stored in the message context, regardless of their scope. In addition, logical handlers use a specialized message context called a LogicalMessageContext . LogicalMessageContext objects have methods that access the contents of the message body. Overview of contexts in service implementations Service implementations can access properties scoped as APPLICATION from the message context. The service provider's implementation object accesses the message context through the WebServiceContext object. For more information see Section 42.2, "Working with Contexts in a Service Implementation" . Overview of contexts in consumer implementations Consumer implementations have indirect access to the contents of the message context. The consumer implementation has two separate message contexts: Request context - holds a copy of the properties used for outgoing requests Response context - holds a copy of the properties from an incoming response The dispatch layer transfers the properties between the consumer implementation's message contexts and the message context used by the Handler implementations. When a request is passed to the dispatch layer from the consumer implementation, the contents of the request context are copied into the message context that is used by the dispatch layer. When the response is returned from the service, the dispatch layer processes the message and sets the appropriate properties into its message context. After the dispatch layer processes a response, it copies all of the properties scoped as APPLICATION in its message context to the consumer implementation's response context. For more information see Section 42.3, "Working with Contexts in a Consumer Implementation" . 42.2. Working with Contexts in a Service Implementation Overview Context information is made available to service implementations using the WebServiceContext interface. From the WebServiceContext object you can obtain a MessageContext object that is populated with the current request's context properties in the application scope. You can manipulate the values of the properties, and they are propagated back through the response chain. Note The MessageContext interface inherits from the java.util.Map interface. Its contents can be manipulated using the Map interface's methods. Obtaining a context To obtain the message context in a service implementation do the following: Declare a variable of type WebServiceContext. Decorate the variable with the javax.annotation.Resource annotation to indicate that the context information is being injected into the variable. Obtain the MessageContext object from the WebServiceContext object using the getMessageContext() method. Important getMessageContext() can only be used in methods that are decorated with the @WebMethod annotation. 
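A minimal sketch of these three steps, assuming a hypothetical HelloWorldImpl service implementation class with a sayHi operation (names that do not appear elsewhere in this guide), might look like the following:

import javax.annotation.Resource;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.WebServiceContext;
import javax.xml.ws.handler.MessageContext;

// Hypothetical service implementation used only to illustrate the steps listed above.
@WebService
public class HelloWorldImpl {

    // Steps 1 and 2: declare a WebServiceContext variable and mark it for injection.
    @Resource
    private WebServiceContext wsContext;

    // Step 3: obtain the MessageContext inside a method decorated with @WebMethod.
    @WebMethod
    public String sayHi(String name) {
        MessageContext context = wsContext.getMessageContext();
        // Properties scoped as APPLICATION can now be read from, or set in, context.
        return "Hello " + name;
    }
}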
Example 42.2, "Obtaining a Context Object in a Service Implementation" shows code for obtaining a context object. Example 42.2. Obtaining a Context Object in a Service Implementation Reading a property from a context Once you have obtained the MessageContext object for your implementation, you can access the properties stored there using the get() method shown in Example 42.3, "The MessageContext.get() Method" . Example 42.3. The MessageContext.get() Method V get Object key Note This get() is inherited from the Map interface. The key parameter is the string representing the property you want to retrieve from the context. The get() returns an object that must be cast to the proper type for the property. Table 42.1, "Properties Available in the Service Implementation Context" lists a number of the properties that are available in a service implementation's context. Important Changing the values of the object returned from the context also changes the value of the property in the context. Example 42.4, "Getting a Property from a Service's Message Context" shows code for getting the name of the WSDL operation element that represents the invoked operation. Example 42.4. Getting a Property from a Service's Message Context Setting properties in a context Once you have obtained the MessageContext object for your implementation, you can set properties, and change existing properties, using the put() method shown in Example 42.5, "The MessageContext.put() Method" . Example 42.5. The MessageContext.put() Method V put K key V value ClassCastExceptionIllegalArgumentExceptionNullPointerException If the property being set already exists in the message context, the put() method replaces the existing value with the new value and returns the old value. If the property does not already exist in the message context, the put() method sets the property and returns null . Example 42.6, "Setting a Property in a Service's Message Context" shows code for setting the response code for an HTTP request. Example 42.6. Setting a Property in a Service's Message Context Supported contexts Table 42.1, "Properties Available in the Service Implementation Context" lists the properties accessible through the context in a service implementation object. Table 42.1. Properties Available in the Service Implementation Context Property Name Description org.apache.cxf.message.Message PROTOCOL_HEADERS [a] Specifies the transport specific header information. The value is stored as a java.util.Map<String, List<String>> . RESPONSE_CODE Specifies the response code returned to the consumer. The value is stored as an Integer object. ENDPOINT_ADDRESS Specifies the address of the service provider. The value is stored as a String . HTTP_REQUEST_METHOD Specifies the HTTP verb sent with a request. The value is stored as a String . PATH_INFO Specifies the path of the resource being requested. The value is stored as a String . The path is the portion of the URI after the hostname and before any query string. For example, if an endpoint's URI is http://cxf.apache.org/demo/widgets the path is /demo/widgets . QUERY_STRING Specifies the query, if any, attached to the URI used to invoke the request. The value is stored as a String . Queries appear at the end of the URI after a ? . For example, if a request is made to http://cxf.apache.org/demo/widgets?color the query is color . MTOM_ENABLED Specifies whether or not the service provider can use MTOM for SOAP attachments. The value is stored as a Boolean . 
SCHEMA_VALIDATION_ENABLED Specifies whether or not the service provider validates messages against a schema. The value is stored as a Boolean . FAULT_STACKTRACE_ENABLED Specifies if the runtime provides a stack trace along with a fault message. The value is stored as a Boolean . CONTENT_TYPE Specifies the MIME type of the message. The value is stored as a String . BASE_PATH Specifies the path of the resource being requested. The value is stored as a java.net.URL . The path is the portion of the URI after the hostname and before any query string. For example, if an endpoint's URL is http://cxf.apache.org/demo/widgets the base path is /demo/widgets . ENCODING Specifies the encoding of the message. The value is stored as a String . FIXED_PARAMETER_ORDER Specifies whether the parameters must appear in the message in a particular order. The value is stored as a Boolean . MAINTAIN_SESSION Specifies if the consumer wants to maintain the current session for future requests. The value is stored as a Boolean . WSDL_DESCRIPTION Specifies the WSDL document that defines the service being implemented. The value is stored as a org.xml.sax.InputSource object. WSDL_SERVICE Specifies the qualified name of the wsdl:service element that defines the service being implemented. The value is stored as a QName . WSDL_PORT Specifies the qualified name of the wsdl:port element that defines the endpoint used to access the service. The value is stored as a QName . WSDL_INTERFACE Specifies the qualified name of the wsdl:portType element that defines the service being implemented. The value is stored as a QName . WSDL_OPERATION Specifies the qualified name of the wsdl:operation element that corresponds to the operation invoked by the consumer. The value is stored as a QName . javax.xml.ws.handler.MessageContext MESSAGE_OUTBOUND_PROPERTY Specifies if a message is outbound. The value is stored as a Boolean . true specifies that a message is outbound. INBOUND_MESSAGE_ATTACHMENTS Contains any attachments included in the request message. The value is stored as a java.util.Map<String, DataHandler> . The key value for the map is the MIME Content-ID for the header. OUTBOUND_MESSAGE_ATTACHMENTS Contains any attachments for the response message. The value is stored as a java.util.Map<String, DataHandler> . The key value for the map is the MIME Content-ID for the header. WSDL_DESCRIPTION Specifies the WSDL document that defines the service being implemented. The value is stored as a org.xml.sax.InputSource object. WSDL_SERVICE Specifies the qualified name of the wsdl:service element that defines the service being implemented. The value is stored as a QName . WSDL_PORT Specifies the qualified name of the wsdl:port element that defines the endpoint used to access the service. The value is stored as a QName . WSDL_INTERFACE Specifies the qualified name of the wsdl:portType element that defines the service being implemented. The value is stored as a QName . WSDL_OPERATION Specifies the qualified name of the wsdl:operation element that corresponds to the operation invoked by the consumer. The value is stored as a QName . HTTP_RESPONSE_CODE Specifies the response code returned to the consumer. The value is stored as an Integer object. HTTP_REQUEST_HEADERS Specifies the HTTP headers on a request. The value is stored as a java.util.Map<String, List<String>> . HTTP_RESPONSE_HEADERS Specifies the HTTP headers for the response. The value is stored as a java.util.Map<String, List<String>> . 
HTTP_REQUEST_METHOD Specifies the HTTP verb sent with a request. The value is stored as a String . SERVLET_REQUEST Contains the servlet's request object. The value is stored as a javax.servlet.http.HttpServletRequest . SERVLET_RESPONSE Contains the servlet's response object. The value is stored as a javax.servlet.http.HttpResponse . SERVLET_CONTEXT Contains the servlet's context object. The value is stored as a javax.servlet.ServletContext . PATH_INFO Specifies the path of the resource being requested. The value is stored as a String . The path is the portion of the URI after the hostname and before any query string. For example, if an endpoint's URL is http://cxf.apache.org/demo/widgets the path is /demo/widgets . QUERY_STRING Specifies the query, if any, attached to the URI used to invoke the request. The value is stored as a String . Queries appear at the end of the URI after a ? . For example, if a request is made to http://cxf.apache.org/demo/widgets?color the query string is color . REFERENCE_PARAMETERS Specifies the WS-Addressing reference parameters. This includes all of the SOAP headers whose wsa:IsReferenceParameter attribute is set to true . The value is stored as a java.util.List . org.apache.cxf.transport.jms.JMSConstants JMS_SERVER_HEADERS Contains the JMS message headers. For more information see Section 42.4, "Working with JMS Message Properties" . [a] When using HTTP this property is the same as the standard JAX-WS defined property. 42.3. Working with Contexts in a Consumer Implementation Overview Consumer implementations have access to context information through the BindingProvider interface. The BindingProvider instance holds context information in two separate contexts: Request Context The request context enables you to set properties that affect outbound messages. Request context properties are applied to a specific port instance and, once set, the properties affect every subsequent operation invocation made on the port, until such time as a property is explicitly cleared. For example, you might use a request context property to set a connection timeout or to initialize data for sending in a header. Response Context The response context enables you to read the property values set by the response to the last operation invocation made from the current thread. Response context properties are reset after every operation invocation. For example, you might access a response context property to read header information received from the last inbound message. Important Only information that is placed in the application scope of a message context can be accessed by the consumer implementation. Obtaining a context Contexts are obtained using the javax.xml.ws.BindingProvider interface. The BindingProvider interface has two methods for obtaining a context: getRequestContext() The getRequestContext() method, shown in Example 42.7, "The getRequestContext() Method" , returns the request context as a Map object. The returned Map object can be used to directly manipulate the contents of the context. Example 42.7. The getRequestContext() Method Map<String, Object> getRequestContext getResponseContext() The getResponseContext() , shown in Example 42.8, "The getResponseContext() Method" , returns the response context as a Map object. The returned Map object's contents reflect the state of the response context's contents from the most recent successful request on a remote service made in the current thread. Example 42.8. 
The getResponseContext() Method Map<String, Object> getResponseContext Since proxy objects implement the BindingProvider interface, a BindingProvider object can be obtained by casting a proxy object. The contexts obtained from the BindingProvider object are only valid for operations invoked on the proxy object used to create it. Example 42.9, "Getting a Consumer's Request Context" shows code for obtaining the request context for a proxy. Example 42.9. Getting a Consumer's Request Context Reading a property from a context Consumer contexts are stored in java.util.Map<String, Object> objects. The map has keys that are String objects and values that contain arbitrary objects. Use java.util.Map.get() to access an entry in the map of response context properties. To retrieve a particular context property, ContextPropertyName , use the code shown in Example 42.10, "Reading a Response Context Property" . Example 42.10. Reading a Response Context Property Setting properties in a context Consumer contexts are hash maps stored in java.util.Map<String, Object> objects. The map has keys that are String objects and values that are arbitrary objects. To set a property in a context use the java.util.Map.put() method. While you can set properties in both the request context and the response context, only the changes made to the request context have any impact on message processing. The properties in the response context are reset when each remote invocation is completed on the current thread. The code shown in Example 42.11, "Setting a Request Context Property" changes the address of the target service provider by setting the value of the BindingProvider.ENDPOINT_ADDRESS_PROPERTY. Example 42.11. Setting a Request Context Property Important Once a property is set in the request context its value is used for all subsequent remote invocations. You can change the value and the changed value will then be used. Supported contexts Apache CXF supports the following context properties in consumer implementations: Table 42.2. Consumer Context Properties Property Name Description javax.xml.ws.BindingProvider ENDPOINT_ADDRESS_PROPERTY Specifies the address of the target service. The value is stored as a String . USERNAME_PROPERTY [a] Specifies the username used for HTTP basic authentication. The value is stored as a String . PASSWORD_PROPERTY [b] Specifies the password used for HTTP basic authentication. The value is stored as a String . SESSION_MAINTAIN_PROPERTY [c] Specifies if the client wants to maintain session information. The value is stored as a Boolean object. org.apache.cxf.ws.addressing.JAXWSAConstants CLIENT_ADDRESSING_PROPERTIES Specifies the WS-Addressing information used by the consumer to contact the desired service provider. The value is stored as a org.apache.cxf.ws.addressing.AddressingProperties . org.apache.cxf.transports.jms.context.JMSConstants JMS_CLIENT_REQUEST_HEADERS Contains the JMS headers for the message. For more information see Section 42.4, "Working with JMS Message Properties" . [a] This property is overridden by the username defined in the HTTP security settings. [b] This property is overridden by the password defined in the HTTP security settings. [c] The Apache CXF ignores this property. 42.4. Working with JMS Message Properties Abstract The Apache CXF JMS transport has a context mechanism that can be used to inspect a JMS message's properties. The context mechanism can also be used to set a JMS message's properties. 42.4.1. 
Inspecting JMS Message Headers Abstract Consumers and services use different context mechanisms to access the JMS message header properties. However, both mechanisms return the header properties as an org.apache.cxf.transports.jms.context.JMSMessageHeadersType object. Getting the JMS Message Headers in a Service To get the JMS message header properties from the WebServiceContext object, do the following: Obtain the context as described in the section called "Obtaining a context" . Get the message headers from the message context using the message context's get() method with the parameter org.apache.cxf.transports.jms.JMSConstants.JMS_SERVER_HEADERS. Example 42.12, "Getting JMS Message Headers in a Service Implementation" shows code for getting the JMS message headers from a service's message context: Example 42.12. Getting JMS Message Headers in a Service Implementation Getting JMS Message Header Properties in a Consumer Once a message is successfully retrieved from the JMS transport, you can inspect the JMS header properties using the consumer's response context. In addition, you can set or check the length of time the client will wait for a response before timing out, as described in the section called "Client Receive Timeout" . To get the JMS message headers from a consumer's response context, do the following: Get the response context as described in the section called "Obtaining a context" . Get the JMS message header properties from the response context using the context's get() method with org.apache.cxf.transports.jms.JMSConstants.JMS_CLIENT_RESPONSE_HEADERS as the parameter. Example 42.13, "Getting the JMS Headers from a Consumer Response Header" shows code for getting the JMS message header properties from a consumer's response context. Example 42.13. Getting the JMS Headers from a Consumer Response Header The code in Example 42.13, "Getting the JMS Headers from a Consumer Response Header" does the following: Casts the proxy to a BindingProvider. Gets the response context. Retrieves the JMS message headers from the response context. 42.4.2. Inspecting the Message Header Properties Standard JMS Header Properties Table 42.3, "JMS Header Properties" lists the standard properties in the JMS header that you can inspect. Table 42.3. JMS Header Properties Property Name Property Type Getter Method Correlation ID string getJMSCorrelationID() Delivery Mode int getJMSDeliveryMode() Message Expiration long getJMSExpiration() Message ID string getJMSMessageID() Priority int getJMSPriority() Redelivered boolean getJMSRedelivered() Time Stamp long getJMSTimeStamp() Type string getJMSType() Time To Live long getTimeToLive() Optional Header Properties In addition, you can inspect any optional properties stored in the JMS header using JMSMessageHeadersType.getProperty() . The optional properties are returned as a List of org.apache.cxf.transports.jms.context.JMSPropertyType . Optional properties are stored as name/value pairs. Example Example 42.14, "Reading the JMS Header Properties" shows code for inspecting some of the JMS properties using the response context. Example 42.14. Reading the JMS Header Properties The code in Example 42.14, "Reading the JMS Header Properties" does the following: Prints the value of the message's correlation ID. Prints the value of the message's priority property. Prints the value of the message's redelivered property. Gets the list of the message's optional header properties. Gets an Iterator to traverse the list of properties.
Iterates through the list of optional properties and prints their name and value. 42.4.3. Setting JMS Properties Abstract Using the request context in a consumer endpoint, you can set a number of the JMS message header properties and the consumer endpoint's timeout value. These properties are valid for a single invocation. You must reset them each time you invoke an operation on the service proxy. Note that you cannot set header properties in a service. JMS Header Properties Table 42.4, "Settable JMS Header Properties" lists the properties in the JMS header that can be set using the consumer endpoint's request context. Table 42.4. Settable JMS Header Properties Property Name Property Type Setter Method Correlation ID string setJMSCorrelationID() Delivery Mode int setJMSDeliveryMode() Priority int setJMSPriority() Time To Live long setTimeToLive() To set these properties, do the following: Create an org.apache.cxf.transports.jms.context.JMSMessageHeadersType object. Populate the values you want to set using the appropriate setter methods described in Table 42.4, "Settable JMS Header Properties" . Set the values in the request context by calling the request context's put() method using org.apache.cxf.transports.jms.JMSConstants.JMS_CLIENT_REQUEST_HEADERS as the first argument, and the new JMSMessageHeadersType object as the second argument. Optional JMS Header Properties You can also set optional properties in the JMS header. Optional JMS header properties are stored in the JMSMessageHeadersType object that is used to set the other JMS header properties. They are stored as a List object containing org.apache.cxf.transports.jms.context.JMSPropertyType objects. To add optional properties to the JMS header, do the following: Create a JMSPropertyType object. Set the property's name field using setName() . Set the property's value field using setValue() . Add the property to the JMS message header using JMSMessageHeadersType.getProperty().add(JMSPropertyType) . Repeat the procedure until all of the properties have been added to the message header. Client Receive Timeout In addition to the JMS header properties, you can set the amount of time a consumer endpoint waits for a response before timing out. You set the value by calling the request context's put() method with org.apache.cxf.transports.jms.JMSConstants.JMS_CLIENT_RECEIVE_TIMEOUT as the first argument and a long representing the amount of time in milliseconds that you want the consumer to wait as the second argument. Example Example 42.15, "Setting JMS Properties using the Request Context" shows code for setting some of the JMS properties using the request context. Example 42.15. Setting JMS Properties using the Request Context The code in Example 42.15, "Setting JMS Properties using the Request Context" does the following: Gets the InvocationHandler for the proxy whose JMS properties you want to change. Checks to see if the InvocationHandler is a BindingProvider . Casts the returned InvocationHandler object into a BindingProvider object to retrieve the request context. Gets the request context. Creates a JMSMessageHeadersType object to hold the new message header values. Sets the Correlation ID. Sets the Expiration property to 60 minutes. Creates a new JMSPropertyType object. Sets the values for the optional property. Adds the optional property to the message header. Sets the JMS message header values into the request context. Sets the client receive timeout property to 1 second.
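The standard BindingProvider properties listed in Table 42.2, "Consumer Context Properties" are set through the request context in exactly the same way as the JMS properties above. The following sketch is not taken from the guide's examples; it assumes a previously created JAX-WS proxy (here called greeter, a hypothetical name) and shows how the USERNAME_PROPERTY and PASSWORD_PROPERTY entries might be used to configure HTTP basic authentication before invoking an operation.

```java
import java.util.Map;
import javax.xml.ws.BindingProvider;

public final class BasicAuthConfigurator {

    // greeter is a previously created JAX-WS proxy (assumption for illustration).
    public static void configure(Object greeter, String user, String password) {
        // Cast the proxy to BindingProvider to reach its request context.
        BindingProvider bp = (BindingProvider) greeter;
        Map<String, Object> requestContext = bp.getRequestContext();

        // Standard JAX-WS properties corresponding to the USERNAME_PROPERTY
        // and PASSWORD_PROPERTY rows in Table 42.2.
        requestContext.put(BindingProvider.USERNAME_PROPERTY, user);
        requestContext.put(BindingProvider.PASSWORD_PROPERTY, password);

        // From this point on, operations invoked on the proxy in this thread
        // carry HTTP basic authentication credentials.
    }
}
```

As noted in the footnotes to Table 42.2, credentials set this way are overridden by any username and password defined in the HTTP security settings.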
[ "import javax.xml.ws.*; import javax.xml.ws.handler.*; import javax.annotation.*; @WebServiceProvider public class WidgetServiceImpl { @Resource WebServiceContext wsc; @WebMethod public String getColor(String itemNum) { MessageContext context = wsc.getMessageContext(); } }", "import javax.xml.ws.handler.MessageContext; import org.apache.cxf.message.Message; // MessageContext context retrieved in a previous example QName wsdl_operation = (QName)context.get(Message.WSDL_OPERATION);", "import javax.xml.ws.handler.MessageContext; import org.apache.cxf.message.Message; // MessageContext context retrieved in a previous example context.put(Message.RESPONSE_CODE, new Integer(404));", "// Proxy widgetProxy obtained previously BindingProvider bp = (BindingProvider)widgetProxy; Map<String, Object> requestContext = bp.getRequestContext();", "// Invoke an operation. port.SomeOperation(); // Read response context property. java.util.Map<String, Object> responseContext = ((javax.xml.ws.BindingProvider)port).getResponseContext(); PropertyType propValue = ( PropertyType ) responseContext.get( ContextPropertyName );", "// Set request context property. java.util.Map<String, Object> requestContext = ((javax.xml.ws.BindingProvider)port).getRequestContext(); requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, \"http://localhost:8080/widgets\"); // Invoke an operation. port.SomeOperation();", "import org.apache.cxf.transport.jms.JMSConstants; import org.apache.cxf.transports.jms.context.JMSMessageHeadersType; @WebService(serviceName = \"HelloWorldService\", portName = \"HelloWorldPort\", endpointInterface = \"org.apache.cxf.hello_world_jms.HelloWorldPortType\", targetNamespace = \"http://cxf.apache.org/hello_world_jms\") public class GreeterImplTwoWayJMS implements HelloWorldPortType { @Resource protected WebServiceContext wsContext; @WebMethod public String greetMe(String me) { MessageContext mc = wsContext.getMessageContext(); JMSMessageHeadersType headers = (JMSMessageHeadersType) mc.get(JMSConstants.JMS_SERVER_HEADERS); } }", "import org.apache.cxf.transports.jms.context.*; // Proxy greeter initialized previously BindingProvider bp = (BindingProvider)greeter; Map<String, Object> responseContext = bp.getResponseContext(); JMSMessageHeadersType responseHdr = (JMSMessageHeadersType) responseContext.get(JMSConstants.JMS_CLIENT_RESPONSE_HEADERS); }", "// JMSMessageHeadersType messageHdr retrieved previously System.out.println(\"Correlation ID: \"+messageHdr.getJMSCorrelationID()); System.out.println(\"Message Priority: \"+messageHdr.getJMSPriority()); System.out.println(\"Redelivered: \"+messageHdr.getRedelivered()); JMSPropertyType prop = null; List<JMSPropertyType> optProps = messageHdr.getProperty(); Iterator<JMSPropertyType> iter = optProps.iterator(); while (iter.hasNext()) { prop = iter.next(); System.out.println(\"Property name: \"+prop.getName()); System.out.println(\"Property value: \"+prop.getValue()); }", "import org.apache.cxf.transports.jms.context.*; // Proxy greeter initialized previously InvocationHandler handler = Proxy.getInvocationHandler(greeter); BindingProvider bp= null; if (handler instanceof BindingProvider) { bp = (BindingProvider)handler; Map<String, Object> requestContext = bp.getRequestContext(); JMSMessageHeadersType requestHdr = new JMSMessageHeadersType(); requestHdr.setJMSCorrelationID(\"WithBob\"); requestHdr.setJMSExpiration(3600000L); JMSPropertyType prop = new JMSPropertyType; prop.setName(\"MyProperty\"); prop.setValue(\"Bluebird\"); 
requestHdr.getProperty().add(prop); requestContext.put(JMSConstants.CLIENT_REQUEST_HEADERS, requestHdr); requestContext.put(JMSConstants.CLIENT_RECEIVE_TIMEOUT, new Long(1000)); }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/JAXWSContexts
Chapter 2. Installation
Chapter 2. Installation This chapter guides you through the steps to install AMQ Spring Boot Starter in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To build programs with AMQ Spring Boot Starter, you must install Apache Maven . To use AMQ Spring Boot Starter, you must install Java. 2.2. Using the Red Hat Maven repository Configure your Maven environment to download the client library from the Red Hat Maven repository. Procedure Add the Red Hat repository to your Maven settings or POM file. For example configuration files, see Section B.1, "Using the online repository" . <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> Add the library dependency to your POM file. <dependency> <groupId>org.amqphub.spring</groupId> <artifactId>amqp-10-jms-spring-boot-starter</artifactId> <version>2.3.6.redhat-00003</version> </dependency> The client is now available in your Maven project. 2.3. Installing a local Maven repository As an alternative to the online repository, AMQ Spring Boot Starter can be installed to your local filesystem as a file-based Maven repository. Procedure Use your subscription to download the AMQ Clients 2.9.0 Spring Boot Starter Maven repository .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. USD unzip amq-clients-2.9.0-spring-boot-starter-maven-repository.zip On Windows, right-click the .zip file and select Extract All . Configure Maven to use the repository in the maven-repository directory inside the extracted install directory. For more information, see Section B.2, "Using a local repository" .
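After the dependency resolves, the starter auto-configures a JMS ConnectionFactory for Spring's messaging support. The following sketch is not part of this installation chapter; it is a minimal, hedged example that assumes the starter's default auto-configuration, a broker reachable through the starter's connection properties (set, for example, in application.properties; check the starter documentation for the exact property names), and a hypothetical queue name example.queue. It shows how the standard Spring JmsTemplate and @JmsListener APIs might be used once the dependency is in place.

```java
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class HelloAmqApplication {

    public static void main(String[] args) {
        SpringApplication.run(HelloAmqApplication.class, args);
    }

    // Send one message on startup using the auto-configured JmsTemplate.
    @Bean
    CommandLineRunner sender(JmsTemplate jmsTemplate) {
        return args -> jmsTemplate.convertAndSend("example.queue", "Hello AMQ");
    }

    @Component
    static class Receiver {
        // Receive messages from the same (hypothetical) queue.
        @JmsListener(destination = "example.queue")
        public void onMessage(String body) {
            System.out.println("Received: " + body);
        }
    }
}
```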
[ "<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>", "<dependency> <groupId>org.amqphub.spring</groupId> <artifactId>amqp-10-jms-spring-boot-starter</artifactId> <version>2.3.6.redhat-00003</version> </dependency>", "unzip amq-clients-2.9.0-spring-boot-starter-maven-repository.zip" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_spring_boot_starter/installation
20.16.9.3. Setting a port masquerading range
20.16.9.3. Setting a port masquerading range If you want to set the port masquerading range, set it as follows: <forward mode='nat'> <address start='192.0.2.1' end='192.0.2.10'/> </forward> ... Figure 20.38. Port Masquerading Range These values should be set using the iptables commands, as shown in Section 18.3, "Network Address Translation Mode" .
[ "<forward mode='nat'> <address start='192.0.2.1' end='192.0.2.10'/> </forward>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-section-libvirt-dom-xml-devices-network-interfaces-pmr
6.6. Configuring the Role-Based Credential Map Identity Login Module
6.6. Configuring the Role-Based Credential Map Identity Login Module Warning RoleBasedCredentialMap is now deprecated. Procedure 6.2. Configure Role-Based Credential Map Identity Login Module Create the Login Module Configure authentication modules using the Management Console according to the following specification: Complete the Configuration Configure the data source or connection factory in the same way as for the CallerIdentityLoginModule . Result In the above example, the primary login module UsersRolesLoginModule is configured to login the primary user and assign some roles. The RoleBasedCredentialMap login module is configured to hold role to password information in the file defined by the credentialMap property. When the user logs in, the role information from the primary login module is taken, and the role's password is extracted and attached as a private credential to the Subject. Note To use an encrypted password instead of a plaintext one, include the encrypted password in the file defined by the credentialMap property. For more information about encrypting passwords, refer to the JBoss Enterprise Application Platform Security Guide .
[ "<subsystem xmlns=\"urn:jboss:domain:security:1.1\"> <security-domains> <security-domain name=\"my-security-domain\" cache-type=\"default\"> <authentication> <login-module code=\"UsersRoles\" flag=\"required\"> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> <module-option name=\"usersProperties\" value=\"file://USD{jboss.server.config.dir}/teiid-security-users.properties\"/> <module-option name=\"rolesProperties\" value=\"file://USD{jboss.server.config.dir}/teiid-security-roles.properties\"/> </login-module> <login-module code=\"org.teiid.jboss.RoleBasedCredentialMapIdentityLoginModule\" flag=\"required\"> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> <module-option name=\"credentialMap\" value=\"file://USD{jboss.server.config.dir}/teiid-credentialmap.properties\"/> </login-module> </authentication> </security-domain> </security-domains> </subsystem>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/configuring_the_role-based_credential_map_identity_login_module
Appendix C. Timezones
Appendix C. Timezones The API maps Windows Standard Format timezone names to tz database format when specifying a timezone for a virtual machine or VM template. This means the API only accepts certain tz database codes, which the following table lists: Table C.1. Accepted tz database codes tz database Format Windows Standard Format Africa/Cairo Egypt Standard Time Africa/Casablanca Morocco Standard Time Africa/Johannesburg South Africa Standard Time Africa/Lagos W. Central Africa Standard Time Africa/Nairobi E. Africa Standard Time Africa/Reykjavik Greenwich Standard Time Africa/Windhoek Namibia Standard Time America/Anchorage Alaskan Standard Time America/Bogota SA Pacific Standard Time America/Buenos_Aires Argentina Standard Time America/Caracas Venezuela Standard Time America/Chicago Central Standard Time America/Chihuahua Mexico Standard Time America/Chihuahua Mountain Standard Time America/Denver Mountain Standard Time America/Godthab Greenland Standard Time America/Guatemala Central America Standard Time America/Halifax Atlantic Standard Time America/La_Paz SA Western Standard Time America/Los_Angeles Pacific Standard Time America/Manaus Central Brazilian Standard Time America/Mexico_City Central Standard Time America/Mexico_City Mexico Standard Time America/Montevideo Montevideo Standard Time America/New_York Eastern Standard Time America/Phoenix US Mountain Standard Time America/Regina Canada Central Standard Time America/Santiago Pacific SA Standard Time America/Sao_Paulo E. South America Standard Time America/St_Johns Newfoundland Standard Time America/Tijuana Pacific Standard Time Asia/Amman Jordan Standard Time Asia/Baghdad Arabic Standard Time Asia/Baku Azerbaijan Standard Time Asia/Bangkok SE Asia Standard Time Asia/Beirut Middle East Standard Time Asia/Calcutta India Standard Time Asia/Colombo Sri Lanka Standard Time Asia/Dhaka Central Asia Standard Time Asia/Dubai Arabian Standard Time Asia/Irkutsk North Asia East Standard Time Asia/Jerusalem Israel Standard Time Asia/Kabul Afghanistan Standard Time Asia/Karachi Pakistan Standard Time Asia/Katmandu Nepal Standard Time Asia/Krasnoyarsk North Asia Standard Time Asia/Novosibirsk N. Central Asia Standard Time Asia/Rangoon Myanmar Standard Time Asia/Riyadh Arab Standard Time Asia/Seoul Korea Standard Time Asia/Shanghai China Standard Time Asia/Singapore Singapore Standard Time Asia/Taipei Taipei Standard Time Asia/Tashkent West Asia Standard Time Asia/Tehran Iran Standard Time Asia/Tokyo Tokyo Standard Time Asia/Vladivostok Vladivostok Standard Time Asia/Yakutsk Yakutsk Standard Time Asia/Yekaterinburg Ekaterinburg Standard Time Asia/Yerevan Armenian Standard Time Asia/Yerevan Caucasus Standard Time Atlantic/Azores Azores Standard Time Atlantic/Cape_Verde Cape Verde Standard Time Atlantic/South_Georgia Mid-Atlantic Standard Time Australia/Adelaide Cen. Australia Standard Time Australia/Brisbane E. Australia Standard Time Australia/Darwin AUS Central Standard Time Australia/Hobart Tasmania Standard Time Australia/Perth W. Australia Standard Time Australia/Sydney AUS Eastern Standard Time Etc/GMT-3 Georgian Standard Time Etc/GMT+12 Dateline Standard Time Etc/GMT+3 SA Eastern Standard Time Etc/GMT+5 US Eastern Standard Time Europe/Berlin W. Europe Standard Time Europe/Budapest Central Europe Standard Time Europe/Istanbul GTB Standard Time Europe/Kiev FLE Standard Time Europe/London GMT Standard Time Europe/Minsk E. 
Europe Standard Time Europe/Moscow Russian Standard Time Europe/Paris Romance Standard Time Europe/Warsaw Central European Standard Time Indian/Mauritius Mauritius Standard Time Pacific/Apia Samoa Standard Time Pacific/Auckland New Zealand Standard Time Pacific/Fiji Fiji Standard Time Pacific/Guadalcanal Central Pacific Standard Time Pacific/Honolulu Hawaiian Standard Time Pacific/Port_Moresby West Pacific Standard Time Pacific/Tongatapu Tonga Standard Time
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/appe-timezones
Chapter 11. Deploying a Plain JAR
Chapter 11. Deploying a Plain JAR Abstract An alternative method of deploying applications into Apache Karaf is to use plain JAR files. These are usually libraries that contain no deployment metadata . A plain JAR is neither a WAR, nor an OSGi bundle. If the plain JAR occurs as a dependency of a bundle, you must add bundle headers to the JAR. If the JAR exposes a public API, typically the best solution is to convert the existing JAR into a bundle, enabling the JAR to be shared with other bundles. Use the instructions in this chapter to perform the conversion process automatically, using the open source Bnd tool. For more information on the Bnd tool, see Bnd tools website . 11.1. Converting a JAR Using the wrap Scheme Overview You have the option of converting a JAR into a bundle using the wrap: protocol, which can be used with any existing URL format. The wrap: protocol is based on the Bnd utility. Syntax The wrap: protocol has the following basic syntax: The wrap: protocol can prefix any URL that locates a JAR. The locating part of the URL, LocationURL , is used to obtain the plain JAR and the URL handler for the wrap: protocol then converts the JAR automatically into a bundle. Note The wrap: protocol also supports a more elaborate syntax, which enables you to customize the conversion by specifying a Bnd properties file or by specifying individual Bnd properties in the URL. Typically, however, the wrap: protocol is used just with the default settings. Default properties The wrap: protocol is based on the Bnd utility, so it uses exactly the same default properties to generate the bundle as Bnd does. Wrap and install The following example shows how you can use a single console command to download the plain commons-logging JAR from a remote Maven repository, dynamically convert it into an OSGi bundle, and then install it and start it in the OSGi container: Reference The wrap: protocol is provided by the Pax project , which is the umbrella project for a variety of open source OSGi utilities. For full documentation on the wrap: protocol, see the Wrap Protocol reference page.
[ "wrap: LocationURL", "karaf@root> bundle:install -s wrap:mvn:commons-logging/commons-logging/1.1.1" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/deployjar
Chapter 8. Working with clusters
Chapter 8. Working with clusters 8.1. Viewing system event information in an OpenShift Container Platform cluster Events in OpenShift Container Platform are modeled based on events that happen to API objects in an OpenShift Container Platform cluster. 8.1.1. Understanding events Events allow OpenShift Container Platform to record information about real-world events in a resource-agnostic manner. They also allow developers and administrators to consume information about system components in a unified way. 8.1.2. Viewing events using the CLI You can get a list of events in a given project using the CLI. Procedure To view events in a project use the following command: USD oc get events [-n <project>] 1 1 The name of the project. For example: USD oc get events -n openshift-config Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image "gcr.io/google_containers/busybox" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image "gcr.io/google_containers/busybox" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network "openshift-sdn": cannot set "openshift-sdn" ifname to "eth0": no netns: failed to Statfs "/proc/33366/ns/net": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal To view events in your project from the OpenShift Container Platform console. Launch the OpenShift Container Platform console. Click Home Events and select your project. Move to resource that you want to see events. For example: Home Projects <project-name> <resource-name>. Many objects, such as pods and deployments, have their own Events tab as well, which shows events related to that object. 8.1.3. List of events This section describes the events of OpenShift Container Platform. Table 8.1. Configuration events Name Description FailedValidation Failed pod configuration validation. Table 8.2. Container events Name Description BackOff Back-off restarting failed the container. Created Container created. Failed Pull/Create/Start failed. Killing Killing the container. Started Container started. Preempting Preempting other pods. ExceededGracePeriod Container runtime did not stop the pod within specified grace period. Table 8.3. Health events Name Description Unhealthy Container is unhealthy. Table 8.4. Image events Name Description BackOff Back off Ctr Start, image pull. ErrImageNeverPull The image's NeverPull Policy is violated. Failed Failed to pull the image. InspectFailed Failed to inspect the image. Pulled Successfully pulled the image or the container image is already present on the machine. Pulling Pulling the image. Table 8.5. Image Manager events Name Description FreeDiskSpaceFailed Free disk space failed. InvalidDiskCapacity Invalid disk capacity. Table 8.6. Node events Name Description FailedMount Volume mount failed. HostNetworkNotSupported Host network not supported. HostPortConflict Host/port conflict. KubeletSetupFailed Kubelet setup failed. NilShaper Undefined shaper. 
NodeNotReady Node is not ready. NodeNotSchedulable Node is not schedulable. NodeReady Node is ready. NodeSchedulable Node is schedulable. NodeSelectorMismatching Node selector mismatch. OutOfDisk Out of disk. Rebooted Node rebooted. Starting Starting kubelet. FailedAttachVolume Failed to attach volume. FailedDetachVolume Failed to detach volume. VolumeResizeFailed Failed to expand/reduce volume. VolumeResizeSuccessful Successfully expanded/reduced volume. FileSystemResizeFailed Failed to expand/reduce file system. FileSystemResizeSuccessful Successfully expanded/reduced file system. FailedUnMount Failed to unmount volume. FailedMapVolume Failed to map a volume. FailedUnmapDevice Failed unmaped device. AlreadyMountedVolume Volume is already mounted. SuccessfulDetachVolume Volume is successfully detached. SuccessfulMountVolume Volume is successfully mounted. SuccessfulUnMountVolume Volume is successfully unmounted. ContainerGCFailed Container garbage collection failed. ImageGCFailed Image garbage collection failed. FailedNodeAllocatableEnforcement Failed to enforce System Reserved Cgroup limit. NodeAllocatableEnforced Enforced System Reserved Cgroup limit. UnsupportedMountOption Unsupported mount option. SandboxChanged Pod sandbox changed. FailedCreatePodSandBox Failed to create pod sandbox. FailedPodSandBoxStatus Failed pod sandbox status. Table 8.7. Pod worker events Name Description FailedSync Pod sync failed. Table 8.8. System Events Name Description SystemOOM There is an OOM (out of memory) situation on the cluster. Table 8.9. Pod events Name Description FailedKillPod Failed to stop a pod. FailedCreatePodContainer Failed to create a pod container. Failed Failed to make pod data directories. NetworkNotReady Network is not ready. FailedCreate Error creating: <error-msg> . SuccessfulCreate Created pod: <pod-name> . FailedDelete Error deleting: <error-msg> . SuccessfulDelete Deleted pod: <pod-id> . Table 8.10. Horizontal Pod AutoScaler events Name Description SelectorRequired Selector is required. InvalidSelector Could not convert selector into a corresponding internal selector object. FailedGetObjectMetric HPA was unable to compute the replica count. InvalidMetricSourceType Unknown metric source type. ValidMetricFound HPA was able to successfully calculate a replica count. FailedConvertHPA Failed to convert the given HPA. FailedGetScale HPA controller was unable to get the target's current scale. SucceededGetScale HPA controller was able to get the target's current scale. FailedComputeMetricsReplicas Failed to compute desired number of replicas based on listed metrics. FailedRescale New size: <size> ; reason: <msg> ; error: <error-msg> . SuccessfulRescale New size: <size> ; reason: <msg> . FailedUpdateStatus Failed to update status. Table 8.11. Network events (openshift-sdn) Name Description Starting Starting OpenShift SDN. NetworkFailed The pod's network interface has been lost and the pod will be stopped. Table 8.12. Network events (kube-proxy) Name Description NeedPods The service-port <serviceName>:<port> needs pods. Table 8.13. Volume events Name Description FailedBinding There are no persistent volumes available and no storage class is set. VolumeMismatch Volume size or class is different from what is requested in claim. VolumeFailedRecycle Error creating recycler pod. VolumeRecycled Occurs when volume is recycled. RecyclerPod Occurs when pod is recycled. VolumeDelete Occurs when volume is deleted. VolumeFailedDelete Error when deleting the volume. 
ExternalProvisioning Occurs when volume for the claim is provisioned either manually or via external software. ProvisioningFailed Failed to provision volume. ProvisioningCleanupFailed Error cleaning provisioned volume. ProvisioningSucceeded Occurs when the volume is provisioned successfully. WaitForFirstConsumer Delay binding until pod scheduling. Table 8.14. Lifecycle hooks Name Description FailedPostStartHook Handler failed for pod start. FailedPreStopHook Handler failed for pre-stop. UnfinishedPreStopHook Pre-stop hook unfinished. Table 8.15. Deployments Name Description DeploymentCancellationFailed Failed to cancel deployment. DeploymentCancelled Canceled deployment. DeploymentCreated Created new replication controller. IngressIPRangeFull No available Ingress IP to allocate to service. Table 8.16. Scheduler events Name Description FailedScheduling Failed to schedule pod: <pod-namespace>/<pod-name> . This event is raised for multiple reasons, for example: AssumePodVolumes failed, Binding rejected etc. Preempted By <preemptor-namespace>/<preemptor-name> on node <node-name> . Scheduled Successfully assigned <pod-name> to <node-name> . Table 8.17. Daemon set events Name Description SelectingAll This daemon set is selecting all pods. A non-empty selector is required. FailedPlacement Failed to place pod on <node-name> . FailedDaemonPod Found failed daemon pod <pod-name> on node <node-name> , will try to kill it. Table 8.18. LoadBalancer service events Name Description CreatingLoadBalancerFailed Error creating load balancer. DeletingLoadBalancer Deleting load balancer. EnsuringLoadBalancer Ensuring load balancer. EnsuredLoadBalancer Ensured load balancer. UnAvailableLoadBalancer There are no available nodes for LoadBalancer service. LoadBalancerSourceRanges Lists the new LoadBalancerSourceRanges . For example, <old-source-range> <new-source-range> . LoadbalancerIP Lists the new IP address. For example, <old-ip> <new-ip> . ExternalIP Lists external IP address. For example, Added: <external-ip> . UID Lists the new UID. For example, <old-service-uid> <new-service-uid> . ExternalTrafficPolicy Lists the new ExternalTrafficPolicy . For example, <old-policy> <new-policy> . HealthCheckNodePort Lists the new HealthCheckNodePort . For example, <old-node-port> new-node-port> . UpdatedLoadBalancer Updated load balancer with new hosts. LoadBalancerUpdateFailed Error updating load balancer with new hosts. DeletingLoadBalancer Deleting load balancer. DeletingLoadBalancerFailed Error deleting load balancer. DeletedLoadBalancer Deleted load balancer. 8.2. Estimating the number of pods your OpenShift Container Platform nodes can hold As a cluster administrator, you can use the OpenShift Cluster Capacity Tool to view the number of pods that can be scheduled to increase the current resources before they become exhausted, and to ensure any future pods can be scheduled. This capacity comes from an individual node host in a cluster, and includes CPU, memory, disk space, and others. 8.2.1. Understanding the OpenShift Cluster Capacity Tool The OpenShift Cluster Capacity Tool simulates a sequence of scheduling decisions to determine how many instances of an input pod can be scheduled on the cluster before it is exhausted of resources to provide a more accurate estimation. Note The remaining allocatable capacity is a rough estimation, because it does not count all of the resources being distributed among nodes. 
It analyzes only the remaining resources and estimates the available capacity that is still consumable in terms of a number of instances of a pod with given requirements that can be scheduled in a cluster. Also, pods might only have scheduling support on particular sets of nodes based on its selection and affinity criteria. As a result, the estimation of which remaining pods a cluster can schedule can be difficult. You can run the OpenShift Cluster Capacity Tool as a stand-alone utility from the command line, or as a job in a pod inside an OpenShift Container Platform cluster. Running the tool as job inside of a pod enables you to run it multiple times without intervention. 8.2.2. Running the OpenShift Cluster Capacity Tool on the command line You can run the OpenShift Cluster Capacity Tool from the command line to estimate the number of pods that can be scheduled onto your cluster. You create a sample pod spec file, which the tool uses for estimating resource usage. The pod spec specifies its resource requirements as limits or requests . The cluster capacity tool takes the pod's resource requirements into account for its estimation analysis. Prerequisites Run the OpenShift Cluster Capacity Tool , which is available as a container image from the Red Hat Ecosystem Catalog. Create a sample pod spec file: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi Create the cluster role: USD oc create -f <file_name>.yaml For example: USD oc create -f pod-spec.yaml Procedure To use the cluster capacity tool on the command line: From the terminal, log in to the Red Hat Registry: USD podman login registry.redhat.io Pull the cluster capacity tool image: USD podman pull registry.redhat.io/openshift4/ose-cluster-capacity Run the cluster capacity tool: USD podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity \ /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml \ --verbose where: <pod_spec>.yaml Specifies the pod spec to use. verbose Outputs a detailed description of how many pods can be scheduled on each node in the cluster. Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s) In the above example, the number of estimated pods that can be scheduled onto the cluster is 88. 8.2.3. Running the OpenShift Cluster Capacity Tool as a job inside a pod Running the OpenShift Cluster Capacity Tool as a job inside of a pod allows you to run the tool multiple times without needing user intervention. You run the OpenShift Cluster Capacity Tool as a job by using a ConfigMap object. Prerequisites Download and install OpenShift Cluster Capacity Tool . 
Procedure To run the cluster capacity tool: Create the cluster role: Create a YAML file similar to the following: kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [""] resources: ["pods", "nodes", "persistentvolumeclaims", "persistentvolumes", "services", "replicationcontrollers"] verbs: ["get", "watch", "list"] - apiGroups: ["apps"] resources: ["replicasets", "statefulsets"] verbs: ["get", "watch", "list"] - apiGroups: ["policy"] resources: ["poddisruptionbudgets"] verbs: ["get", "watch", "list"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "watch", "list"] Create the cluster role by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create sa cluster-capacity-sa Create the service account: USD oc create sa cluster-capacity-sa -n default Add the role to the service account: USD oc adm policy add-cluster-role-to-user cluster-capacity-role \ system:serviceaccount:<namespace>:cluster-capacity-sa where: <namespace> Specifies the namespace where the pod is located. Define and create the pod spec: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi Create the pod by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create -f pod.yaml Created a config map object by running the following command: USD oc create configmap cluster-capacity-configmap \ --from-file=pod.yaml=pod.yaml The cluster capacity analysis is mounted in a volume using a config map object named cluster-capacity-configmap to mount the input pod spec file pod.yaml into a volume test-volume at the path /test-pod . Create the job using the below example of a job specification file: Create a YAML file similar to the following: apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: "Always" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: "true" command: - "/bin/sh" - "-ec" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: "Never" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap 1 A required environment variable letting the cluster capacity tool know that it is running inside a cluster as a pod. The pod.yaml key of the ConfigMap object is the same as the Pod spec file name, though it is not required. By doing this, the input pod spec file can be accessed inside the pod as /test-pod/pod.yaml . Run the cluster capacity image as a job in a pod by running the following command: USD oc create -f cluster-capacity-job.yaml Verification Check the job logs to find the number of pods that can be scheduled in the cluster: USD oc logs jobs/cluster-capacity-job Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). 
Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s) 8.3. Configuring an OpenShift Container Platform cluster for pods As an administrator, you can create and maintain an efficient cluster for pods. By keeping your cluster efficient, you can provide a better environment for your developers using such tools as what a pod does when it exits, ensuring that the required number of pods is always running, when to restart pods designed to run only once, limit the bandwidth available to pods, and how to keep pods running during disruptions. 8.3.1. Configuring how pods behave after restart A pod restart policy determines how OpenShift Container Platform responds when Containers in that pod exit. The policy applies to all Containers in that pod. The possible values are: Always - Tries restarting a successfully exited Container on the pod continuously, with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. The default is Always . OnFailure - Tries restarting a failed Container on the pod with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. Never - Does not try to restart exited or failed Containers on the pod. Pods immediately fail and exit. After the pod is bound to a node, the pod will never be bound to another node. This means that a controller is necessary in order for a pod to survive node failure: Condition Controller Type Restart Policy Pods that are expected to terminate (such as batch computations) Job OnFailure or Never Pods that are expected to not terminate (such as web servers) Replication controller Always . Pods that must run one-per-machine Daemon set Any If a Container on a pod fails and the restart policy is set to OnFailure , the pod stays on the node and the Container is restarted. If you do not want the Container to restart, use a restart policy of Never . If an entire pod fails, OpenShift Container Platform starts a new pod. Developers must address the possibility that applications might be restarted in a new pod. In particular, applications must handle temporary files, locks, incomplete output, and so forth caused by runs. Note Kubernetes architecture expects reliable endpoints from cloud providers. When a cloud provider is down, the kubelet prevents OpenShift Container Platform from restarting. If the underlying cloud provider endpoints are not reliable, do not install a cluster using cloud provider integration. Install the cluster as if it was in a no-cloud environment. It is not recommended to toggle cloud provider integration on or off in an installed cluster. For details on how OpenShift Container Platform uses restart policy with failed Containers, see the Example States in the Kubernetes documentation. 8.3.2. Limiting the bandwidth available to pods You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth. Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate. Ingress traffic (to the pod) is handled by shaping queued packets to effectively handle data. The limits you place on a pod do not affect the bandwidth of other pods. Procedure To limit the bandwidth on a pod: Write an object definition JSON file, and specify the data traffic speed using kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations. 
For example, to limit both pod egress and ingress bandwidth to 10M/s: Limited Pod object definition { "kind": "Pod", "spec": { "containers": [ { "image": "openshift/hello-openshift", "name": "hello-openshift" } ] }, "apiVersion": "v1", "metadata": { "name": "iperf-slow", "annotations": { "kubernetes.io/ingress-bandwidth": "10M", "kubernetes.io/egress-bandwidth": "10M" } } } Create the pod using the object definition: USD oc create -f <file_or_dir_path> 8.3.3. Understanding how to use pod disruption budgets to specify the number of pods that must be up A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance. PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures). A PodDisruptionBudget object's configuration consists of the following key parts: A label selector, which is a label query over a set of pods. An availability level, which specifies the minimum number of pods that must be available simultaneously, either: minAvailable is the number of pods must always be available, even during a disruption. maxUnavailable is the number of pods can be unavailable during a disruption. Note Available refers to the number of pods that has condition Ready=True . Ready=True refers to the pod that is able to serve requests and should be added to the load balancing pools of all matching services. A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained. You can check for pod disruption budgets across all projects with the following: USD oc get poddisruptionbudget --all-namespaces Example output NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #... The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted. Note Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements. 8.3.3.1. Specifying the number of pods that must be up with pod disruption budgets You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time. Procedure To configure a pod disruption budget: Create a YAML file with the an object definition similar to the following: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. 
The result of matchLabels and matchExpressions are logically conjoined. Leave this paramter blank, for example selector {} , to select all pods in the project. Or: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. Leave this paramter blank, for example selector {} , to select all pods in the project. Run the following command to add the object to project: USD oc create -f </path/to/file> -n <project_name> 8.3.4. Preventing pod removal using critical pods There are a number of core components that are critical to a fully functional cluster, but, run on a regular cluster node rather than the master. A cluster might stop working properly if a critical add-on is evicted. Pods marked as critical are not allowed to be evicted. Procedure To make a pod critical: Create a Pod spec or edit existing pods to include the system-cluster-critical priority class: apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1 1 Default priority class for pods that should never be evicted from a node. Alternatively, you can specify system-node-critical for pods that are important to the cluster but can be removed if necessary. Create the pod: USD oc create -f <file-name>.yaml 8.4. Restrict resource consumption with limit ranges By default, containers run with unbounded compute resources on an OpenShift Container Platform cluster. With limit ranges, you can restrict resource consumption for specific objects in a project: pods and containers: You can set minimum and maximum requirements for CPU and memory for pods and their containers. Image streams: You can set limits on the number of images and tags in an ImageStream object. Images: You can limit the size of images that can be pushed to an internal registry. Persistent volume claims (PVC): You can restrict the size of the PVCs that can be requested. If a pod does not meet the constraints imposed by the limit range, the pod cannot be created in the namespace. 8.4.1. About limit ranges A limit range, defined by a LimitRange object, restricts resource consumption in a project. In the project you can set specific resource limits for a pod, container, image, image stream, or persistent volume claim (PVC). All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. The following shows a limit range object for all components: pod, container, image, image stream, or PVC. You can configure limits for any or all of these components in the same object. You create a different limit range object for each project where you want to control resources. Sample limit range object for a container apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" spec: limits: - type: "Container" max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: cpu: "300m" memory: "200Mi" defaultRequest: cpu: "200m" memory: "100Mi" maxLimitRequestRatio: cpu: "10" 8.4.1.1. About component limits The following examples show limit range parameters for each component. 
The examples are broken out for clarity. You can create a single LimitRange object for any or all components as necessary. 8.4.1.1.1. Container limits A limit range allows you to specify the minimum and maximum CPU and memory that each container in a pod can request for a specific project. If a container is created in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. The container CPU or memory request and limit must be greater than or equal to the min resource constraint for containers that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraint for containers that are specified in the LimitRange object. If the LimitRange object defines a max CPU, you do not need to define a CPU request value in the Pod spec. But you must specify a CPU limit value that satisfies the maximum CPU constraint specified in the limit range. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio value for containers that is specified in the LimitRange object. If the LimitRange object defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. OpenShift Container Platform calculates the limit-to-request ratio by dividing the limit by the request . This value should be a non-negative integer greater than 1. For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5 . This ratio must be less than or equal to the maxLimitRequestRatio . If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Container LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Container" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "100m" 4 memory: "4Mi" 5 default: cpu: "300m" 6 memory: "200Mi" 7 defaultRequest: cpu: "200m" 8 memory: "100Mi" 9 maxLimitRequestRatio: cpu: "10" 10 1 The name of the LimitRange object. 2 The maximum amount of CPU that a single container in a pod can request. 3 The maximum amount of memory that a single container in a pod can request. 4 The minimum amount of CPU that a single container in a pod can request. 5 The minimum amount of memory that a single container in a pod can request. 6 The default amount of CPU that a container can use if not specified in the Pod spec. 7 The default amount of memory that a container can use if not specified in the Pod spec. 8 The default amount of CPU that a container can request if not specified in the Pod spec. 9 The default amount of memory that a container can request if not specified in the Pod spec. 10 The maximum limit-to-request ratio for a container. 8.4.1.1.2. Pod limits A limit range allows you to specify the minimum and maximum CPU and memory limits for all containers across a pod in a given project. To create a container in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. 
Across all containers in a pod, the following must hold true: The container CPU or memory request and limit must be greater than or equal to the min resource constraints for pods that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraints for pods that are specified in the LimitRange object. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio constraint specified in the LimitRange object. Pod LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "200m" 4 memory: "6Mi" 5 maxLimitRequestRatio: cpu: "10" 6 1 The name of the limit range object. 2 The maximum amount of CPU that a pod can request across all containers. 3 The maximum amount of memory that a pod can request across all containers. 4 The minimum amount of CPU that a pod can request across all containers. 5 The minimum amount of memory that a pod can request across all containers. 6 The maximum limit-to-request ratio for a container. 8.4.1.1.3. Image limits A LimitRange object allows you to specify the maximum size of an image that can be pushed to an OpenShift image registry. When pushing images to an OpenShift image registry, the following must hold true: The size of the image must be less than or equal to the max size for images that is specified in the LimitRange object. Image LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2 1 The name of the LimitRange object. 2 The maximum size of an image that can be pushed to an OpenShift image registry. Note To prevent blobs that exceed the limit from being uploaded to the registry, the registry must be configured to enforce quotas. Warning The image size is not always available in the manifest of an uploaded image. This is especially the case for images built with Docker 1.10 or higher and pushed to a v2 registry. If such an image is pulled with an older Docker daemon, the image manifest is converted by the registry to schema v1 lacking all the size information. No storage limit set on images prevent it from being uploaded. The issue is being addressed. 8.4.1.1.4. Image stream limits A LimitRange object allows you to specify limits for image streams. For each image stream, the following must hold true: The number of image tags in an ImageStream specification must be less than or equal to the openshift.io/image-tags constraint in the LimitRange object. The number of unique references to images in an ImageStream specification must be less than or equal to the openshift.io/images constraint in the limit range object. Imagestream LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 1 The name of the LimitRange object. 2 The maximum number of unique image tags in the imagestream.spec.tags parameter in imagestream spec. 3 The maximum number of unique image references in the imagestream.status.tags parameter in the imagestream spec. The openshift.io/image-tags resource represents unique image references. Possible references are an ImageStreamTag , an ImageStreamImage and a DockerImage . Tags can be created using the oc tag and oc import-image commands. 
No distinction is made between internal and external references. However, each unique reference tagged in an ImageStream specification is counted just once. This constraint does not restrict pushes to an internal container image registry in any way, but is useful for restricting the number of tags. The openshift.io/images resource represents unique image names recorded in image stream status. It allows you to restrict the number of images that can be pushed to the OpenShift image registry. Internal and external references are not distinguished. 8.4.1.1.5. Persistent volume claim limits A LimitRange object allows you to restrict the storage requested in a persistent volume claim (PVC). Across all persistent volume claims in a project, the following must hold true: The resource request in a persistent volume claim (PVC) must be greater than or equal to the min constraint for PVCs that is specified in the LimitRange object. The resource request in a persistent volume claim (PVC) must be less than or equal to the max constraint for PVCs that is specified in the LimitRange object. PVC LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "PersistentVolumeClaim" min: storage: "2Gi" 2 max: storage: "50Gi" 3 1 The name of the LimitRange object. 2 The minimum amount of storage that can be requested in a persistent volume claim. 3 The maximum amount of storage that can be requested in a persistent volume claim. 8.4.2. Creating a Limit Range To apply a limit range to a project: Create a LimitRange object with your required specifications: apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" 2 max: cpu: "2" memory: "1Gi" min: cpu: "200m" memory: "6Mi" - type: "Container" 3 max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: 4 cpu: "300m" memory: "200Mi" defaultRequest: 5 cpu: "200m" memory: "100Mi" maxLimitRequestRatio: 6 cpu: "10" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: "PersistentVolumeClaim" 9 min: storage: "2Gi" max: storage: "50Gi" 1 Specify a name for the LimitRange object. 2 To set limits for a pod, specify the minimum and maximum CPU and memory requests as needed. 3 To set limits for a container, specify the minimum and maximum CPU and memory requests as needed. 4 Optional. For a container, specify the default amount of CPU or memory that a container can use, if not specified in the Pod spec. 5 Optional. For a container, specify the default amount of CPU or memory that a container can request, if not specified in the Pod spec. 6 Optional. For a container, specify the maximum limit-to-request ratio that can be specified in the Pod spec. 7 To set limits for an Image object, set the maximum size of an image that can be pushed to an OpenShift image registry. 8 To set limits for an image stream, set the maximum number of image tags and references that can be in the ImageStream object file, as needed. 9 To set limits for a persistent volume claim, set the minimum and maximum amount of storage that can be requested. Create the object: USD oc create -f <limit_range_file> -n <project> 1 1 Specify the name of the YAML file you created and the project where you want the limits to apply. 8.4.3. Viewing a limit You can view any limits defined in a project by navigating in the web console to the project's Quota page.
You can also use the CLI to view limit range details: Get the list of LimitRange objects defined in the project. For example, for a project called demoproject : USD oc get limits -n demoproject NAME CREATED AT resource-limits 2020-07-15T17:14:23Z Describe the LimitRange object you are interested in, for example the resource-limits limit range: USD oc describe limits resource-limits -n demoproject Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - - 8.4.4. Deleting a Limit Range To remove an active LimitRange object so that its limits are no longer enforced in a project: Run the following command: USD oc delete limits <limit_name> 8.5. Configuring cluster memory to meet container memory and risk requirements As a cluster administrator, you can help your clusters operate efficiently through managing application memory by: Determining the memory and risk requirements of a containerized application component and configuring the container memory parameters to suit those requirements. Configuring containerized application runtimes (for example, OpenJDK) to adhere optimally to the configured container memory parameters. Diagnosing and resolving memory-related error conditions associated with running in a container. 8.5.1. Understanding managing application memory It is recommended to fully read the overview of how OpenShift Container Platform manages Compute Resources before proceeding. For each kind of resource (memory, CPU, storage), OpenShift Container Platform allows optional request and limit values to be placed on each container in a pod. Note the following about memory requests and memory limits: Memory request The memory request value, if specified, influences the OpenShift Container Platform scheduler. The scheduler considers the memory request when scheduling a container to a node, then fences off the requested memory on the chosen node for the use of the container. If a node's memory is exhausted, OpenShift Container Platform prioritizes evicting its containers whose memory usage most exceeds their memory request. In serious cases of memory exhaustion, the node OOM killer may select and kill a process in a container based on a similar metric. The cluster administrator can assign quota or assign default values for the memory request value. The cluster administrator can override the memory request values that a developer specifies, to manage cluster overcommit. Memory limit The memory limit value, if specified, provides a hard limit on the memory that can be allocated across all the processes in a container. If the memory allocated by all of the processes in a container exceeds the memory limit, the node Out of Memory (OOM) killer will immediately select and kill a process in the container. If both memory request and limit are specified, the memory limit value must be greater than or equal to the memory request. The cluster administrator can assign quota or assign default values for the memory limit value. The minimum memory limit is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low.
Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. 8.5.1.1. Managing application memory strategy The steps for sizing application memory on OpenShift Container Platform are as follows: Determine expected container memory usage Determine expected mean and peak container memory usage, empirically if necessary (for example, by separate load testing). Remember to consider all the processes that may potentially run in parallel in the container: for example, does the main application spawn any ancillary scripts? Determine risk appetite Determine risk appetite for eviction. If the risk appetite is low, the container should request memory according to the expected peak usage plus a percentage safety margin. If the risk appetite is higher, it may be more appropriate to request memory according to the expected mean usage. Set container memory request Set container memory request based on the above. The more accurately the request represents the application memory usage, the better. If the request is too high, cluster and quota usage will be inefficient. If the request is too low, the chances of application eviction increase. Set container memory limit, if required Set container memory limit, if required. Setting a limit has the effect of immediately killing a container process if the combined memory usage of all processes in the container exceeds the limit, and is therefore a mixed blessing. On the one hand, it may make unanticipated excess memory usage obvious early ("fail fast"); on the other hand it also terminates processes abruptly. Note that some OpenShift Container Platform clusters may require a limit value to be set; some may override the request based on the limit; and some application images rely on a limit value being set as this is easier to detect than a request value. If the memory limit is set, it should not be set to less than the expected peak container memory usage plus a percentage safety margin. Ensure application is tuned Ensure application is tuned with respect to configured request and limit values, if appropriate. This step is particularly relevant to applications which pool memory, such as the JVM. The rest of this page discusses this. Additional resources Understanding compute resources and containers 8.5.2. Understanding OpenJDK settings for OpenShift Container Platform The default OpenJDK settings do not work well with containerized environments. As a result, some additional Java memory settings must always be provided whenever running the OpenJDK in a container. The JVM memory layout is complex, version dependent, and describing it in detail is beyond the scope of this documentation. However, as a starting point for running OpenJDK in a container, at least the following three memory-related tasks are key: Overriding the JVM maximum heap size. Encouraging the JVM to release unused memory to the operating system, if appropriate. Ensuring all JVM processes within a container are appropriately configured. Optimally tuning JVM workloads for running in a container is beyond the scope of this documentation, and may involve setting multiple additional JVM options. 8.5.2.1. Understanding how to override the JVM maximum heap size For many Java workloads, the JVM heap is the largest single consumer of memory. Currently, the OpenJDK defaults to allowing up to 1/4 (1/ -XX:MaxRAMFraction ) of the compute node's memory to be used for the heap, regardless of whether the OpenJDK is running in a container or not. 
It is therefore essential to override this behavior, especially if a container memory limit is also set. There are at least two ways the above can be achieved: If the container memory limit is set and the experimental options are supported by the JVM, set -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap . Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This sets -XX:MaxRAM to the container memory limit, and the maximum heap size ( -XX:MaxHeapSize / -Xmx ) to 1/ -XX:MaxRAMFraction (1/4 by default). Directly override one of -XX:MaxRAM , -XX:MaxHeapSize or -Xmx . This option involves hard-coding a value, but has the advantage of allowing a safety margin to be calculated. 8.5.2.2. Understanding how to encourage the JVM to release unused memory to the operating system By default, the OpenJDK does not aggressively return unused memory to the operating system. This may be appropriate for many containerized Java workloads, but notable exceptions include workloads where additional active processes co-exist with a JVM within a container, whether those additional processes are native, additional JVMs, or a combination of the two. The OpenShift Container Platform Jenkins maven slave image uses the following JVM arguments to encourage the JVM to release unused memory to the operating system: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90. These arguments are intended to return heap memory to the operating system whenever allocated memory exceeds 110% of in-use memory ( -XX:MaxHeapFreeRatio ), spending up to 20% of CPU time in the garbage collector ( -XX:GCTimeRatio ). At no time will the application heap allocation be less than the initial heap allocation (overridden by -XX:InitialHeapSize / -Xms ). Detailed additional information is available in Tuning Java's footprint in OpenShift (Part 1) , Tuning Java's footprint in OpenShift (Part 2) , and OpenJDK and Containers . 8.5.2.3. Understanding how to ensure all JVM processes within a container are appropriately configured In the case that multiple JVMs run in the same container, it is essential to ensure that they are all configured appropriately. For many workloads it will be necessary to grant each JVM a percentage memory budget, leaving a perhaps substantial additional safety margin. Many Java tools use different environment variables ( JAVA_OPTS , GRADLE_OPTS , MAVEN_OPTS , and so on) to configure their JVMs and it can be challenging to ensure that the right settings are being passed to the right JVM. The JAVA_TOOL_OPTIONS environment variable is always respected by the OpenJDK, and values specified in JAVA_TOOL_OPTIONS will be overridden by other options specified on the JVM command line. To ensure that these options are used by default for all JVM workloads run in the slave image, the OpenShift Container Platform Jenkins maven slave image sets: JAVA_TOOL_OPTIONS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true" Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This does not guarantee that additional options are not required, but is intended to be a helpful starting point. 8.5.3. Finding the memory request and limit from within a pod An application wishing to dynamically discover its memory request and limit from within a pod should use the Downward API.
Procedure Configure the pod to add the MEMORY_REQUEST and MEMORY_LIMIT stanzas: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: fedora:latest command: - sleep - "3600" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi 1 Add this stanza to discover the application memory request value. 2 Add this stanza to discover the application memory limit value. Create the pod by running the following command: USD oc create -f <file-name>.yaml Verification Access the pod using a remote shell: USD oc rsh test Check that the requested values were applied: USD env | grep MEMORY | sort Example output MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184 Note The memory limit value can also be read from inside the container from the /sys/fs/cgroup/memory/memory.limit_in_bytes file. 8.5.4. Understanding OOM kill policy OpenShift Container Platform can kill a process in a container if the total memory usage of all the processes in the container exceeds the memory limit, or in serious cases of node memory exhaustion. When a process is Out of Memory (OOM) killed, this might result in the container exiting immediately. If the container PID 1 process receives the SIGKILL , the container will exit immediately. Otherwise, the container behavior is dependent on the behavior of the other processes. For example, a container process that exits with code 137 indicates that it received a SIGKILL signal. If the container does not exit immediately, an OOM kill is detectable as follows: Access the pod using a remote shell: # oc rsh test Run the following command to see the current OOM kill count in /sys/fs/cgroup/memory/memory.oom_control : USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 0 Run the following command to provoke an OOM kill: USD sed -e '' </dev/zero Example output Killed Run the following command to view the exit status of the sed command: USD echo USD? Example output 137 The 137 exit code indicates that the container process received a SIGKILL signal. Run the following command to see that the OOM kill counter in /sys/fs/cgroup/memory/memory.oom_control incremented: USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 1 If one or more processes in a pod are OOM killed, when the pod subsequently exits, whether immediately or not, it will have phase Failed and reason OOMKilled . An OOM-killed pod might be restarted depending on the value of restartPolicy . If not restarted, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. Use the following command to get the pod status: USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m If the pod has not restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed If restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running 8.5.5.
Understanding pod eviction OpenShift Container Platform may evict a pod from its node when the node's memory is exhausted. Depending on the extent of memory exhaustion, the eviction may or may not be graceful. Graceful eviction implies the main process (PID 1) of each container receiving a SIGTERM signal, then some time later a SIGKILL signal if the process has not exited already. Non-graceful eviction implies the main process of each container immediately receiving a SIGKILL signal. An evicted pod has phase Failed and reason Evicted . It will not be restarted, regardless of the value of restartPolicy . However, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m USD oc get pod test -o yaml Example output ... status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted 8.6. Configuring your cluster to place pods on overcommitted nodes In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. For example, you might want to use overcommitment in development environments where a trade-off of guaranteed performance for capacity is acceptable. Containers can specify compute resource requests and limits. Requests are used for scheduling your container and provide a minimum service guarantee. Limits constrain the amount of compute resource that can be consumed on your node. The scheduler attempts to optimize the compute resource use across all nodes in your cluster. It places pods onto specific nodes, taking the pods' compute resource requests and nodes' available capacity into consideration. OpenShift Container Platform administrators can control the level of overcommit and manage container density on nodes. You can configure cluster-level overcommit using the ClusterResourceOverride Operator to override the ratio between requests and limits set on developer containers. In conjunction with node overcommit and project memory and CPU limits and defaults , you can adjust the resource limit and request to achieve the desired level of overcommit. Note In OpenShift Container Platform, you must enable cluster-level overcommit. Node overcommitment is enabled by default. See Disabling overcommitment for a node . 8.6.1. Resource requests and overcommitment For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node. The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service. Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. 
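As a minimal sketch (the pod name, container name, and image are assumptions used only for illustration), a container that is scheduled against a 1Gi memory request but allowed to consume up to a 2Gi limit can be declared as follows:

apiVersion: v1
kind: Pod
metadata:
  name: overcommit-example
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:latest
    resources:
      requests:
        memory: 1Gi   # the scheduler places the pod based on this value
      limits:
        memory: 2Gi   # the container may consume up to this hard cap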
The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 200% overcommitted. 8.6.2. Cluster-level overcommit using the Cluster Resource Override Operator The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits. You must install the Cluster Resource Override Operator using the OpenShift Container Platform console or CLI as shown in the following sections. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. 3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. 4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. Note The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply. When configured, overrides can be enabled per-project by applying the following label to the Namespace object for each project: apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" # ... The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator. 8.6.2.1. Installing the Cluster Resource Override Operator using the web console You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, navigate to Home Projects Click Create Project . Specify clusterresourceoverride-operator as the name of the project. Click Create . Navigate to Operators OperatorHub . Choose ClusterResourceOverride Operator from the list of available Operators and click Install . On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode . 
Make sure clusterresourceoverride-operator is selected for Installed Namespace . Select an Update Channel and Approval Strategy . Click Install . On the Installed Operators page, click ClusterResourceOverride . On the ClusterResourceOverride Operator details page, click Create ClusterResourceOverride . On the Create ClusterResourceOverride page, click YAML view and edit the YAML template to set the overcommit values as needed: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Click Create . Check the current state of the admission webhook by checking the status of the cluster custom resource: On the ClusterResourceOverride Operator page, click cluster . On the ClusterResourceOverride Details page, click YAML . The mutatingWebhookConfigurationRef section appears when the webhook is called. apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 8.6.2.2. Installing the Cluster Resource Override Operator using the CLI You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. 
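For reference, a minimal LimitRange sketch that provides such default container limits for a project might look like the following (the values are illustrative; see the limit range examples earlier in this chapter for the full set of options):

apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Container
    default:            # limit values applied when a Pod spec omits them
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # request values applied when a Pod spec omits them
      cpu: 250m
      memory: 256Mi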
Procedure To install the Cluster Resource Override Operator using the CLI: Create a namespace for the Cluster Resource Override Operator: Create a Namespace object YAML file (for example, cro-namespace.yaml ) for the Cluster Resource Override Operator: apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-namespace.yaml Create an Operator group: Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator Create the Operator Group: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-og.yaml Create a subscription: Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: "4.10" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-sub.yaml Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace: Change to the clusterresourceoverride-operator namespace. USD oc project clusterresourceoverride-operator Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Create the ClusterResourceOverride object: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-cr.yaml Verify the current state of the admission webhook by checking the status of the cluster custom resource. USD oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml The mutatingWebhookConfigurationRef section appears when the webhook is called. 
Example output apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 8.6.2.3. Configuring cluster-level overcommit The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To modify cluster-level overcommit: Edit the ClusterResourceOverride CR: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3 # ... 1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 3 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit: apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1 # ... 1 Add this label to each project. 8.6.3. Node-level overcommit You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserve resources. You can also disable overcommit for specific nodes and specific projects. 8.6.3.1. Understanding compute resources and containers The node-enforced behavior for compute resources is specific to the resource type. 8.6.3.1.1. Understanding container CPU requests A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container. 
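For illustration, the following hedged sketch declares two containers whose CPU requests differ by a factor of two (the pod, container, and image names are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: cpu-share-example
spec:
  containers:
  - name: web
    image: registry.example.com/web:latest
    resources:
      requests:
        cpu: 500m   # receives twice the share of any excess CPU
  - name: sidecar
    image: registry.example.com/sidecar:latest
    resources:
      requests:
        cpu: 250m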
For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specifies a limit, it is throttled so that it does not use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled. 8.6.3.1.2. Understanding container memory requests A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node's resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount. 8.6.3.2. Understanding overcommitment and quality of service classes A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity. In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class. A pod is designated as one of three QoS classes with decreasing order of priority: Table 8.19. Quality of Service Classes Priority Class Name Description 1 (highest) Guaranteed If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the pod is classified as Guaranteed . 2 Burstable If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the pod is classified as Burstable . 3 (lowest) BestEffort If requests and limits are not set for any of the resources, then the pod is classified as BestEffort . Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first: Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted. Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist. BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory. 8.6.3.2.1. Understanding how to reserve memory across quality of service tiers You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to exclude pods from lower QoS classes from using resources requested by pods in higher QoS classes. OpenShift Container Platform uses the qos-reserved parameter as follows: A value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class.
This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads. A value of qos-reserved=memory=50% will allow the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class. A value of qos-reserved=memory=0% will allow the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature. 8.6.3.3. Understanding swap memory and QOS You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement. For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed. Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure , resulting in pods not receiving the memory they requested in their scheduling requests. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event. Important If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure. 8.6.3.4. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority. You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 0 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #... Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 8.6.3.5. Disabling or enforcing CPU limits using CPU CFS quotas Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel. If you disable CPU limit enforcement, it is important to understand the impact on your node: If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel.
If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel. If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for disabling CPU limits apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: cpuCfsQuota: false 3 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Set the cpuCfsQuota parameter to false . Run the following command to create the CR: USD oc create -f <file_name>.yaml 8.6.3.6. Reserving resources for system processes To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory. Procedure To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes. 8.6.3.7. Disabling overcommitment for a node When enabled, overcommitment can be disabled on each node. Procedure To disable overcommitment in a node, run the following command on that node: USD sysctl -w vm.overcommit_memory=0 8.6.4. Project-level limits To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed. For information on project-level resource limits, see Additional resources. Alternatively, you can disable overcommitment for specific projects. 8.6.4.1. Disabling overcommitment for a project When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment. Procedure To disable overcommitment in a project: Edit the namespace object to add the following annotation: apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: "false" 1 # ... 1 Setting this annotation to false disables overcommit for this namespace. 8.6.5. Additional resources Setting deployment resources . Allocating resources for nodes . 8.7. Enabling OpenShift Container Platform features using FeatureGates As an administrator, you can use feature gates to enable features that are not part of the default set of features. 8.7.1.
Understanding feature gates You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. You can activate the following feature set by using the FeatureGate CR: TechPreviewNoUpgrade . This feature set is a subset of the current Technology Preview features. This feature set allows you to enable these tech preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters. Enabling this feature set cannot be undone and prevents minor version updates. This feature set is not recommended on production clusters. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. The following Technology Preview features are enabled by this feature set: Microsoft Azure File CSI Driver Operator. Enables the provisioning of persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Microsoft Azure File Storage. CSI automatic migration. Enables automatic migration for supported in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers. Supported for: Amazon Web Services (AWS) Elastic Block Storage (EBS) OpenStack Cinder Azure Disk Azure File Google Cloud Platform Persistent Disk (CSI) VMware vSphere Cluster Cloud Controller Manager Operator. Enables the Cluster Cloud Controller Manager Operator rather than the in-tree cloud controller. Available as a Technology Preview for: Alibaba Cloud Amazon Web Services (AWS) Google Cloud Platform (GCP) IBM Cloud Microsoft Azure Red Hat OpenStack Platform (RHOSP) VMware vSphere Shared resource CSI driver CSI volume support for the OpenShift Container Platform build system Swap memory on nodes Additional resources For more information on the features activated by the TechPreviewNoUpgrade feature gate, see the following topics: Azure File CSI Driver Operator CSI automatic migration Cluster Cloud Controller Manager Operator Source-to-image (S2I) build volumes and Docker build volumes Swap memory on nodes 8.7.2. Enabling feature sets using the web console You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Procedure To enable feature sets: In the OpenShift Container Platform web console, switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click FeatureGate . On the Custom Resource Definition Details page, click the Instances tab. Click the cluster feature gate, then click the YAML tab. Edit the cluster instance to add specific feature sets: Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample Feature Gate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. 
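While the change is being applied, you can watch the machine config pools update from the CLI, for example (a quick check; the exact output columns vary by version):

oc get machineconfigpools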
Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 8.7.3. Enabling feature sets using the CLI You can use the OpenShift CLI ( oc ) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To enable feature sets: Edit the FeatureGate CR named cluster : USD oc edit featuregate cluster Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample FeatureGate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version.
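As an additional spot check from the CLI, you can inspect the FeatureGate CR itself to confirm which feature set was requested (a quick sketch; the status fields shown depend on the cluster version):

oc get featuregate cluster -o yaml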
978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1", "sysctl -a |grep commit", "# vm.overcommit_memory = 0 #", "sysctl -a |grep panic", "# vm.panic_on_oom = 0 #", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3", "oc create -f <file_name>.yaml", "sysctl -w vm.overcommit_memory=0", "apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" 1", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/nodes/working-with-clusters
Chapter 1. Observability service
Chapter 1. Observability service Observability can help you identify and assess performance problems without additional tests and support. The Red Hat Advanced Cluster Management for Kubernetes observability component is a service you can use to understand the health and utilization of clusters, and workloads across your fleet. By using the observability service, you are able to automate and manage the components that are within observability. The observability service uses existing and widely-adopted observability tools from the open source community. By default, the multicluster observability operator is enabled during the installation of Red Hat Advanced Cluster Management. Thanos is deployed within the hub cluster for long-term metrics storage. The observability-endpoint-operator is automatically deployed to each imported or created managed cluster. This controller starts a metrics collector that collects the data from Red Hat OpenShift Container Platform Prometheus, then sends the data to the Red Hat Advanced Cluster Management hub cluster. Read the following documentation for more details about the observability component: Observability architecture Observability configuration Enabling the observability service Using observability Customizing observability Managing alerts Searching in the console Using observability with Red Hat Insights 1.1. Observability architecture The multiclusterhub-operator enables the multicluster-observability-operator pod by default. You must configure the multicluster-observability-operator pod. Observability open source components Observability architecture diagram Persistent stores used in the observability service 1.1.1. Observability open source components The observability service uses open source observability tools from the community. View the following descriptions of the tools that are a part of the product observability service: Thanos: A toolkit of components that you can use to perform global querying across multiple Prometheus instances. For long-term storage of Prometheus data, persist it in any S3 compatible storage. You can also compose a highly-available and scalable metrics system. Prometheus: A monitoring and alerting tool that you can use to collect metrics from your application and store these metrics as time-series data. Store all scraped samples locally, run rules to aggregate and record new time series from existing data, and generate alerts. Alertmanager: A tool to manage and receive alerts from Prometheus. Deduplicate, group, and route alerts to your integrations such as email, Slack, and PagerDuty. Configure Alertmanager to silence and inhibit specific alerts. 1.1.2. Observability architecture diagram The following diagram shows the components of observability: The components of the observability architecture include the following items: The multicluster hub operator, also known as the multiclusterhub-operator pod, deploys the multicluster-observability-operator pod. It sends hub cluster data to your managed clusters. The observability add-on controller is the API server that automatically updates the log of the managed cluster. The Thanos infrastructure includes the Thanos Compactor, which is deployed by the multicluster-observability-operator pod. The Thanos Compactor ensures that queries are performing well by using the retention configuration and by compacting the data in storage. To help identify when the Thanos Compactor is experiencing issues, use the four default alerts that monitor its health.
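For example, one quick way to see whether any of these compactor alerts are currently firing is to query the built-in ALERTS series from the Grafana Explore page on the hub cluster. This is a hedged sketch rather than a documented procedure; the alert names are the four defaults listed in the table that follows:
ALERTS{alertname=~"ACMThanosCompact.*", alertstate="firing"}
An empty result means that none of the compactor alerts are active.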
Read the following table of default alerts: Table 1.1. Table of default Thanos alerts Alert Severity Description ACMThanosCompactHalted critical An alert is sent when the compactor stops. ACMThanosCompactHighCompactionFailures warning An alert is sent when the compaction failure rate is greater than 5 percent. ACMThanosCompactBucketHighOperationFailures warning An alert is sent when the bucket operation failure rate is greater than 5%. ACMThanosCompactHasNotRun warning An alert is sent when the compactor has not uploaded anything in last 24 hours. The observability component deploys an instance of Grafana to enable data visualization with dashboards (static) or data exploration. Red Hat Advanced Cluster Management supports version 8.5.20 of Grafana. You can also design your Grafana dashboard. For more information, see Designing your Grafana dashboard . The Prometheus Alertmanager enables alerts to be forwarded with third-party applications. You can customize the observability service by creating custom recording rules or alerting rules. Red Hat Advanced Cluster Management supports version 0.25 of Prometheus Alertmanager. 1.1.3. Persistent stores used in the observability service Important: Do not use the local storage operator or a storage class that uses local volumes for persistent storage. You can lose data if the pod relaunched on a different node after a restart. When this happens, the pod can no longer access the local storage on the node. Be sure that you can access the persistent volumes of the receive and rules pods to avoid data loss. When you install Red Hat Advanced Cluster Management the following persistent volumes (PV) must be created so that Persistent Volume Claims (PVC) can attach to it automatically. As a reminder, you must define a storage class in the MultiClusterObservability custom resource when there is no default storage class specified or you want to use a non-default storage class to host the PVs. It is recommended to use Block Storage, similar to what Prometheus uses. Also each replica of alertmanager , thanos-compactor , thanos-ruler , thanos-receive-default and thanos-store-shard must have its own PV. View the following table: Table 1.2. Table list of persistent volumes Component name Purpose alertmanager Alertmanager stores the nflog data and silenced alerts in its storage. nflog is an append-only log of active and resolved notifications along with the notified receiver, and a hash digest of contents that the notification identified. observability-thanos-compactor The compactor needs local disk space to store intermediate data for its processing, as well as bucket state cache. The required space depends on the size of the underlying blocks. The compactor must have enough space to download all of the source blocks, then build the compacted blocks on the disk. On-disk data is safe to delete between restarts and should be the first attempt to get crash-looping compactors unstuck. However, it is recommended to give the compactor persistent disks in order to effectively use bucket state cache in between restarts. observability-thanos-rule The thanos ruler evaluates Prometheus recording and alerting rules against a chosen query API by issuing queries at a fixed interval. Rule results are written back to the disk in the Prometheus 2.0 storage format. The amount of hours or days of data retained in this stateful set was fixed in the API version observability.open-cluster-management.io/v1beta1 . 
It has been exposed as an API parameter in observability.open-cluster-management.io/v1beta2 : RetentionInLocal observability-thanos-receive-default Thanos receiver accepts incoming data (Prometheus remote-write requests) and writes these into a local instance of the Prometheus TSDB. Periodically (every 2 hours), TSDB blocks are uploaded to the object storage for long term storage and compaction. The amount of hours or days of data retained in this stateful set, which acts a local cache was fixed in API Version observability.open-cluster-management.io/v1beta . It has been exposed as an API parameter in observability.open-cluster-management.io/v1beta2 : RetentionInLocal observability-thanos-store-shard It acts primarily as an API gateway and therefore does not need a significant amount of local disk space. It joins a Thanos cluster on startup and advertises the data it can access. It keeps a small amount of information about all remote blocks on local disk and keeps it in sync with the bucket. This data is generally safe to delete across restarts at the cost of increased startup times. Note: The time series historical data is stored in object stores. Thanos uses object storage as the primary storage for metrics and metadata related to them. For more details about the object storage and downsampling, see Enabling observability service . 1.1.4. Additional resources To learn more about observability and the integrated components, see the following topics: See Observability service See Observability configuration See Enabling the observability service See the Thanos documentation . See the Prometheus Overview . See the Alertmanager documentation . 1.2. Observability configuration When the observability service is enabled, the hub cluster is always configured to collect and send metrics to the configured Thanos instance, regardless of whether hub self-management is enabled or not. When the hub cluster is self-managed, the disableHubSelfManagement parameter is set to false , which is the default setting. The multiclusterhub-operator enables the multicluster-observability-operator pod by default. You must configure the multicluster-observability-operator pod. Metrics and alerts for the hub cluster appear in the local-cluster namespace. The local-cluster is only available if hub self-management is enabled. You can query the local-cluster metrics in the Grafana explorer. Continue reading to understand what metrics you can collect with the observability component, and for information about the observability pod capacity. 1.2.1. Metric types By default, OpenShift Container Platform sends metrics to Red Hat using the Telemetry service. The acm_managed_cluster_info is available with Red Hat Advanced Cluster Management and is included with telemetry, but is not displayed on the Red Hat Advanced Cluster Management Observe environments overview dashboard. View the following table of metric types that are supported by the framework: Table 1.3. Parameter table Metric name Metric type Labels/tags Status acm_managed_cluster_info Gauge hub_cluster_id , managed_cluster_id , vendor , cloud , version , available , created_via , core_worker , socket_worker Stable config_policies_evaluation_duration_seconds_bucket Histogram None Stable. Read Governance metric for more details. config_policies_evaluation_duration_seconds_count Histogram None Stable. Refer to Governance metric for more details. config_policies_evaluation_duration_seconds_sum Histogram None Stable. Read Governance metric for more details. 
policy_governance_info Gauge type , policy , policy_namespace , cluster_namespace Stable. Review Governance metric for more details. policyreport_info Gauge managed_cluster_id , category , policy , result , severity Stable. Read Managing insight _PolicyReports_ for more details. search_api_db_connection_failed_total Counter None Stable. See the Search components section in the Searching in the console documentation. search_api_dbquery_duration_seconds Histogram None Stable. See the Search components section in the Searching in the console documentation. search_api_requests Histogram None Stable. See the Search components section in the Searching in the console documentation. search_indexer_request_count Counter None Stable. See the Search components section in the Searching in the console documentation. search_indexer_request_duration Histogram None Stable. See the Search components section in the Searching in the console documentation. search_indexer_requests_in_flight Gauge None Stable. See the Search components section in the Searching in the console documentation. search_indexer_request_size Histogram None Stable. See the Search components section in the Searching in the console documentation. 1.2.2. Observability pod capacity requests Observability components require 2701mCPU and 11972Mi memory to install the observability service. The following table is a list of the pod capacity requests for five managed clusters with observability-addons enabled: Table 1.4. Observability pod capacity requests Deployment or StatefulSet Container name CPU (mCPU) Memory (Mi) Replicas Pod total CPU Pod total memory observability-alertmanager alertmanager 4 200 3 12 600 config-reloader 4 25 3 12 75 alertmanager-proxy 1 20 3 3 60 observability-grafana grafana 4 100 2 8 200 grafana-dashboard-loader 4 50 2 8 100 observability-observatorium-api observatorium-api 20 128 2 40 256 observability-observatorium-operator observatorium-operator 100 100 1 10 50 observability-rbac-query-proxy rbac-query-proxy 20 100 2 40 200 oauth-proxy 1 20 2 2 40 observability-thanos-compact thanos-compact 100 512 1 100 512 observability-thanos-query thanos-query 300 1024 2 600 2048 observability-thanos-query-frontend thanos-query-frontend 100 256 2 200 512 observability-thanos-query-frontend-memcached memcached 45 128 3 135 384 exporter 5 50 3 15 150 observability-thanos-receive-controller thanos-receive-controller 4 32 1 4 32 observability-thanos-receive-default thanos-receive 300 512 3 900 1536 observability-thanos-rule thanos-rule 50 512 3 150 1536 configmap-reloader 4 25 3 12 75 observability-thanos-store-memcached memcached 45 128 3 135 384 exporter 5 50 3 15 150 observability-thanos-store-shard thanos-store 100 1024 3 300 3072 1.2.3. Additional resources For more information about enabling observability, read Enabling the observability service . Read Customizing observability to learn how to configure the observability service, view metrics and other data. Read Using Grafana dashboards . Learn from the OpenShift Container Platform documentation what types of metrics are collected and sent using telemetry. See Information collected by Telemetry for information. Refer to Governance metric for details. Refer to Prometheus recording rules . Also refer to Prometheus alerting rules . 1.3. 
Enabling the observability service When you enable the observability service on your hub cluster, the multicluster-observability-operator watches for new managed clusters and automatically deploys metric and alert collection services to the managed clusters. You can use metrics and configure Grafana dashboards to make cluster resource information visible, help you save cost, and prevent service disruptions. Monitor the status of your managed clusters with the observability component, also known as the multicluster-observability-operator pod. Required access: Cluster administrator, the open-cluster-management:cluster-manager-admin role, or S3 administrator. Prerequisites Enabling observability from the command line interface Creating the MultiClusterObservability custom resource Enabling observability from the Red Hat OpenShift Container Platform console Disabling observability Removing observability 1.3.1. Prerequisites You must install Red Hat Advanced Cluster Management for Kubernetes. See Installing while connected online for more information. You must define a storage class in the MultiClusterObservability custom resource, if there is no default storage class specified. Direct network access to the hub cluster is required. Network access to load balancers and proxies is not supported. For more information, see Networking . You must configure an object store to create a storage solution. Important: When you configure your object store, ensure that you meet the encryption requirements that are necessary when sensitive data is persisted. The observability service uses Thanos-supported, stable object stores. You might not be able to share an object store bucket across multiple Red Hat Advanced Cluster Management observability installations. Therefore, for each installation, provide a separate object store bucket. Red Hat Advanced Cluster Management supports the following cloud providers with stable object stores: Amazon Web Services S3 (AWS S3) Red Hat Ceph (S3 compatible API) Google Cloud Storage Azure storage Red Hat OpenShift Data Foundation, formerly known as Red Hat OpenShift Container Storage Red Hat OpenShift on IBM (ROKS) 1.3.2. Enabling observability from the command line interface Enable the observability service by creating a MultiClusterObservability custom resource instance. Before you enable observability, see Observability pod capacity requests for more information. Note: When observability is enabled or disabled on OpenShift Container Platform managed clusters that are managed by Red Hat Advanced Cluster Management, the observability endpoint operator updates the cluster-monitoring-config config map by adding additional alertmanager configuration that automatically restarts the local Prometheus. When you insert the alertmanager configuration in the OpenShift Container Platform managed cluster, the configuration removes the settings that relate to the retention field of the Prometheus metrics. Complete the following steps to enable the observability service: Log in to your Red Hat Advanced Cluster Management hub cluster. Create a namespace for the observability service with the following command: oc create namespace open-cluster-management-observability Generate your pull-secret.
If Red Hat Advanced Cluster Management is installed in the open-cluster-management namespace, run the following command: DOCKER_CONFIG_JSON=`oc extract secret/multiclusterhub-operator-pull-secret -n open-cluster-management --to=-` If the multiclusterhub-operator-pull-secret is not defined in the namespace, copy the pull-secret from the openshift-config namespace into the open-cluster-management-observability namespace. Run the following command: DOCKER_CONFIG_JSON=`oc extract secret/pull-secret -n openshift-config --to=-` Create the pull-secret in the open-cluster-management-observability namespace, run the following command: oc create secret generic multiclusterhub-operator-pull-secret \ -n open-cluster-management-observability \ --from-literal=.dockerconfigjson="USDDOCKER_CONFIG_JSON" \ --type=kubernetes.io/dockerconfigjson Important: If you modify the global pull secret for your cluster by using the OpenShift Container Platform documentation, be sure to also update the global pull secret in the observability namespace. See Updating the global pull secret for more details. Create a secret for your object storage for your cloud provider. Your secret must contain the credentials to your storage solution. For example, run the following command: oc create -f thanos-object-storage.yaml -n open-cluster-management-observability View the following examples of secrets for the supported object stores: For Amazon S3 or S3 compatible, your secret might resemble the following file: apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: s3 config: bucket: YOUR_S3_BUCKET endpoint: YOUR_S3_ENDPOINT 1 insecure: true access_key: YOUR_ACCESS_KEY secret_key: YOUR_SECRET_KEY 1 Enter the URL without the protocol. Enter the URL for your Amazon S3 endpoint that might resemble the following URL: s3.us-east-1.amazonaws.com . For more details, see the Amazon Simple Storage Service user guide . For Google Cloud Platform, your secret might resemble the following file: apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: GCS config: bucket: YOUR_GCS_BUCKET service_account: YOUR_SERVICE_ACCOUNT For more details, see Google Cloud Storage . For Azure your secret might resemble the following file: apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: AZURE config: storage_account: YOUR_STORAGE_ACCT storage_account_key: YOUR_STORAGE_KEY container: YOUR_CONTAINER endpoint: blob.core.windows.net 1 max_retries: 0 1 If you use the msi_resource path, the endpoint authentication is complete by using the system-assigned managed identity. Your value must resemble the following endpoint: https://<storage-account-name>.blob.core.windows.net . If you use the user_assigned_id path, endpoint authentication is complete by using the user-assigned managed identity. When you use the user_assigned_id , the msi_resource endpoint default value is https:<storage_account>.<endpoint> . For more details, see Azure Storage documentation . Note: If you use Azure as an object storage for a Red Hat OpenShift Container Platform cluster, the storage account associated with the cluster is not supported. You must create a new storage account. 
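If you need to create that new storage account and container, the following Azure CLI sketch shows one way to do it. The resource group, account, and container names are placeholders and are not part of the documented procedure; adjust them and the location to your environment:
az group create --name acm-observability-rg --location eastus
az storage account create --name acmobsstorage --resource-group acm-observability-rg --location eastus --sku Standard_LRS
az storage container create --name acm-observability --account-name acmobsstorage
az storage account keys list --account-name acmobsstorage --resource-group acm-observability-rg --query "[0].value" --output tsv
Use the account name, container name, and key that you created for the storage_account, container, and storage_account_key fields of the secret.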
For Red Hat OpenShift Data Foundation, your secret might resemble the following file: apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: s3 config: bucket: YOUR_RH_DATA_FOUNDATION_BUCKET endpoint: YOUR_RH_DATA_FOUNDATION_ENDPOINT 1 insecure: false access_key: YOUR_RH_DATA_FOUNDATION_ACCESS_KEY secret_key: YOUR_RH_DATA_FOUNDATION_SECRET_KEY 1 Enter the URL without the protocol. Enter the URL for your Red Hat OpenShift Data Foundation endpoint that might resemble the following URL: example.redhat.com:443 . For more details, see Red Hat OpenShift Data Foundation . For Red Hat OpenShift on IBM (ROKS), your secret might resemble the following file: apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: s3 config: bucket: YOUR_ROKS_S3_BUCKET endpoint: YOUR_ROKS_S3_ENDPOINT 1 insecure: true access_key: YOUR_ROKS_ACCESS_KEY secret_key: YOUR_ROKS_SECRET_KEY 1 Enter the URL without the protocol. Enter the URL for your Red Hat OpenShift Data Foundation endpoint that might resemble the following URL: example.redhat.com:443 . For more details, follow the IBM Cloud documentation, Cloud Object Storage . Be sure to use the service credentials to connect with the object storage. For more details, follow the IBM Cloud documentation, Cloud Object Store and Service Credentials . 1.3.2.1. Configuring storage for AWS Security Token Service For Amazon S3 or S3 compatible storage, you can also use short term, limited-privilege credentials that are generated with AWS Security Token Service (AWS STS). Refer to AWS Security Token Service documentation for more details. Generating access keys using AWS Security Service require the following additional steps: Create an IAM policy that limits access to an S3 bucket. Create an IAM role with a trust policy to generate JWT tokens for OpenShift Container Platform service accounts. Specify annotations for the observability service accounts that requires access to the S3 bucket. You can find an example of how observability on Red Hat OpenShift Service on AWS (ROSA) cluster can be configured to work with AWS STS tokens in the Set environment step. See Red Hat OpenShift Service on AWS (ROSA) for more details, along with ROSA with STS explained for an in-depth description of the requirements and setup to use STS tokens. 1.3.2.2. Generating access keys using the AWS Security Service Complete the following steps to generate access keys using the AWS Security Service: Set up the AWS environment. Run the following commands: export POLICY_VERSION=USD(date +"%m-%d-%y") export TRUST_POLICY_VERSION=USD(date +"%m-%d-%y") export CLUSTER_NAME=<my-cluster> export S3_BUCKET=USDCLUSTER_NAME-acm-observability export REGION=us-east-2 export NAMESPACE=open-cluster-management-observability export SA=tbd export SCRATCH_DIR=/tmp/scratch export OIDC_PROVIDER=USD(oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer| sed -e "s/^https:\/\///") export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export AWS_PAGER="" rm -rf USDSCRATCH_DIR mkdir -p USDSCRATCH_DIR Create an S3 bucket with the following command: aws s3 mb s3://USDS3_BUCKET Create a s3-policy JSON file for access to your S3 bucket. 
Run the following command: { "Version": "USDPOLICY_VERSION", "Statement": [ { "Sid": "Statement", "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:PutObjectAcl", "s3:CreateBucket", "s3:DeleteBucket" ], "Resource": [ "arn:aws:s3:::USDS3_BUCKET/*", "arn:aws:s3:::USDS3_BUCKET" ] } ] } Apply the policy with the following command: S3_POLICY=USD(aws iam create-policy --policy-name USDCLUSTER_NAME-acm-obs \ --policy-document file://USDSCRATCH_DIR/s3-policy.json \ --query 'Policy.Arn' --output text) echo USDS3_POLICY Create a TrustPolicy JSON file. Run the following command: { "Version": "USDTRUST_POLICY_VERSION", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_PROVIDER}:sub": [ "system:serviceaccount:USD{NAMESPACE}:observability-thanos-query", "system:serviceaccount:USD{NAMESPACE}:observability-thanos-store-shard", "system:serviceaccount:USD{NAMESPACE}:observability-thanos-compact", "system:serviceaccount:USD{NAMESPACE}:observability-thanos-rule", "system:serviceaccount:USD{NAMESPACE}:observability-thanos-receive" ] } } } ] } Create an IAM role that uses the trust policy with the following command: S3_ROLE=USD(aws iam create-role \ --role-name "USDCLUSTER_NAME-acm-obs-s3" \ --assume-role-policy-document file://USDSCRATCH_DIR/TrustPolicy.json \ --query "Role.Arn" --output text) echo USDS3_ROLE Attach the policy to the role. Run the following command: aws iam attach-role-policy \ --role-name "USDCLUSTER_NAME-acm-obs-s3" \ --policy-arn USDS3_POLICY Your secret might resemble the following file. The config section specifies signature_version2: false and does not specify access_key and secret_key : apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: s3 config: bucket: USDS3_BUCKET endpoint: s3.USDREGION.amazonaws.com signature_version2: false Specify service account annotations when you create the MultiClusterObservability custom resource, as described in the Creating the MultiClusterObservability custom resource section. Retrieve the S3 access key and secret key for your cloud provider with the following commands. You must decode, edit, and encode your base64 string in the secret: To edit and decode the S3 access key for your cloud provider, run the following command: YOUR_CLOUD_PROVIDER_ACCESS_KEY=USD(oc -n open-cluster-management-observability get secret <object-storage-secret> -o jsonpath="{.data.thanos\.yaml}" | base64 --decode | grep access_key | awk '{print USD2}') To view the access key for your cloud provider, run the following command: echo USDYOUR_CLOUD_PROVIDER_ACCESS_KEY To edit and decode the secret key for your cloud provider, run the following command: YOUR_CLOUD_PROVIDER_SECRET_KEY=USD(oc -n open-cluster-management-observability get secret <object-storage-secret> -o jsonpath="{.data.thanos\.yaml}" | base64 --decode | grep secret_key | awk '{print USD2}') Run the following command to view the secret key for your cloud provider: echo USDYOUR_CLOUD_PROVIDER_SECRET_KEY Verify that observability is enabled by checking the pods for the observability deployments and stateful sets, for example with the commands in the following sketch.
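A minimal way to check the deployments, stateful sets, and pods, for example; these are standard oc commands run against the observability namespace, not an additional documented step:
oc -n open-cluster-management-observability get deployments,statefulsets
oc -n open-cluster-management-observability get pods
You can expect to see the Alertmanager, Grafana, and Thanos components listed, with their pods in the Running state. 1.3.2.3.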
Creating the MultiClusterObservability custom resource Use the MultiClusterObservability custom resource to specify the persistent volume storage size for various components. You must set the storage size during the initial creation of the MultiClusterObservability custom resource. When you update the storage size values post-deployment, changes take effect only if the storage class supports dynamic volume expansion. For more information, see Expanding persistent volumes from the Red Hat OpenShift Container Platform documentation . Complete the following steps to create the MultiClusterObservability custom resource on your hub cluster: Create the MultiClusterObservability custom resource YAML file named multiclusterobservability_cr.yaml . View the following default YAML file for observability: apiVersion: observability.open-cluster-management.io/v1beta2 kind: MultiClusterObservability metadata: name: observability spec: observabilityAddonSpec: {} storageConfig: metricObjectStorage: name: thanos-object-storage key: thanos.yaml You might want to modify the value for the retentionConfig parameter in the advanced section. For more information, see Thanos Downsampling resolution and retention . Depending on the number of managed clusters, you might want to update the amount of storage for stateful sets. If your S3 bucket is configured to use STS tokens, annotate the service accounts to use STS with S3 role. View the following configuration: spec: advanced: compact: serviceAccountAnnotations: eks.amazonaws.com/role-arn: USDS3_ROLE store: serviceAccountAnnotations: eks.amazonaws.com/role-arn: USDS3_ROLE rule: serviceAccountAnnotations: eks.amazonaws.com/role-arn: USDS3_ROLE receive: serviceAccountAnnotations: eks.amazonaws.com/role-arn: USDS3_ROLE query: serviceAccountAnnotations: eks.amazonaws.com/role-arn: USDS3_ROLE See Observability API for more information. To deploy on infrastructure machine sets, you must set a label for your set by updating the nodeSelector in the MultiClusterObservability YAML. Your YAML might resemble the following content: nodeSelector: node-role.kubernetes.io/infra: "" For more information, see Creating infrastructure machine sets . Apply the observability YAML to your cluster by running the following command: oc apply -f multiclusterobservability_cr.yaml All the pods in open-cluster-management-observability namespace for Thanos, Grafana and Alertmanager are created. All the managed clusters connected to the Red Hat Advanced Cluster Management hub cluster are enabled to send metrics back to the Red Hat Advanced Cluster Management Observability service. Validate that the observability service is enabled and the data is populated by launching the Grafana dashboards. Click the Grafana link that is near the console header, from either the console Overview page or the Clusters page. Alternatively, access the OpenShift Container Platform 3.11 Grafana dashboards with the following URL: https://USDACM_URL/grafana/dashboards . To view the OpenShift Container Platform 3.11 dashboards, select the folder named OCP 3.11 . Access the multicluster-observability-operator deployment to verify that the multicluster-observability-operator pod is being deployed by the multiclusterhub-operator deployment. 
Run the following command: oc get deploy multicluster-observability-operator -n open-cluster-management --show-labels NAME READY UP-TO-DATE AVAILABLE AGE LABELS multicluster-observability-operator 1/1 1 1 35m installer.name=multiclusterhub,installer.namespace=open-cluster-management View the labels section of the multicluster-observability-operator deployment for labels that are associated with the resource. The labels section might contain the following details: labels: installer.name: multiclusterhub installer.namespace: open-cluster-management Optional: If you want to exclude specific managed clusters from collecting the observability data, add the following cluster label to your clusters: observability: disabled . The observability service is enabled. After you enable the observability service, the following functions are initiated: All the alert managers from the managed clusters are forwarded to the Red Hat Advanced Cluster Management hub cluster. All the managed clusters that are connected to the Red Hat Advanced Cluster Management hub cluster are enabled to send alerts back to the Red Hat Advanced Cluster Management observability service. You can configure the Red Hat Advanced Cluster Management Alertmanager to take care of deduplicating, grouping, and routing the alerts to the correct receiver integration such as email, PagerDuty, or OpsGenie. You can also handle silencing and inhibition of the alerts. Note: Alert forwarding to the Red Hat Advanced Cluster Management hub cluster feature is only supported by managed clusters on a supported OpenShift Container Platform version. After you install Red Hat Advanced Cluster Management with observability enabled, alerts are automatically forwarded to the hub cluster. See Forwarding alerts to learn more. 1.3.3. Enabling observability from the Red Hat OpenShift Container Platform console Optionally, you can enable observability from the Red Hat OpenShift Container Platform console by creating a project named open-cluster-management-observability . Complete the following steps: Create an image pull-secret named, multiclusterhub-operator-pull-secret in the open-cluster-management-observability project. Create your object storage secret named, thanos-object-storage in the open-cluster-management-observability project. Enter the object storage secret details, then click Create . See step four of the Enabling observability section to view an example of a secret. Create the MultiClusterObservability custom resource instance. When you receive the following message, the observability service is enabled successfully from OpenShift Container Platform: Observability components are deployed and running . 1.3.3.1. Verifying the Thanos version After Thanos is deployed on your cluster, verify the Thanos version from the command line interface (CLI). After you log in to your hub cluster, run the following command in the observability pods to receive the Thanos version: thanos --version The Thanos version is displayed. 1.3.4. Disabling observability You can disable observability, which stops data collection on the Red Hat Advanced Cluster Management hub cluster. 1.3.4.1. Disabling observability on all clusters Disable observability by removing observability components on all managed clusters. Update the multicluster-observability-operator resource by setting enableMetrics to false . 
Your updated resource might resemble the following change: spec: imagePullPolicy: Always imagePullSecret: multiclusterhub-operator-pull-secret observabilityAddonSpec: # The ObservabilityAddonSpec defines the global settings for all managed clusters which have observability add-on enabled enableMetrics: false # indicates whether the observability add-on pushes metrics to the hub server 1.3.4.2. Disabling observability on a single cluster Disable observability by removing observability components on specific managed clusters. Complete the following steps: Add the observability: disabled label to the managedclusters.cluster.open-cluster-management.io custom resource. From the Red Hat Advanced Cluster Management console Clusters page, add the observability=disabled label to the specified cluster. Note: When a managed cluster with the observability component is detached, the metrics-collector deployments are removed. 1.3.5. Removing observability When you remove the MultiClusterObservability custom resource, you are disabling and uninstalling the observability service. From the OpenShift Container Platform console navigation, select Operators > Installed Operators > Advanced Cluster Manager for Kubernetes . Remove the MultiClusterObservability custom resource. 1.3.6. Additional resources Links to cloud provider documentation for object storage information: Amazon Web Services S3 (AWS S3) Red Hat Ceph (S3 compatible API) Google Cloud Storage Azure storage Red Hat OpenShift Data Foundation (formerly known as Red Hat OpenShift Container Storage) Red Hat OpenShift on IBM (ROKS) See Using observability . To learn more about customizing the observability service, see Customizing observability . For more related topics, return to the Observability service . 1.4. Customizing observability configuration After you enable observability, customize the observability configuration to the specific needs of your environment. Manage and view cluster fleet data that the observability service collects. Required access: Cluster administrator Creating custom rules Adding custom metrics Adding advanced configuration for retention Updating the MultiClusterObservability custom resource replicas from the console Increasing and decreasing persistent volumes and persistent volume claims Customizing route certificate Customizing certificates for accessing the object store Configuring proxy settings for observability add-ons Disabling proxy settings for observability add-ons 1.4.1. Creating custom rules Create custom rules for the observability installation by adding Prometheus recording rules and alerting rules to the observability resource. Use Prometheus recording rules to precalculate expensive expressions and save their results as a new set of time series, and use alerting rules to define alert conditions and send notifications to an external service. View the following examples to create a custom alert rule within the observability-thanos-rule-custom-rules config map: To get a notification for when your CPU usage passes your defined value, create the following custom alert rule: data: custom_rules.yaml: | groups: - name: cluster-health rules: - alert: ClusterCPUHealth-jb annotations: summary: Notify when CPU utilization on a cluster is greater than the defined utilization limit description: "The cluster has a high CPU usage: {{ USDvalue }} core for {{ USDlabels.cluster }} {{ USDlabels.clusterID }}."
expr: | max(cluster:cpu_usage_cores:sum) by (clusterID, cluster, prometheus) > 0 for: 5s labels: cluster: "{{ USDlabels.cluster }}" prometheus: "{{ USDlabels.prometheus }}" severity: critical Notes: When you update your custom rules, observability-thanos-rule pods restart automatically. You can create multiple rules in the configuration. The default alert rules are in the observability-thanos-rule-default-rules config map of the open-cluster-management-observability namespace. To create a custom recording rule to get the sum of the container memory cache of a pod, create the following custom rule: data: custom_rules.yaml: | groups: - name: container-memory rules: - record: pod:container_memory_cache:sum expr: sum(container_memory_cache{pod!=""}) BY (pod, container) Note: After you make changes to the config map, the configuration automatically reloads. The configuration reloads because of the config-reload within the observability-thanos-rule sidecar. To verify that the alert rules are functioning correctly, go to the Grafana dashboard, select the Explore page, and query ALERTS . The alert is only available in Grafana if you created the alert. 1.4.2. Adding custom metrics Add metrics to the metrics_list.yaml file to collect from managed clusters. Complete the following steps: Before you add a custom metric, verify that mco observability is enabled with the following command: oc get mco observability -o yaml Check for the following message in the status.conditions.message section reads: Observability components are deployed and running Create the observability-metrics-custom-allowlist config map in the open-cluster-management-observability namespace with the following command: oc apply -n open-cluster-management-observability -f observability-metrics-custom-allowlist.yaml Add the name of the custom metric to the metrics_list.yaml parameter. Your YAML for the config map might resemble the following content: kind: ConfigMap apiVersion: v1 metadata: name: observability-metrics-custom-allowlist data: metrics_list.yaml: | names: 1 - node_memory_MemTotal_bytes rules: 2 - record: apiserver_request_duration_seconds:histogram_quantile_90 expr: histogram_quantile(0.90,sum(rate(apiserver_request_duration_seconds_bucket{job=\"apiserver\", verb!=\"WATCH\"}[5m])) by (verb,le)) 1 Optional: Add the name of the custom metrics that are to be collected from the managed cluster. 2 Optional: Enter only one value for the expr and record parameter pair to define the query expression. The metrics are collected as the name that is defined in the record parameter from your managed cluster. The metric value returned are the results after you run the query expression. You can use either one or both of the sections. For user workload metrics, see the Adding user workload metrics section. Note: You can also individually customize each managed cluster in the custom metrics allowlist instead of applying it across your entire fleet. You can create the same YAML directly on your managed cluster to customize it. Verify the data collection from your custom metric by querying the metric from the Grafana dashboard Explore page. You can also use the custom metrics in your own dashboard. 1.4.2.1. Adding user workload metrics Collect OpenShift Container Platform user-defined metrics from workloads in OpenShift Container Platform to display the metrics from your Grafana dashboard. Complete the following steps: Enable monitoring on your OpenShift Container Platform cluster. 
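A minimal sketch of what enabling user workload monitoring typically involves on OpenShift Container Platform: set enableUserWorkload: true in the cluster-monitoring-config config map of the openshift-monitoring namespace on the managed cluster, then apply it. This config map and field are the standard OpenShift Container Platform mechanism; follow the linked procedure for the complete steps:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
Apply it with a command such as oc apply -f cluster-monitoring-config.yaml on the managed cluster.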
See Enabling monitoring for user-defined projects in the Additional resources section. If you have a managed cluster with monitoring for user-defined workloads enabled, the user workloads are located in the test namespace and generate metrics. These metrics are collected by Prometheus from the OpenShift Container Platform user workload. Add user workload metrics to the observability-metrics-custom-allowlist config map to collect the metrics in the test namespace. View the following example: kind: ConfigMap apiVersion: v1 metadata: name: observability-metrics-custom-allowlist namespace: test data: uwl_metrics_list.yaml: 1 names: 2 - sample_metrics 1 Enter the key for the config map data. 2 Enter the value of the config map data in YAML format. The names section includes the list of metric names, which you want to collect from the test namespace. After you create the config map, the observability collector collects and pushes the metrics from the target namespace to the hub cluster. 1.4.2.2. Removing default metrics If you do not want to collect data for a specific metric from your managed cluster, remove the metric from the observability-metrics-custom-allowlist.yaml file. When you remove a metric, the metric data is not collected from your managed clusters. Complete the following steps to remove a default metric: Verify that mco observability is enabled by using the following command: oc get mco observability -o yaml Add the name of the default metric to the metrics_list.yaml parameter with a hyphen - at the start of the metric name. View the following metric example: -cluster_infrastructure_provider Create the observability-metrics-custom-allowlist config map in the open-cluster-management-observability namespace with the following command: oc apply -n open-cluster-management-observability -f observability-metrics-custom-allowlist.yaml Verify that the observability service is not collecting the specific metric from your managed clusters. When you query the metric from the Grafana dashboard, the metric is not displayed. 1.4.3. Adding advanced configuration for retention To update the retention for each observability component according to your need, add the advanced configuration section. Complete the following steps: Edit the MultiClusterObservability custom resource with the following command: oc edit mco observability -o yaml Add the advanced section to the file. Your YAML file might resemble the following contents: spec: advanced: retentionConfig: blockDuration: 2h deleteDelay: 48h retentionInLocal: 24h retentionResolutionRaw: 365d retentionResolution5m: 365d retentionResolution1h: 365d receive: resources: limits: memory: 4096Gi replicas: 3 Notes: For descriptions of all the parameters that can added into the advanced configuration, see the Observability API documentation. The default retention for all resolution levels, such as retentionResolutionRaw , retentionResolution5m , or retentionResolution1h , is 365 days ( 365d ). You must set an explicit value for the resolution retention in your MultiClusterObservability spec.advanced.retentionConfig parameter. If you upgraded from an earlier version and want to keep that version retention configuration, add the configuration previously mentioned. 
Complete the following steps: Go to your MultiClusterObservability resource by running the following command: oc edit mco observability In the spec.advanced.retentionConfig parameter, apply the following configuration: spec: advanced: retentionConfig: retentionResolutionRaw: 365d retentionResolution5m: 365d retentionResolution1h: 365d 1.4.4. Dynamic metrics for single-node OpenShift clusters Dynamic metrics collection supports automatic metric collection based on certain conditions. By default, a single-node OpenShift cluster does not collect pod and container resource metrics. Once a single-node OpenShift cluster reaches a specific level of resource consumption, the defined granular metrics are collected dynamically. When the cluster resource consumption is consistently less than the threshold for a period of time, granular metric collection stops. The metrics are collected dynamically based on the conditions on the managed cluster specified by a collection rule. Because these metrics are collected dynamically, the following Red Hat Advanced Cluster Management Grafana dashboards do not display any data. When a collection rule is activated and the corresponding metrics are collected, the following panels display data for the duration of the time that the collection rule is initiated: Kubernetes/Compute Resources/Namespace (Pods) Kubernetes/Compute Resources/Namespace (Workloads) Kubernetes/Compute Resources/Nodes (Pods) Kubernetes/Compute Resources/Pod Kubernetes/Compute Resources/Workload A collection rule includes the following conditions: A set of metrics to collect dynamically. Conditions written as a PromQL expression. A time interval for the collection, which must be set to true . A match expression to select clusters where the collect rule must be evaluated. By default, collection rules are evaluated continuously on managed clusters every 30 seconds, or at a specific time interval. The lowest value between the collection interval and time interval takes precedence. Once the collection rule condition persists for the duration specified by the for attribute, the collection rule starts and the metrics specified by the rule are automatically collected on the managed cluster. Metrics collection stops automatically after the collection rule condition no longer exists on the managed cluster, at least 15 minutes after it starts. The collection rules are grouped together as a parameter section named collect_rules , where it can be enabled or disabled as a group. Red Hat Advanced Cluster Management installation includes the collection rule group, SNOResourceUsage with two default collection rules: HighCPUUsage and HighMemoryUsage . The HighCPUUsage collection rule begins when the node CPU usage exceeds 70%. The HighMemoryUsage collection rule begins if the overall memory utilization of the single-node OpenShift cluster exceeds 70% of the available node memory. Currently, the previously mentioned thresholds are fixed and cannot be changed. When a collection rule begins for more than the interval specified by the for attribute, the system automatically starts collecting the metrics that are specified in the dynamic_metrics section. View the list of dynamic metrics that from the collect_rules section, in the following YAML file: collect_rules: - group: SNOResourceUsage annotations: description: > By default, a {sno} cluster does not collect pod and container resource metrics. Once a {sno} cluster reaches a level of resource consumption, these granular metrics are collected dynamically. 
When the cluster resource consumption is consistently less than the threshold for a period of time, collection of the granular metrics stops. selector: matchExpressions: - key: clusterType operator: In values: ["{sno}"] rules: - collect: SNOHighCPUUsage annotations: description: > Collects the dynamic metrics specified if the cluster cpu usage is constantly more than 70% for 2 minutes expr: (1 - avg(rate(node_cpu_seconds_total{mode=\"idle\"}[5m]))) * 100 > 70 for: 2m dynamic_metrics: names: - container_cpu_cfs_periods_total - container_cpu_cfs_throttled_periods_total - kube_pod_container_resource_limits - kube_pod_container_resource_requests - namespace_workload_pod:kube_pod_owner:relabel - node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate - node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate - collect: SNOHighMemoryUsage annotations: description: > Collects the dynamic metrics specified if the cluster memory usage is constantly more than 70% for 2 minutes expr: (1 - sum(:node_memory_MemAvailable_bytes:sum) / sum(kube_node_status_allocatable{resource=\"memory\"})) * 100 > 70 for: 2m dynamic_metrics: names: - kube_pod_container_resource_limits - kube_pod_container_resource_requests - namespace_workload_pod:kube_pod_owner:relabel matches: - __name__="container_memory_cache",container!="" - __name__="container_memory_rss",container!="" - __name__="container_memory_swap",container!="" - __name__="container_memory_working_set_bytes",container!="" A collect_rules.group can be disabled in the custom-allowlist as shown in the following example. When a collect_rules.group is disabled, metrics collection reverts to the behavior. These metrics are collected at regularly, specified intervals: collect_rules: - group: -SNOResourceUsage The data is only displayed in Grafana when the rule is initiated. 1.4.5. Updating the MultiClusterObservability custom resource replicas from the console If your workload increases, increase the number of replicas of your observability pods. Navigate to the Red Hat OpenShift Container Platform console from your hub cluster. Locate the MultiClusterObservability custom resource, and update the replicas parameter value for the component where you want to change the replicas. Your updated YAML might resemble the following content: spec: advanced: receive: replicas: 6 For more information about the parameters within the mco observability custom resource, see the Observability API documentation. 1.4.6. Increasing and decreasing persistent volumes and persistent volume claims Increase and decrease the persistent volume and persistent volume claims to change the amount of storage in your storage class. Complete the following steps: To increase the size of the persistent volume, update the MultiClusterObservability custom resource if the storage class support expanding volumes. To decrease the size of the persistent volumes remove the pods using the persistent volumes, delete the persistent volume and recreate them. You might experience data loss in the persistent volume. Complete the following steps: Pause the MultiClusterObservability operator by adding the annotation mco-pause: "true" to the MultiClusterObservability custom resource. Look for the stateful sets or deployments of the desired component. Change their replica count to 0 . This initiates a shutdown, which involves uploading local data when applicable to avoid data loss. 
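As an illustration of the pause and scale-down steps, using the Thanos Receive component that the following example describes, the commands might resemble this sketch. These are standard oc operations rather than a documented procedure:
oc annotate mco observability mco-pause="true"
oc -n open-cluster-management-observability scale statefulset observability-thanos-receive-default --replicas=0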
For example, the Thanos Receive stateful set is named observability-thanos-receive-default and has three replicas by default. Therefore, you are looking for the following persistent volume claims: data-observability-thanos-receive-default-0 data-observability-thanos-receive-default-1 data-observability-thanos-receive-default-2 Delete the persistent volumes and persistent volume claims used by the desired component. In the MultiClusterObservability custom resource, edit the storage size in the configuration of the component to the desired amount in the storage size field. Prefix with the name of the component. Unpause the MultiClusterObservability operator by removing the previously added annotation. To initiate a reconciliation after pausing the operator, delete the multicluster-observability-operator and observatorium-operator pods. The pods are recreated and reconciled immediately. Verify that the persistent volumes and persistent volume claims are updated by checking the MultiClusterObservability custom resource. 1.4.7. Customizing route certificate If you want to customize the OpenShift Container Platform route certificate, you must add the routes in the alt_names section. To ensure your OpenShift Container Platform routes are accessible, add the following information: alertmanager.apps.<domainname> , observatorium-api.apps.<domainname> , rbac-query-proxy.apps.<domainname> . For more details, see Replacing certificates for alertmanager route in the Governance documentation. Note: Users are responsible for certificate rotations and updates. 1.4.8. Customizing certificates for accessing the object store You can configure secure connections with the observability object store by creating a Secret resource that contains the certificate authority and configuring the MultiClusterObservability custom resource. Complete the following steps: To validate the object store connection, create the Secret object from the file that contains the certificate authority by using the following command: oc create secret generic <tls_secret_name> --from-file=ca.crt=<path_to_file> -n open-cluster-management-observability Alternatively, you can apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <tls_secret_name> namespace: open-cluster-management-observability type: Opaque data: ca.crt: <base64_encoded_ca_certificate> Optional: If you want to enable mutual TLS, you need to add the public.crt and private.key keys in the secret. Add the TLS secret details to the metricObjectStorage section by using the following command: oc edit mco observability -o yaml Your file might resemble the following YAML: metricObjectStorage: key: thanos.yaml name: thanos-object-storage tlsSecretName: tls-certs-secret 1 tlsSecretMountPath: /etc/minio/certs 2 1 The value for tlsSecretName is the name of the Secret object that you previously created. 2 The /etc/minio/certs/ path defined for the tlsSecretMountPath parameter specifies where the certificates are mounted in the Observability components. This path is required for the next step. Update the thanos.yaml definition in the thanos-object-storage secret by adding the http_config.tls_config section with the certificate details. View the following example: thanos.yaml: | type: s3 config: bucket: "thanos" endpoint: "minio:9000" insecure: false 1 access_key: "minio" secret_key: "minio123" http_config: tls_config: ca_file: /etc/minio/certs/ca.crt 2 insecure_skip_verify: false 1 Set the insecure parameter to false to enable HTTPS. 
2 The path for the ca_file parameter must match the tlsSecretMountPath from the MultiClusterObservability custom resource. The ca.crt must match the key in the <tls_secret_name> Secret resource. Optional: If you want to enable mutual TLS, you need to add the cert_file and key_file keys to the tls_config section. See the following example: thanos.yaml: | type: s3 config: bucket: "thanos" endpoint: "minio:9000" insecure: false access_key: "minio" secret_key: "minio123" http_config: tls_config: ca_file: /etc/minio/certs/ca.crt 1 cert_file: /etc/minio/certs/public.crt key_file: /etc/minio/certs/private.key insecure_skip_verify: false 1 The path for ca_file , cert_file , and key_file must match the tlsSecretMountPath from the MultiClusterObservability custom resource. The ca.crt , public.crt , and private.key must match the respective keys in the <tls_secret_name> Secret resource. To verify that you can access the object store, check that the pods are deployed. Run the following command: oc -n open-cluster-management-observability get pods -l app.kubernetes.io/name=thanos-store 1.4.9. Configuring proxy settings for observability add-ons Configure the proxy settings so that the managed cluster can communicate with the hub cluster through an HTTP or HTTPS proxy server. Typically, add-ons do not need any special configuration to support HTTP and HTTPS proxy servers between a hub cluster and a managed cluster. But if you enabled the observability add-on, you must complete the proxy configuration. 1.4.9.1. Prerequisite You have a hub cluster. You have enabled the proxy settings between the hub cluster and managed cluster. Complete the following steps to configure the proxy settings for the observability add-on: Go to the cluster namespace on your hub cluster. Create an AddOnDeploymentConfig resource with the proxy settings by adding a spec.proxyConfig parameter. View the following YAML example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: <addon-deploy-config-name> namespace: <managed-cluster-name> spec: agentInstallNamespace: open-cluster-management-addon-observability proxyConfig: httpsProxy: "http://<username>:<password>@<ip>:<port>" 1 noProxy: ".cluster.local,.svc,172.30.0.1" 2 1 For this field, you can specify either an HTTP proxy or an HTTPS proxy. 2 Include the IP address of the kube-apiserver . To get the IP address, run the following command on your managed cluster: oc -n default describe svc kubernetes | grep IP: Go to the ManagedClusterAddOn resource and update it by referencing the AddOnDeploymentConfig resource that you made. View the following YAML example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: observability-controller namespace: <managed-cluster-name> spec: installNamespace: open-cluster-management-addon-observability configs: - group: addon.open-cluster-management.io resource: AddonDeploymentConfig name: <addon-deploy-config-name> namespace: <managed-cluster-name> Verify the proxy settings. If you successfully configured the proxy settings, the metric collector deployed by the observability add-on agent on the managed cluster sends the data to the hub cluster. Complete the following steps: Go to the hub cluster, then to the managed cluster on the Grafana dashboard. View the metrics for the proxy settings. 1.4.10. 
Disabling proxy settings for observability add-ons If your development needs change, you might need to disable the proxy settings for the observability add-ons that you configured for the hub cluster and managed cluster. You can disable the proxy settings for the observability add-on at any time. Complete the following steps: Go to the ManagedClusterAddOn resource. Remove the referenced AddOnDeploymentConfig resource. 1.4.11. Customizing the managed cluster Observatorium API and Alertmanager URLs (Technology Preview) You can customize the Observatorium API and Alertmanager URLs that the managed cluster uses to communicate with the hub cluster to maintain all Red Hat Advanced Cluster Management functions when you use a load balancer or reverse proxy. To customize the URLs, complete the following steps: Add your URLs to the advanced section of the MultiClusterObservability spec . See the following example: spec: advanced: customObservabilityHubURL: <yourURL> customAlertmanagerHubURL: <yourURL> Notes: Only HTTPS URLs are supported. If you do not add https:// to your URL, the scheme is added automatically. You can include the standard path for the Remote Write API, /api/metrics/v1/default/api/v1/receive , in the customObservabilityHubURL spec . If you do not include the path, the Observability service automatically adds the path at runtime. Any intermediate component you use for the custom Observability hub cluster URL cannot use TLS termination because the component relies on mTLS authentication. The custom Alertmanager hub cluster URL supports intermediate component TLS termination by using your own existing certificate instructions. If you are using a customObservabilityHubURL , create a route object by using the following template. Replace <intermediate_component_url> with the intermediate component URL: apiVersion: route.openshift.io/v1 kind: Route metadata: name: proxy-observatorium-api namespace: open-cluster-management-observability spec: host: <intermediate_component_url> port: targetPort: public tls: insecureEdgeTerminationPolicy: None termination: passthrough to: kind: Service name: observability-observatorium-api weight: 100 wildcardPolicy: None If you are using a customAlertmanagerHubURL , create a route object by using the following template. Replace <intermediate_component_url> with the intermediate component URL: apiVersion: route.openshift.io/v1 kind: Route metadata: name: alertmanager-proxy namespace: open-cluster-management-observability spec: host: <intermediate_component_url> path: /api/v2 port: targetPort: oauth-proxy tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt to: kind: Service name: alertmanager weight: 100 wildcardPolicy: None 1.4.12. Configuring fine-grain RBAC (Technology Preview) To restrict metric access to specific namespaces within the cluster, use fine-grain role-based access control (RBAC). Using fine-grain RBAC, you can allow application teams to only view the metrics for the namespaces that you give them permission to access. You must configure metric access control on the hub cluster for the users of that hub cluster. On this hub cluster, a ManagedCluster custom resource represents every managed cluster. To configure RBAC and to select the allowed namespaces, use the rules and action verbs specified in the ManagedCluster custom resources. For example, you have an application named, my-awesome-app , and this application is on two different managed clusters, devcluster1 and devcluster2 . The application is in the AwesomeAppNS namespace on both clusters. 
You have an admin user group named, my-awesome-app-admins , and you want to restrict this user group to have access to metrics from only these two namespaces on the hub cluster. In this example, to use fine-grain RBAC to restrict the user group access, complete the following steps: Define a ClusterRole resource with permissions to access metrics. Your resource might resemble the following YAML: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: awesome-app-metrics-role rules: - apiGroups: - "cluster.open-cluster-management.io" resources: - managedclusters: 1 resourceNames: 2 - devcluster1 - devcluster2 verbs: 3 - metrics/AwesomeAppNS 1 Represents the parameter values for the managed clusters. 2 Represents the list of managed clusters. 3 Represents the namespace of the managed clusters. Define a ClusterRoleBinding resource that binds the group, my-awesome-app-admins , with the ClusterRole resource for the awesome-app-metrics-role . Your resource might resemble the following YAML: kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: awesome-app-metrics-role-binding subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: my-awesome-app-admins roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: awesome-app-metrics-role After completing these steps, when the users in the my-awesome-app-admins group log in to the Grafana console, they have the following restrictions: Users see no data for dashboards that summarize fleet-level data. Users can only select managed clusters and namespaces specified in the ClusterRole resource. To set up different types of user access, define separate ClusterRole and ClusterRoleBinding resources to represent the different managed clusters in the namespaces. 1.4.13. Additional resources Refer to Prometheus configuration for more information. For more information about recording rules and alerting rules, refer to the recording rules and alerting rules from the Prometheus documentation . For more information about viewing the dashboard, see Using Grafana dashboards . See Exporting metrics to external endpoints . See Enabling monitoring for user-defined projects . See the Observability API . For information about updating the certificate for the alertmanager route, see Replacing certificates for alertmanager . For more details about observability alerts, see Observability alerts . To learn more about alert forwarding, see the Prometheus Alertmanager documentation . See Observability alerts for more information. For more topics about the observability service, see Observability service . See Management Workload Partitioning for more information. 1.5. Using Observability Use the Observability service to view the utilization of clusters across your fleet. Required access: Cluster administrator Querying metrics using the Observability API Exporting metrics to external endpoints Viewing and exploring data 1.5.1. Querying metrics using the Observability API To access your endpoint by using mutual TLS, which verifies the identities of both parties in a network connection, query metrics through the Red Hat OpenShift Container Platform rbac-query-proxy route with the Observability external API. 
Complete the following steps to get your queries for the rbac-query-proxy route: Get the details of the route with the following command: oc get route rbac-query-proxy -n open-cluster-management-observability To access the rbac-query-proxy route with your OpenShift Container Platform OAuth access token, run the following command to get the token. The token must be associated with a user or service account that has permission to get namespaces: MY_TOKEN=$(oc whoami --show-token) To access the openshift-ingress route, get the default CA certificate and store the content of the tls.crt key in a local file. Run the following command: oc -n openshift-ingress get secret router-certs-default -o jsonpath="{.data.tls\.crt}" | base64 -d > ca.crt To access an endpoint of the default CA certificate from your hub cluster through mutual TLS, which verifies the identities of both parties in a network connection, create the proxy-byo-ca TLS secret with the following command: oc -n open-cluster-management-observability create secret tls proxy-byo-ca --cert ./ca.crt --key ./ca.key To query metrics from the API, run the following command: curl --cacert ./ca.crt -H "Authorization: Bearer {TOKEN}" https://{PROXY_ROUTE_URL}/api/v1/query?query={QUERY_EXPRESSION} Note: The QUERY_EXPRESSION is the standard Prometheus query expression. For example, query the cluster_infrastructure_provider metric by replacing the URL in the command with the following URL: https://{PROXY_ROUTE_URL}/api/v1/query?query=cluster_infrastructure_provider . For more details, see Querying Prometheus . 1.5.1.1. Exporting metrics to external endpoints To support the Prometheus Remote-Write specification in real time, export metrics to external endpoints. Complete the following steps to export metrics to external endpoints: Create the Kubernetes secret for an external endpoint with the access information of the external endpoint in the open-cluster-management-observability namespace. View the following example secret: apiVersion: v1 kind: Secret metadata: name: victoriametrics namespace: open-cluster-management-observability type: Opaque stringData: ep.yaml: | 1 url: http://victoriametrics:8428/api/v1/write 2 http_client_config: 3 basic_auth: 4 username: test 5 password: test 6 tls_config: 7 secret_name: 8 ca_file_key: 9 cert_file_key: 10 key_file_key: 11 insecure_skip_verify: 12 1 The ep.yaml parameter is the key of the content and is used in the MultiClusterObservability custom resource in a later step. Currently, Observability supports exporting metrics to endpoints without any security checks, with basic authentication, or with TLS enabled. View the following descriptions for a full list of supported parameters: 2 The url parameter is required and is the URL for the external endpoint. Enter the value as a string. 3 The http_client_config parameter is optional and is the advanced configuration for the HTTP client. 4 The basic_auth parameter is optional and is the HTTP client configuration for basic authentication. 5 The username parameter is optional and is the user name for basic authorization. Enter the value as a string. 6 The password parameter is optional and is the password for basic authorization. Enter the value as a string. 7 The tls_config parameter is optional and is the HTTP client configuration for TLS. 8 The secret_name parameter is required and is the name of the secret that contains certificates. Enter the value as a string. 9 The ca_file_key parameter is required and is the key of the CA certificate in the secret. 
This parameter is only optional if the insecure_skip_verify parameter is set to true . Enter the value as a string. 10 The cert_file_key parameter is required and is the key of the client certificate in the secret. Enter the value as a string. 11 The key_file_key parameter is required and is the key of the client key in the secret. Enter the value as a string. 12 The insecure_skip_verify parameter is optional and is used to skip verification of the target certificate. Enter the value as a boolean value. To add a list of external endpoints that you want to export, add the writeStorage parameter to the MultiClusterObservability custom resource. View the following example: spec: storageConfig: writeStorage: 1 - key: ep.yaml name: victoriametrics 1 Each item contains two attributes: name and key . Name is the name of the Kubernetes secret that contains endpoint access information, and key is the key of the content in the secret. If you add more than one item to the list, then the metrics are exported to multiple external endpoints. After the metrics export is enabled, view the status of the export by checking the acm_remote_write_requests_total metric. From the OpenShift Container Platform console of your hub cluster, navigate to the Metrics page by clicking Metrics in the Observe section. Then query the acm_remote_write_requests_total metric. The value of that metric is the total number of requests with a specific response for one external endpoint, on one observatorium API instance. The name label is the name for the external endpoint. The code label is the return code of the HTTP request for the metrics export. 1.5.2. Viewing and exploring data by using dashboards View the data from your managed clusters by accessing Grafana from the hub cluster. You can query specific alerts and add filters for the query. For example, to explore the cluster_infrastructure_provider alert from a single-node OpenShift cluster, use the following query expression: cluster_infrastructure_provider{clusterType="SNO"} Note: Do not set the ObservabilitySpec.resources.CPU.limits parameter if Observability is enabled on single-node managed clusters. When you set the CPU limits, it causes the Observability pod to be counted against the capacity for your managed cluster. See the reference for Management Workload Partitioning in the Additional resources section. 1.5.2.1. Viewing historical data When you query historical data, manually set your query parameter options to control how much data is displayed from the dashboard. Complete the following steps: From your hub cluster, select the Grafana link that is in the console header. Edit your cluster dashboard by selecting Edit Panel . From the Query front-end data source in Grafana, click the Query tab. Select $datasource . If you want to see more data, increase the value of the Step parameter section. If the Step parameter section is empty, it is automatically calculated. Find the Custom query parameters field and select max_source_resolution=auto . To verify that the data is displayed, refresh your Grafana page. Your query data appears in the Grafana dashboard. 1.5.2.2. Viewing Red Hat Advanced Cluster Management dashboards When you enable the Red Hat Advanced Cluster Management Observability service, three dashboards become available. View the following dashboard descriptions: Alert Analysis : Overview dashboard of the alerts being generated within the managed cluster fleet. Clusters by Alert : Alert dashboard where you can filter by the alert name. 
Alerts by Cluster : Alert dashboard where you can filter by cluster, and view real-time data for alerts that are initiated or pending within the cluster environment. 1.5.2.3. Viewing the etcd table You can also view the etcd table from the hub cluster dashboard in Grafana to learn about the stability of etcd as a data store. Select the Grafana link from your hub cluster to view the etcd table data, which is collected from your hub cluster. The Leader election changes across managed clusters are displayed. 1.5.2.4. Viewing the Kubernetes API server dashboard To see the total number of clusters that are exceeding or meeting the targeted service-level objective (SLO) value for the past seven-day or 30-day period, offending and non-offending clusters, and API Server Request Duration, use the following options to view the Kubernetes API server dashboards: View the cluster fleet Kubernetes API service-level overview from the hub cluster dashboard in Grafana. Navigate to the Grafana dashboard. Access the managed dashboard menu by selecting Kubernetes > Service-Level Overview > API Server . The Fleet Overview and Top Cluster details are displayed. View the Kubernetes API service-level overview table from the hub cluster dashboard in Grafana to see the error budget for the past seven-day or 30-day period, the remaining downtime, and trend. Navigate to the Grafana dashboard from your hub cluster. Access the managed dashboard menu by selecting Kubernetes > Service-Level Overview > API Server . The Fleet Overview and Top Cluster details are displayed. 1.5.2.5. Viewing the OpenShift Virtualization dashboard You can view the Red Hat OpenShift Virtualization dashboard to see comprehensive insights for each cluster with the OpenShift Virtualization operator installed. The state of the operator is displayed, which is determined by active OpenShift Virtualization alerts and the conditions of the Hyperconverged Cluster Operator. Additionally, you can view the number of running virtual machines and the operator version for each cluster. The dashboard also lists alerts affecting the health of the operator and separately includes all OpenShift Virtualization alerts, even those not impacting the health of the operator. You can filter the dashboard by cluster name, operator health alerts, health impact of alerts, and alert severity. 1.5.3. Additional resources For more information, see Prometheus Remote-Write specification . See Managing user-owned OAuth access tokens . Read Enabling the Observability service . For more topics, see Observability service . 1.5.4. Using Grafana dashboards Use Grafana dashboards to view hub cluster and managed cluster metrics. The data displayed in the Grafana alerts dashboard relies on alerts metrics originating from managed clusters. The alerts metric does not affect managed clusters forwarding alerts to Red Hat Advanced Cluster Management alert manager on the hub cluster. Therefore, the metrics and alerts have distinct propagation mechanisms and follow separate code paths. Even if you see data in the Grafana alerts dashboard, that does not guarantee that the managed cluster alerts are being successfully forwarded to the Red Hat Advanced Cluster Management hub cluster alert manager. If the metrics are propagated from the managed clusters, you can view the data displayed in the Grafana alerts dashboard. 
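For example, assuming the alerts metric is propagated, you can confirm that alert data from a particular managed cluster is present by running a PromQL query such as the following from the Grafana Explore view; the cluster label value is a placeholder for one of your managed cluster names:
ALERTS{cluster="<managed-cluster-name>"}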
To use the Grafana dashboards for your development needs, complete the following: Setting up the Grafana developer instance Designing your Grafana dashboard Uninstalling the Grafana developer instance 1.5.4.1. Setting up the Grafana developer instance You can design your Grafana dashboard by creating a grafana-dev instance. Be sure to use the most current grafana-dev instance. Complete the following steps to set up the Grafana developer instance: Clone the open-cluster-management/multicluster-observability-operator/ repository, so that you are able to run the scripts that are in the tools folder. Run the setup-grafana-dev.sh script to set up your Grafana instance. When you run the script, the following resources are created: secret/grafana-dev-config , deployment.apps/grafana-dev , service/grafana-dev , ingress.extensions/grafana-dev , persistentvolumeclaim/grafana-dev . Switch the user role to Grafana administrator with the switch-to-grafana-admin.sh script. Select the Grafana URL, https://grafana-dev-open-cluster-management-observability.{OPENSHIFT_INGRESS_DOMAIN} , and log in. Then run the following command to add the switched user as Grafana administrator. For example, after you log in using kubeadmin , run the following command: The Grafana developer instance is set up. 1.5.4.1.1. Verifying Grafana version Verify the Grafana version from the command line interface (CLI) or from the Grafana user interface. After you log in to your hub cluster, access the observability-grafana pod terminal. Run the following command: The Grafana version that is currently deployed within the cluster environment is displayed. Alternatively, you can navigate to the Manage tab in the Grafana dashboard. Scroll to the end of the page, where the version is listed. 1.5.4.2. Designing your Grafana dashboard After you set up the Grafana instance, you can design the dashboard. Complete the following steps to refresh the Grafana console and design your dashboard: From the Grafana console, create a dashboard by selecting the Create icon from the navigation panel. Select Dashboard , and then click Add new panel . From the New Dashboard/Edit Panel view, navigate to the Query tab. Configure your query by selecting Observatorium from the data source selector and entering a PromQL query. From the Grafana dashboard header, click the Save icon. Add a descriptive name and click Save . 1.5.4.2.1. Designing your Grafana dashboard with a ConfigMap Design your Grafana dashboard with a ConfigMap. You can use the generate-dashboard-configmap-yaml.sh script to generate the dashboard ConfigMap, and to save the ConfigMap locally: If you do not have permissions to run the previously mentioned script, complete the following steps: Select a dashboard and click the Dashboard settings icon. Click the JSON Model icon from the navigation panel. Copy the dashboard JSON data and paste it in the data section. Modify the name and replace $your-dashboard-name . Enter a universally unique identifier (UUID) in the uid field in data.$your-dashboard-name.json.$$your_dashboard_json . You can use a program such as uuidgen to create a UUID. 
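For example, a minimal way to generate a UUID from the CLI, assuming the uuidgen utility is available on your system, is to run the following command and copy the printed value into the uid field:
uuidgen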
Your ConfigMap might resemble the following file: kind: ConfigMap apiVersion: v1 metadata: name: $your-dashboard-name namespace: open-cluster-management-observability labels: grafana-custom-dashboard: "true" data: $your-dashboard-name.json: |- $your_dashboard_json Notes: If your dashboard is created within the grafana-dev instance, you can take the name of the dashboard and pass it as an argument in the script. For example, a dashboard named Demo Dashboard is created in the grafana-dev instance. From the CLI, you can run the following script: After running the script, you might receive the following message: If your dashboard is not in the General folder, you can specify the folder name in the annotations section of this ConfigMap: After you complete your updates for the ConfigMap, you can install it to import the dashboard to the Grafana instance. Verify that the YAML file is created by applying the YAML from the CLI or OpenShift Container Platform console. A ConfigMap within the open-cluster-management-observability namespace is created. Run the following command from the CLI: From the OpenShift Container Platform console, create the ConfigMap using the demo-dashboard.yaml file. The dashboard is located in the Custom folder. 1.5.4.3. Uninstalling the Grafana developer instance When you uninstall the instance, the related resources are also deleted. Run the following command: 1.5.4.4. Additional resources See Exporting metrics to external endpoints . See uuidgen for instructions to create a UUID. See Using managed cluster labels in Grafana for more details. Return to the beginning of the page, Using Grafana dashboards . For more topics, see the Observing environments introduction . 1.5.5. Using managed cluster labels in Grafana Enable managed cluster labels to use them with Grafana dashboards. When observability is enabled in the hub cluster, the observability-managed-cluster-label-allowlist ConfigMap is created in the open-cluster-management-observability namespace. The ConfigMap contains a list of managed cluster labels maintained by the observability-rbac-query-proxy pod, to populate a list of label names to filter from within the ACM - Cluster Overview Grafana dashboard. By default, observability ignores a subset of labels in the observability-managed-cluster-label-allowlist ConfigMap. When a cluster is imported into the managed cluster fleet or modified, the observability-rbac-query-proxy pod watches for any changes in reference to the managed cluster labels and automatically updates the observability-managed-cluster-label-allowlist ConfigMap to reflect the changes. The ConfigMap contains only unique label names, which are either included in the ignore_labels or labels list. 
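To inspect the current contents of the allowlist, you can print the ConfigMap with the oc CLI, for example by running the following command (a minimal sketch; the ConfigMap name and namespace are the defaults described earlier):
oc -n open-cluster-management-observability get configmap observability-managed-cluster-label-allowlist -o yaml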
Your observability-managed-cluster-label-allowlist ConfigMap might resemble the following YAML file: data: managed_cluster.yaml: | ignore_labels: 1 - clusterID - cluster.open-cluster-management.io/clusterset - feature.open-cluster-management.io/addon-application-manager - feature.open-cluster-management.io/addon-cert-policy-controller - feature.open-cluster-management.io/addon-cluster-proxy - feature.open-cluster-management.io/addon-config-policy-controller - feature.open-cluster-management.io/addon-governance-policy-framework - feature.open-cluster-management.io/addon-iam-policy-controller - feature.open-cluster-management.io/addon-observability-controller - feature.open-cluster-management.io/addon-search-collector - feature.open-cluster-management.io/addon-work-manager - installer.name - installer.namespace - local-cluster - name labels: 2 - cloud - vendor 1 Any label that is listed in the ignore_labels keylist of the ConfigMap is removed from the drop-down filter on the ACM - Clusters Overview Grafana dashboard. 2 The labels that are enabled are displayed in the drop-down filter on the ACM - Clusters Overview Grafana dashboard. The values are from the acm_managed_cluster_labels metric, depending on the label key value that is selected. Continue reading how to use managed cluster labels in Grafana: Adding managed cluster labels Enabling managed cluster labels Disabling managed cluster labels 1.5.5.1. Adding managed cluster labels When you add a managed cluster label to the observability-managed-cluster-label-allowlist ConfigMap, the label becomes available as a filter option in Grafana. Add a unique label to the hub cluster, or managed cluster object that is associated with the managed cluster fleet. For example, if you add the label, department=finance to a managed cluster, the ConfigMap is updated and might resemble the following changes: data: managed_cluster.yaml: | ignore_labels: - clusterID - cluster.open-cluster-management.io/clusterset - feature.open-cluster-management.io/addon-application-manager - feature.open-cluster-management.io/addon-cert-policy-controller - feature.open-cluster-management.io/addon-cluster-proxy - feature.open-cluster-management.io/addon-config-policy-controller - feature.open-cluster-management.io/addon-governance-policy-framework - feature.open-cluster-management.io/addon-iam-policy-controller - feature.open-cluster-management.io/addon-observability-controller - feature.open-cluster-management.io/addon-search-collector - feature.open-cluster-management.io/addon-work-manager - installer.name - installer.namespace - local-cluster - name labels: - cloud - department - vendor 1.5.5.2. Enabling managed cluster labels Enable a managed cluster label that is already disabled by removing the label from the ignore_labels list in the observability-managed-cluster-label-allowlist ConfigMap. For example, enable the local-cluster and name labels. Your observability-managed-cluster-label-allowlist ConfigMap might resemble the following content: data: managed_cluster.yaml: | ignore_labels: - clusterID - installer.name - installer.namespace labels: - cloud - vendor - local-cluster - name The ConfigMap resyncs after 30 seconds to ensure that the cluster labels are updated. After you update the ConfigMap, check the observability-rbac-query-proxy pod logs in the open-cluster-management-observability namespace to verify where the label is listed. 
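For example, a minimal sketch for viewing those logs with the oc CLI might resemble the following commands; the pod name in the second command is a placeholder that you copy from the output of the first command:
oc -n open-cluster-management-observability get pods | grep rbac-query-proxy
oc -n open-cluster-management-observability logs <rbac-query-proxy-pod-name>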
The following information might be displayed in the pod log: From the Grafana dashboard, verify that the label is listed as a value in the Label drop-down menu. 1.5.5.3. Disabling managed cluster labels Exclude a managed cluster label from being listed in the Label drop-down filter. Add the label name to the ignore_labels list. For example, your YAML might resemble the following file if you add local-cluster and name back into the ignore_labels list: data: managed_cluster.yaml: | ignore_labels: - clusterID - installer.name - installer.namespace - local-cluster - name labels: - cloud - vendor Check the observability-rbac-query-proxy pod logs in the open-cluster-management-observability namespace to verify where the label is listed. The following information might be displayed in the pod log: 1.5.5.4. Additional resources See Using Grafana dashboards . Return to the beginning of the page, Using managed cluster labels in Grafana . 1.6. Managing alerts Receive and define alerts for the observability service to be notified of hub cluster and managed cluster changes. Configuring Alertmanager Forwarding alerts Silencing alerts Suppressing alerts 1.6.1. Configuring Alertmanager Integrate external messaging tools such as email, Slack, and PagerDuty to receive notifications from Alertmanager. You must override the alertmanager-config secret in the open-cluster-management-observability namespace to add integrations, and configure routes for Alertmanager. Complete the following steps to update the custom receiver rules: Extract the data from the alertmanager-config secret. Run the following command: oc -n open-cluster-management-observability get secret alertmanager-config --template='{{ index .data "alertmanager.yaml" }}' |base64 -d > alertmanager.yaml Edit and save the alertmanager.yaml file configuration by running the following command: oc -n open-cluster-management-observability create secret generic alertmanager-config --from-file=alertmanager.yaml --dry-run -o=yaml | oc -n open-cluster-management-observability replace secret --filename=- Your updated secret might resemble the following content: global: smtp_smarthost: 'localhost:25' smtp_from: '[email protected]' smtp_auth_username: 'alertmanager' smtp_auth_password: 'password' templates: - '/etc/alertmanager/template/*.tmpl' route: group_by: ['alertname', 'cluster', 'service'] group_wait: 30s group_interval: 5m repeat_interval: 3h receiver: team-X-mails routes: - match_re: service: ^(foo1|foo2|baz)$ receiver: team-X-mails Your changes are applied immediately after the secret is modified. For an example Alertmanager configuration, see prometheus/alertmanager . 1.6.2. Forwarding alerts After you enable observability, alerts from your OpenShift Container Platform managed clusters are automatically sent to the hub cluster. You can use the alertmanager-config YAML file to configure alerts with an external notification system. View the following example of the alertmanager-config YAML file: global: slack_api_url: '<slack_webhook_url>' route: receiver: 'slack-notifications' group_by: [alertname, datacenter, app] receivers: - name: 'slack-notifications' slack_configs: - channel: '#alerts' text: 'https://internal.myorg.net/wiki/alerts/{{ .GroupLabels.app }}/{{ .GroupLabels.alertname }}' If you want to configure a proxy for alert forwarding, add the following global entry to the alertmanager-config YAML file: global: slack_api_url: '<slack_webhook_url>' http_config: proxy_url: http://**** 1.6.2.1. 
Disabling alert forwarding for managed clusters To disable alert forwarding for managed clusters, add the following annotation to the MultiClusterObservability custom resource: metadata: annotations: mco-disable-alerting: "true" When you set the annotation, the alert forwarding configuration on the managed clusters is reverted. Any changes made to the ocp-monitoring-config config map in the openshift-monitoring namespace are also reverted. Setting the annotation ensures that the ocp-monitoring-config config map is no longer managed or updated by the observability operator endpoint. After you update the configuration, the Prometheus instance on your managed cluster restarts. Important: Metrics on your managed cluster are lost if you have a Prometheus instance with a persistent volume for metrics, and the Prometheus instance restarts. Metrics from the hub cluster are not affected. When the changes are reverted, a ConfigMap named cluster-monitoring-reverted is created in the open-cluster-management-addon-observability namespace. Any new, manually added alert forwarding configurations are not reverted from the ConfigMap. Verify that the hub cluster alert manager is no longer propagating managed cluster alerts to third-party messaging tools. See the section, Configuring Alertmanager . 1.6.3. Silencing alerts Add silences for alerts that you do not want to receive. You can silence alerts by the alert name, match label, or time duration. After you add the alert that you want to silence, an ID is created. The ID for your silenced alert might resemble the following string: d839aca9-ed46-40be-84c4-dca8773671da . Continue reading for ways to silence alerts: To silence a Red Hat Advanced Cluster Management alert, you must have access to the alertmanager pods in the open-cluster-management-observability namespace. For example, enter the following command in the observability-alertmanager-0 pod terminal to silence SampleAlert : amtool silence add --alertmanager.url="http://localhost:9093" --author="user" --comment="Silencing sample alert" alertname="SampleAlert" Silence an alert by using multiple match labels. The following command uses match-label-1 and match-label-2 : amtool silence add --alertmanager.url="http://localhost:9093" --author="user" --comment="Silencing sample alert" <match-label-1>=<match-value-1> <match-label-2>=<match-value-2> If you want to silence an alert for a specific period of time, use the --duration flag. Run the following command to silence the SampleAlert for an hour: amtool silence add --alertmanager.url="http://localhost:9093" --author="user" --comment="Silencing sample alert" --duration="1h" alertname="SampleAlert" You can also specify a start or end time for the silenced alert. Enter the following command to silence the SampleAlert at a specific start time: amtool silence add --alertmanager.url="http://localhost:9093" --author="user" --comment="Silencing sample alert" --start="2023-04-14T15:04:05-07:00" alertname="SampleAlert" To view all silenced alerts that are created, run the following command: amtool silence --alertmanager.url="http://localhost:9093" If you no longer want an alert to be silenced, end the silencing of the alert by running the following command: amtool silence expire --alertmanager.url="http://localhost:9093" "d839aca9-ed46-40be-84c4-dca8773671da" To end the silencing of all alerts, run the following command: amtool silence expire --alertmanager.url="http://localhost:9093" $(amtool silence query --alertmanager.url="http://localhost:9093" -q) 1.6.3.1. 
Migrating observability storage If you use alert silencers, you can migrate observability storage while retaining the silencers from the earlier state. To do this, migrate your Red Hat Advanced Cluster Management observability storage by creating new StatefulSets and PersistentVolumes (PV) resources that use your chosen StorageClass resource. Note: The storage for PVs is different from the object storage used to store the metrics collected from your clusters. When you use StatefulSets and PVs to migrate your observability data to new storage, it stores the following data components: Observatorium or Thanos: Receives data then uploads it to object storage. Some of its components store data in PVs. For this data, Observatorium or Thanos automatically regenerates it from the object storage on startup, so there is no consequence if you lose this data. Alertmanager: Only stores silenced alerts. If you want to keep these silenced alerts, you must migrate that data to the new PV. To migrate your observability storage, complete the following steps: In the MultiClusterObservability custom resource, set the .spec.storageConfig.storageClass field to the new storage class. To ensure the data of the earlier PersistentVolumes is retained even when you delete the PersistentVolumeClaim , go to all your existing PersistentVolumes . Change the reclaimPolicy to "Retain": oc patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' . Optional: To avoid losing data, see Migrate persistent data to another Storage Class in DG 8 Operator in OCP 4 . Delete both the StatefulSet and the PersistentVolumeClaim in the following StatefulSet cases: alertmanager-db-observability-alertmanager-<REPLICA_NUMBER> data-observability-thanos-<COMPONENT_NAME> data-observability-thanos-receive-default data-observability-thanos-store-shard Important: You might need to delete, then re-create, the MultiClusterObservability operator pod so that you can create the new StatefulSet . Re-create a new PersistentVolumeClaim with the same name but the correct StorageClass . Create a new PersistentVolumeClaim referring to the old PersistentVolume . Verify that the new StatefulSet and PersistentVolumes use the new StorageClass that you chose. 1.6.4. Suppressing alerts Suppress less severe Red Hat Advanced Cluster Management alerts globally across your clusters. Suppress alerts by defining an inhibition rule in the alertmanager-config in the open-cluster-management-observability namespace. An inhibition rule mutes an alert when there is a set of parameter matches that match another set of existing matchers. In order for the rule to take effect, both the target and source alerts must have the same label values for the label names in the equal list. Your inhibit_rules might resemble the following: global: resolve_timeout: 1h inhibit_rules: 1 - equal: - namespace source_match: 2 severity: critical target_match_re: severity: warning|info 1 The inhibit_rules parameter section is defined to look for alerts in the same namespace. When a critical alert is initiated within a namespace and if there are any other alerts that contain the severity level warning or info in that namespace, only the critical alerts are routed to the Alertmanager receiver. 
The following alerts might be displayed when there are matches: 2 If the values of the source_match and target_match_re parameters do not match, the alert is routed to the receiver: To view suppressed alerts in Red Hat Advanced Cluster Management, enter the following command: amtool alert --alertmanager.url="http://localhost:9093" --inhibited 1.6.5. Additional resources See Customizing observability for more details. For more observability topics, see Observability service .
[ "create namespace open-cluster-management-observability", "DOCKER_CONFIG_JSON=`oc extract secret/multiclusterhub-operator-pull-secret -n open-cluster-management --to=-`", "DOCKER_CONFIG_JSON=`oc extract secret/pull-secret -n openshift-config --to=-`", "create secret generic multiclusterhub-operator-pull-secret -n open-cluster-management-observability --from-literal=.dockerconfigjson=\"USDDOCKER_CONFIG_JSON\" --type=kubernetes.io/dockerconfigjson", "create -f thanos-object-storage.yaml -n open-cluster-management-observability", "apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: s3 config: bucket: YOUR_S3_BUCKET endpoint: YOUR_S3_ENDPOINT 1 insecure: true access_key: YOUR_ACCESS_KEY secret_key: YOUR_SECRET_KEY", "apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: GCS config: bucket: YOUR_GCS_BUCKET service_account: YOUR_SERVICE_ACCOUNT", "apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: AZURE config: storage_account: YOUR_STORAGE_ACCT storage_account_key: YOUR_STORAGE_KEY container: YOUR_CONTAINER endpoint: blob.core.windows.net 1 max_retries: 0", "apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: s3 config: bucket: YOUR_RH_DATA_FOUNDATION_BUCKET endpoint: YOUR_RH_DATA_FOUNDATION_ENDPOINT 1 insecure: false access_key: YOUR_RH_DATA_FOUNDATION_ACCESS_KEY secret_key: YOUR_RH_DATA_FOUNDATION_SECRET_KEY", "apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: s3 config: bucket: YOUR_ROKS_S3_BUCKET endpoint: YOUR_ROKS_S3_ENDPOINT 1 insecure: true access_key: YOUR_ROKS_ACCESS_KEY secret_key: YOUR_ROKS_SECRET_KEY", "export POLICY_VERSION=USD(date +\"%m-%d-%y\") export TRUST_POLICY_VERSION=USD(date +\"%m-%d-%y\") export CLUSTER_NAME=<my-cluster> export S3_BUCKET=USDCLUSTER_NAME-acm-observability export REGION=us-east-2 export NAMESPACE=open-cluster-management-observability export SA=tbd export SCRATCH_DIR=/tmp/scratch export OIDC_PROVIDER=USD(oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer| sed -e \"s/^https:\\/\\///\") export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export AWS_PAGER=\"\" rm -rf USDSCRATCH_DIR mkdir -p USDSCRATCH_DIR", "aws s3 mb s3://USDS3_BUCKET", "{ \"Version\": \"USDPOLICY_VERSION\", \"Statement\": [ { \"Sid\": \"Statement\", \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:PutObjectAcl\", \"s3:CreateBucket\", \"s3:DeleteBucket\" ], \"Resource\": [ \"arn:aws:s3:::USDS3_BUCKET/*\", \"arn:aws:s3:::USDS3_BUCKET\" ] } ] }", "S3_POLICY=USD(aws iam create-policy --policy-name USDCLUSTER_NAME-acm-obs --policy-document file://USDSCRATCH_DIR/s3-policy.json --query 'Policy.Arn' --output text) echo USDS3_POLICY", "{ \"Version\": \"USDTRUST_POLICY_VERSION\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { 
\"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{NAMESPACE}:observability-thanos-query\", \"system:serviceaccount:USD{NAMESPACE}:observability-thanos-store-shard\", \"system:serviceaccount:USD{NAMESPACE}:observability-thanos-compact\" \"system:serviceaccount:USD{NAMESPACE}:observability-thanos-rule\", \"system:serviceaccount:USD{NAMESPACE}:observability-thanos-receive\", ] } } } ] }", "S3_ROLE=USD(aws iam create-role --role-name \"USDCLUSTER_NAME-acm-obs-s3\" --assume-role-policy-document file://USDSCRATCH_DIR/TrustPolicy.json --query \"Role.Arn\" --output text) echo USDS3_ROLE", "aws iam attach-role-policy --role-name \"USDCLUSTER_NAME-acm-obs-s3\" --policy-arn USDS3_POLICY", "apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: s3 config: bucket: USDS3_BUCKET endpoint: s3.USDREGION.amazonaws.com signature_version2: false", "YOUR_CLOUD_PROVIDER_ACCESS_KEY=USD(oc -n open-cluster-management-observability get secret <object-storage-secret> -o jsonpath=\"{.data.thanos\\.yaml}\" | base64 --decode | grep access_key | awk '{print USD2}')", "echo USDYOUR_CLOUD_PROVIDER_ACCESS_KEY", "YOUR_CLOUD_PROVIDER_SECRET_KEY=USD(oc -n open-cluster-management-observability get secret <object-storage-secret> -o jsonpath=\"{.data.thanos\\.yaml}\" | base64 --decode | grep secret_key | awk '{print USD2}')", "echo USDSECRET_KEY", "observability-thanos-query (deployment) observability-thanos-compact (statefulset) observability-thanos-receive-default (statefulset) observability-thanos-rule (statefulset) observability-thanos-store-shard-x (statefulsets)", "apiVersion: observability.open-cluster-management.io/v1beta2 kind: MultiClusterObservability metadata: name: observability spec: observabilityAddonSpec: {} storageConfig: metricObjectStorage: name: thanos-object-storage key: thanos.yaml", "spec: advanced: compact: serviceAccountAnnotations: eks.amazonaws.com/role-arn: USDS3_ROLE store: serviceAccountAnnotations: eks.amazonaws.com/role-arn: USDS3_ROLE rule: serviceAccountAnnotations: eks.amazonaws.com/role-arn: USDS3_ROLE receive: serviceAccountAnnotations: eks.amazonaws.com/role-arn: USDS3_ROLE query: serviceAccountAnnotations: eks.amazonaws.com/role-arn: USDS3_ROLE", "nodeSelector: node-role.kubernetes.io/infra: \"\"", "apply -f multiclusterobservability_cr.yaml", "get deploy multicluster-observability-operator -n open-cluster-management --show-labels NAME READY UP-TO-DATE AVAILABLE AGE LABELS multicluster-observability-operator 1/1 1 1 35m installer.name=multiclusterhub,installer.namespace=open-cluster-management", "labels: installer.name: multiclusterhub installer.namespace: open-cluster-management", "thanos --version", "spec: imagePullPolicy: Always imagePullSecret: multiclusterhub-operator-pull-secret observabilityAddonSpec: # The ObservabilityAddonSpec defines the global settings for all managed clusters which have observability add-on enabled enableMetrics: false #indicates the observability addon push metrics to hub server", "data: custom_rules.yaml: | groups: - name: cluster-health rules: - alert: ClusterCPUHealth-jb annotations: summary: Notify when CPU utilization on a cluster is greater than the defined utilization limit description: \"The cluster has a high CPU usage: {{ USDvalue }} core for {{ USDlabels.cluster }} {{ USDlabels.clusterID }}.\" expr: | max(cluster:cpu_usage_cores:sum) by (clusterID, cluster, prometheus) > 0 for: 5s labels: cluster: \"{{ USDlabels.cluster }}\" prometheus: 
\"{{ USDlabels.prometheus }}\" severity: critical", "data: custom_rules.yaml: | groups: - name: container-memory rules: - record: pod:container_memory_cache:sum expr: sum(container_memory_cache{pod!=\"\"}) BY (pod, container)", "get mco observability -o yaml", "Observability components are deployed and running", "apply -n open-cluster-management-observability -f observability-metrics-custom-allowlist.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: observability-metrics-custom-allowlist data: metrics_list.yaml: | names: 1 - node_memory_MemTotal_bytes rules: 2 - record: apiserver_request_duration_seconds:histogram_quantile_90 expr: histogram_quantile(0.90,sum(rate(apiserver_request_duration_seconds_bucket{job=\\\"apiserver\\\", verb!=\\\"WATCH\\\"}[5m])) by (verb,le))", "kind: ConfigMap apiVersion: v1 metadata: name: observability-metrics-custom-allowlist namespace: test data: uwl_metrics_list.yaml: 1 names: 2 - sample_metrics", "get mco observability -o yaml", "-cluster_infrastructure_provider", "apply -n open-cluster-management-observability -f observability-metrics-custom-allowlist.yaml", "edit mco observability -o yaml", "spec: advanced: retentionConfig: blockDuration: 2h deleteDelay: 48h retentionInLocal: 24h retentionResolutionRaw: 365d retentionResolution5m: 365d retentionResolution1h: 365d receive: resources: limits: memory: 4096Gi replicas: 3", "edit mco observability", "spec: advanced: retentionConfig: retentionResolutionRaw: 365d retentionResolution5m: 365d retentionResolution1h: 365d", "collect_rules: - group: SNOResourceUsage annotations: description: > By default, a {sno} cluster does not collect pod and container resource metrics. Once a {sno} cluster reaches a level of resource consumption, these granular metrics are collected dynamically. When the cluster resource consumption is consistently less than the threshold for a period of time, collection of the granular metrics stops. 
selector: matchExpressions: - key: clusterType operator: In values: [\"{sno}\"] rules: - collect: SNOHighCPUUsage annotations: description: > Collects the dynamic metrics specified if the cluster cpu usage is constantly more than 70% for 2 minutes expr: (1 - avg(rate(node_cpu_seconds_total{mode=\\\"idle\\\"}[5m]))) * 100 > 70 for: 2m dynamic_metrics: names: - container_cpu_cfs_periods_total - container_cpu_cfs_throttled_periods_total - kube_pod_container_resource_limits - kube_pod_container_resource_requests - namespace_workload_pod:kube_pod_owner:relabel - node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate - node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate - collect: SNOHighMemoryUsage annotations: description: > Collects the dynamic metrics specified if the cluster memory usage is constantly more than 70% for 2 minutes expr: (1 - sum(:node_memory_MemAvailable_bytes:sum) / sum(kube_node_status_allocatable{resource=\\\"memory\\\"})) * 100 > 70 for: 2m dynamic_metrics: names: - kube_pod_container_resource_limits - kube_pod_container_resource_requests - namespace_workload_pod:kube_pod_owner:relabel matches: - __name__=\"container_memory_cache\",container!=\"\" - __name__=\"container_memory_rss\",container!=\"\" - __name__=\"container_memory_swap\",container!=\"\" - __name__=\"container_memory_working_set_bytes\",container!=\"\"", "collect_rules: - group: -SNOResourceUsage", "spec: advanced: receive: replicas: 6", "create secret generic <tls_secret_name> --from-file=ca.crt=<path_to_file> -n open-cluster-management-observability", "apiVersion: v1 kind: Secret metadata: name: <tls_secret_name> namespace: open-cluster-management-observability type: Opaque data: ca.crt: <base64_encoded_ca_certificate>", "edit mco observability -o yaml", "metricObjectStorage: key: thanos.yaml name: thanos-object-storage tlsSecretName: tls-certs-secret 1 tlsSecretMountPath: /etc/minio/certs 2", "thanos.yaml: | type: s3 config: bucket: \"thanos\" endpoint: \"minio:9000\" insecure: false 1 access_key: \"minio\" secret_key: \"minio123\" http_config: tls_config: ca_file: /etc/minio/certs/ca.crt 2 insecure_skip_verify: false", "thanos.yaml: | type: s3 config: bucket: \"thanos\" endpoint: \"minio:9000\" insecure: false access_key: \"minio\" secret_key: \"minio123\" http_config: tls_config: ca_file: /etc/minio/certs/ca.crt 1 cert_file: /etc/minio/certs/public.crt key_file: /etc/minio/certs/private.key insecure_skip_verify: false", "-n open-cluster-management-observability get pods -l app.kubernetes.io/name=thanos-store", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: <addon-deploy-config-name> namespace: <managed-cluster-name> spec: agentInstallNamespace: open-cluster-managment-addon-observability proxyConfig: httpsProxy: \"http://<username>:<password>@<ip>:<port>\" 1 noProxy: \".cluster.local,.svc,172.30.0.1\" 2", "-n default describe svc kubernetes | grep IP:", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: observability-controller namespace: <managed-cluster-name> spec: installNamespace: open-cluster-managment-addon-observability configs: - group: addon.open-cluster-management.io resource: AddonDeploymentConfig name: <addon-deploy-config-name> namespace: <managed-cluster-name>", "spec: advanced: customObservabilityHubURL: <yourURL> customAlertmanagerHubURL: <yourURL>", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: proxy-observatorium-api namespace: 
open-cluster-management-observability spec: host: <intermediate_component_url> port: targetPort: public tls: insecureEdgeTerminationPolicy: None termination: passthrough to: kind: Service name: observability-observatorium-api weight: 100 wildcardPolicy: None", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: alertmanager-proxy namespace: open-cluster-management-observability spec: host: <intermediate_component_url> path: /api/v2 port: targetPort: oauth-proxy tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt to: kind: Service name: alertmanager weight: 100 wildcardPolicy: None", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: awesome-app-metrics-role rules: - apiGroups: - \"cluster.open-cluster-management.io\" resources: - managedclusters: 1 resourceNames: 2 - devcluster1 - devcluster2 verbs: 3 - metrics/AwesomeAppNS", "kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: awesome-app-metrics-role-binding subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: my-awesome-app-admins roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: awesome-app-metrics-role", "get route rbac-query-proxy -n open-cluster-management-observability", "MY_TOKEN=USD(oc whoami --show-token)", "-n openshift-ingress get secret router-certs-default -o jsonpath=\"{.data.tls\\.crt}\" | base64 -d > ca.crt", "-n open-cluster-management-observability create secret tls proxy-byo-ca --cert ./ca.crt --key ./ca.key", "curl --cacert ./ca.crt -H \"Authorization: Bearer {TOKEN}\" https://{PROXY_ROUTE_URL}/api/v1/query?query={QUERY_EXPRESSION}", "apiVersion: v1 kind: Secret metadata: name: victoriametrics namespace: open-cluster-management-observability type: Opaque stringData: ep.yaml: | 1 url: http://victoriametrics:8428/api/v1/write 2 http_client_config: 3 basic_auth: 4 username: test 5 password: test 6 tls_config: 7 secret_name: 8 ca_file_key: 9 cert_file_key: 10 key_file_key: 11 insecure_skip_verify: 12", "spec: storageConfig: writeStorage: 1 - key: ep.yaml name: victoriametrics", "./setup-grafana-dev.sh --deploy secret/grafana-dev-config created deployment.apps/grafana-dev created service/grafana-dev created serviceaccount/grafana-dev created clusterrolebinding.rbac.authorization.k8s.io/open-cluster-management:grafana-crb-dev created route.route.openshift.io/grafana-dev created persistentvolumeclaim/grafana-dev created oauthclient.oauth.openshift.io/grafana-proxy-client-dev created deployment.apps/grafana-dev patched service/grafana-dev patched route.route.openshift.io/grafana-dev patched oauthclient.oauth.openshift.io/grafana-proxy-client-dev patched clusterrolebinding.rbac.authorization.k8s.io/open-cluster-management:grafana-crb-dev patched", "./switch-to-grafana-admin.sh kube:admin User <kube:admin> switched to be grafana admin", "grafana-cli", "./generate-dashboard-configmap-yaml.sh \"Your Dashboard Name\" Save dashboard <your-dashboard-name> to ./your-dashboard-name.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: USDyour-dashboard-name namespace: open-cluster-management-observability labels: grafana-custom-dashboard: \"true\" data: USDyour-dashboard-name.json: |- USDyour_dashboard_json", "./generate-dashboard-configmap-yaml.sh \"Demo Dashboard\"", "Save dashboard <demo-dashboard> to ./demo-dashboard.yaml", "annotations: observability.open-cluster-management.io/dashboard-folder: Custom", "apply -f demo-dashboard.yaml", "./setup-grafana-dev.sh --clean secret \"grafana-dev-config\" deleted 
deployment.apps \"grafana-dev\" deleted serviceaccount \"grafana-dev\" deleted route.route.openshift.io \"grafana-dev\" deleted persistentvolumeclaim \"grafana-dev\" deleted oauthclient.oauth.openshift.io \"grafana-proxy-client-dev\" deleted clusterrolebinding.rbac.authorization.k8s.io \"open-cluster-management:grafana-crb-dev\" deleted", "data: managed_cluster.yaml: | ignore_labels: 1 - clusterID - cluster.open-cluster-management.io/clusterset - feature.open-cluster-management.io/addon-application-manager - feature.open-cluster-management.io/addon-cert-policy-controller - feature.open-cluster-management.io/addon-cluster-proxy - feature.open-cluster-management.io/addon-config-policy-controller - feature.open-cluster-management.io/addon-governance-policy-framework - feature.open-cluster-management.io/addon-iam-policy-controller - feature.open-cluster-management.io/addon-observability-controller - feature.open-cluster-management.io/addon-search-collector - feature.open-cluster-management.io/addon-work-manager - installer.name - installer.namespace - local-cluster - name labels: 2 - cloud - vendor", "data: managed_cluster.yaml: | ignore_labels: - clusterID - cluster.open-cluster-management.io/clusterset - feature.open-cluster-management.io/addon-application-manager - feature.open-cluster-management.io/addon-cert-policy-controller - feature.open-cluster-management.io/addon-cluster-proxy - feature.open-cluster-management.io/addon-config-policy-controller - feature.open-cluster-management.io/addon-governance-policy-framework - feature.open-cluster-management.io/addon-iam-policy-controller - feature.open-cluster-management.io/addon-observability-controller - feature.open-cluster-management.io/addon-search-collector - feature.open-cluster-management.io/addon-work-manager - installer.name - installer.namespace - local-cluster - name labels: - cloud - department - vendor", "data: managed_cluster.yaml: | ignore_labels: - clusterID - installer.name - installer.namespace labels: - cloud - vendor - local-cluster - name", "enabled managedcluster labels: <label>", "data: managed_cluster.yaml: | ignore_labels: - clusterID - installer.name - installer.namespace - local-cluster - name labels: - cloud - vendor", "disabled managedcluster label: <label>", "-n open-cluster-management-observability get secret alertmanager-config --template='{{ index .data \"alertmanager.yaml\" }}' |base64 -d > alertmanager.yaml", "-n open-cluster-management-observability create secret generic alertmanager-config --from-file=alertmanager.yaml --dry-run -o=yaml | oc -n open-cluster-management-observability replace secret --filename=-", "global smtp_smarthost: 'localhost:25' smtp_from: '[email protected]' smtp_auth_username: 'alertmanager' smtp_auth_password: 'password' templates: - '/etc/alertmanager/template/*.tmpl' route: group_by: ['alertname', 'cluster', 'service'] group_wait: 30s group_interval: 5m repeat_interval: 3h receiver: team-X-mails routes: - match_re: service: ^(foo1|foo2|baz)USD receiver: team-X-mails", "global: slack_api_url: '<slack_webhook_url>' route: receiver: 'slack-notifications' group_by: [alertname, datacenter, app] receivers: - name: 'slack-notifications' slack_configs: - channel: '#alerts' text: 'https://internal.myorg.net/wiki/alerts/{{ .GroupLabels.app }}/{{ .GroupLabels.alertname }}'", "global: slack_api_url: '<slack_webhook_url>' http_config: proxy_url: http://****", "metadata: annotations: mco-disable-alerting: \"true\"", "amtool silence add --alertmanager.url=\"http://localhost:9093\" 
--author=\"user\" --comment=\"Silencing sample alert\" alertname=\"SampleAlert\"", "amtool silence add --alertmanager.url=\"http://localhost:9093\" --author=\"user\" --comment=\"Silencing sample alert\" <match-label-1>=<match-value-1> <match-label-2>=<match-value-2>", "amtool silence add --alertmanager.url=\"http://localhost:9093\" --author=\"user\" --comment=\"Silencing sample alert\" --duration=\"1h\" alertname=\"SampleAlert\"", "amtool silence add --alertmanager.url=\"http://localhost:9093\" --author=\"user\" --comment=\"Silencing sample alert\" --start=\"2023-04-14T15:04:05-07:00\" alertname=\"SampleAlert\"", "amtool silence --alertmanager.url=\"http://localhost:9093\"", "amtool silence expire --alertmanager.url=\"http://localhost:9093\" \"d839aca9-ed46-40be-84c4-dca8773671da\"", "amtool silence expire --alertmanager.url=\"http://localhost:9093\" USD(amtool silence query --alertmanager.url=\"http://localhost:9093\" -q)", "global: resolve_timeout: 1h inhibit_rules: 1 - equal: - namespace source_match: 2 severity: critical target_match_re: severity: warning|info", "ALERTS{alertname=\"foo\", namespace=\"ns-1\", severity=\"critical\"} ALERTS{alertname=\"foo\", namespace=\"ns-1\", severity=\"warning\"}", "ALERTS{alertname=\"foo\", namespace=\"ns-1\", severity=\"critical\"} ALERTS{alertname=\"foo\", namespace=\"ns-2\", severity=\"warning\"}", "amtool alert --alertmanager.url=\"http://localhost:9093\" --inhibited" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/observability/observing-environments-intro
Getting started
Getting started OpenShift Container Platform 4.14 Getting started in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "/ws/data/load", "Items inserted in database: 2893", "oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify", "oc login <https://api.your-openshift-server.com> --token=<tokenID>", "oc login <cluster_url> --web", "oc new-project user-getting-started --display-name=\"Getting Started with OpenShift\"", "Now using project \"user-getting-started\" on server \"https://openshift.example.com:6443\".", "oc adm policy add-role-to-user view -z default -n user-getting-started", "oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap -l 'app=national-parks-app,component=parksmap,role=frontend,app.kubernetes.io/part-of=national-parks-app'", "--> Found container image 0c2f55f (12 months old) from quay.io for \"quay.io/openshiftroadshow/parksmap:latest\" * An image stream tag will be created as \"parksmap:latest\" that will track this image --> Creating resources with label app=national-parks-app,app.kubernetes.io/part-of=national-parks-app,component=parksmap,role=frontend imagestream.image.openshift.io \"parksmap\" created deployment.apps \"parksmap\" created service \"parksmap\" created --> Success", "oc get service", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE parksmap ClusterIP <your-cluster-IP> <123.456.789> 8080/TCP 8m29s", "oc create route edge parksmap --service=parksmap", "route.route.openshift.io/parksmap created", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None", "oc get pods", "NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 77s", "oc describe pods", "Name: parksmap-848bd4954b-5pvcc Namespace: user-getting-started Priority: 0 Node: ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c/10.0.128.4 Start Time: Sun, 13 Feb 2022 14:14:14 -0500 Labels: app=national-parks-app app.kubernetes.io/part-of=national-parks-app component=parksmap deployment=parksmap pod-template-hash=848bd4954b role=frontend Annotations: k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] openshift.io/generated-by: OpenShiftNewApp openshift.io/scc: restricted Status: Running IP: 10.131.0.14 IPs: IP: 10.131.0.14 Controlled By: ReplicaSet/parksmap-848bd4954b Containers: parksmap: Container ID: cri-o://4b2625d4f61861e33cc95ad6d455915ea8ff6b75e17650538cc33c1e3e26aeb8 Image: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Image ID: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 13 Feb 2022 14:14:25 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f844 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-6f844: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s 
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 46s default-scheduler Successfully assigned user-getting-started/parksmap-848bd4954b-5pvcc to ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c Normal AddedInterface 44s multus Add eth0 [10.131.0.14/23] from openshift-sdn Normal Pulling 44s kubelet Pulling image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" Normal Pulled 35s kubelet Successfully pulled image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" in 9.49243308s Normal Created 35s kubelet Created container parksmap Normal Started 35s kubelet Started container parksmap", "oc scale --current-replicas=1 --replicas=2 deployment/parksmap", "deployment.apps/parksmap scaled", "oc get pods", "NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 7m39s parksmap-5f9579955-8tgft 1/1 Running 0 24s", "oc scale --current-replicas=2 --replicas=1 deployment/parksmap", "oc new-app python~https://github.com/openshift-roadshow/nationalparks-py.git --name nationalparks -l 'app=national-parks-app,component=nationalparks,role=backend,app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=python' --allow-missing-images=true", "--> Found image 0406f6c (13 days old) in image stream \"openshift/python\" under tag \"3.9-ubi9\" for \"python\" Python 3.9 ---------- Python 3.9 available as container is a base platform for building and running various Python 3.9 applications and frameworks. Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. 
Tags: builder, python, python39, python-39, rh-python39 * A source build using source code from https://github.com/openshift-roadshow/nationalparks-py.git will be created * The resulting image will be pushed to image stream tag \"nationalparks:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources with label app=national-parks-app,app.kubernetes.io/name=python,app.kubernetes.io/part-of=national-parks-app,component=nationalparks,role=backend imagestream.image.openshift.io \"nationalparks\" created buildconfig.build.openshift.io \"nationalparks\" created deployment.apps \"nationalparks\" created service \"nationalparks\" created --> Success", "oc create route edge nationalparks --service=nationalparks", "route.route.openshift.io/parksmap created", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None", "oc new-app quay.io/centos7/mongodb-36-centos7:master --name mongodb-nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_ADMIN_PASSWORD=mongodb -l 'app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=mongodb'", "--> Found container image dc18f52 (3 years old) from quay.io for \"quay.io/centos7/mongodb-36-centos7:master\" MongoDB 3.6 ----------- MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. This container image contains programs to run mongod server. Tags: database, mongodb, rh-mongodb36 * An image stream tag will be created as \"mongodb-nationalparks:master\" that will track this image --> Creating resources with label app.kubernetes.io/name=mongodb,app.kubernetes.io/part-of=national-parks-app imagestream.image.openshift.io \"mongodb-nationalparks\" created deployment.apps \"mongodb-nationalparks\" created service \"mongodb-nationalparks\" created --> Success", "oc create secret generic nationalparks-mongodb-parameters --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks --from-literal=MONGODB_USER=mongodb --from-literal=MONGODB_PASSWORD=mongodb --from-literal=MONGODB_DATABASE=mongodb --from-literal=MONGODB_ADMIN_PASSWORD=mongodb", "secret/nationalparks-mongodb-parameters created", "oc set env --from=secret/nationalparks-mongodb-parameters deploy/nationalparks", "deployment.apps/nationalparks updated", "oc rollout status deployment nationalparks", "deployment \"nationalparks\" successfully rolled out", "oc rollout status deployment mongodb-nationalparks", "deployment \"mongodb-nationalparks\" successfully rolled out", "oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/load", "\"Items inserted in database: 2893\"", "oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/all", ", {\"id\": \"Great Zimbabwe\", \"latitude\": \"-20.2674635\", \"longitude\": \"30.9337986\", \"name\": \"Great Zimbabwe\"}]", "oc label route nationalparks type=parksmap-backend", "route.route.openshift.io/nationalparks labeled", "oc get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap 
parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None" ]
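As an optional wrap-up to the walkthrough above, you can list the components labeled app=national-parks-app and then remove the example project when you are finished. This is a minimal sketch; the label and project name are the ones used in the preceding commands:
oc get deployments,services,routes -l app=national-parks-app
oc delete project user-getting-started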
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/getting_started/index
Chapter 3. Providing feedback on OpenShift Container Platform documentation
Chapter 3. Providing feedback on OpenShift Container Platform documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click one of the following links: To create a Jira issue for OpenShift Container Platform To create a Jira issue for OpenShift Virtualization Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Click Create to create the issue.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/about/providing-feedback-on-red-hat-documentation
Chapter 1. Support policy for Cryostat
Chapter 1. Support policy for Cryostat Red Hat supports a major version of Cryostat for a minimum of 6 months. Red Hat bases this period on the date that the product is released on the Red Hat Customer Portal. You can install and deploy Cryostat on Red Hat OpenShift Container Platform 4.8 or a later version that runs on an x86_64 architecture. Additional resources For more information about the Cryostat life cycle policy, see Red Hat build of Cryostat on the Red Hat OpenShift Container Platform Life Cycle Policy web page.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.1/cryostat-support-policy_cryostat
Appendix B. iSCSI Disks
Appendix B. iSCSI Disks Internet Small Computer System Interface (iSCSI) is a protocol that allows computers to communicate with storage devices by SCSI requests and responses carried over TCP/IP. Because iSCSI is based on the standard SCSI protocols, it uses some terminology from SCSI. The device on the SCSI bus to which requests get sent, and which answers these requests, is known as the target, and the device issuing requests is known as the initiator . In other words, an iSCSI disk is a target and the iSCSI software equivalent of a SCSI controller or SCSI Host Bus Adapter (HBA) is called an initiator. This appendix covers only Linux as an iSCSI initiator: how Linux uses iSCSI disks, but not how Linux hosts iSCSI disks. Linux has a software iSCSI initiator in the kernel that takes the place and form of a SCSI HBA driver and therefore allows Linux to use iSCSI disks. However, as iSCSI is a fully network-based protocol, iSCSI initiator support requires more than just the ability to send SCSI packets over the network. Before Linux can use an iSCSI target, Linux must find the target on the network and make a connection to it. In some cases, Linux must send authentication information to gain access to the target. Linux must also detect any failure of the network connection and must establish a new connection, including logging in again if necessary. The discovery, connection, and logging in are handled in user space by the iscsiadm utility, while errors are handled, also in user space, by the iscsid utility. Both iscsiadm and iscsid are part of the iscsi-initiator-utils package under Red Hat Enterprise Linux. B.1. iSCSI Disks in Anaconda The Anaconda installation program can discover and log in to iSCSI disks in two ways: When Anaconda starts, it checks if the BIOS or add-on boot ROMs of the system support iSCSI Boot Firmware Table (iBFT), a BIOS extension for systems which can boot from iSCSI. If the BIOS supports iBFT, Anaconda will read the iSCSI target information for the configured boot disk from the BIOS and log in to this target, making it available as an installation target. Important To connect automatically to an iSCSI target, a network device for accessing the target needs to be activated. The recommended way to do so is to use the ip=ibft boot option. You can discover and add iSCSI targets manually in the graphical user interface in anaconda . From the main menu, the Installation Summary screen, click the Installation Destination option. Then click the Add a disk button in the Specialized & Network Disks section of the screen. A tabbed list of available storage devices appears. In the lower right corner, click the Add iSCSI Target button and proceed with the discovery process. See Section 8.15.1, "The Storage Devices Selection Screen" for more information. Important Restriction: The /boot partition cannot be placed on iSCSI targets that have been manually added using this method - an iSCSI target containing a /boot partition must be configured for use with iBFT. However, in instances where the installed system is expected to boot from iSCSI with iBFT configuration provided by a method other than firmware iBFT, for example using iPXE, the /boot partition restriction can be disabled using the inst.nonibftiscsiboot installer boot option. While Anaconda uses iscsiadm to find and log into iSCSI targets, iscsiadm automatically stores any information about these targets in the iscsiadm iSCSI database.
Anaconda then copies this database to the installed system and marks any iSCSI targets not used for / so that the system will automatically log in to them when it starts. If / is placed on an iSCSI target, initrd will log into this target and Anaconda does not include this target in start up scripts to avoid multiple attempts to log into the same target.
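Outside of the installation program, the same user-space discovery and login flow can be exercised directly with iscsiadm on an installed system. The commands below are a minimal sketch with placeholder values; the portal address and target IQN are illustrative and not taken from this appendix:
iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
iscsiadm -m node -T iqn.2014-07.com.example:storage.target1 -p 192.168.1.10:3260 --login
iscsiadm -m session
The first command asks the portal for the targets it offers, the second logs in to one of them so the kernel creates the corresponding SCSI disk device, and the third lists the active iSCSI sessions.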
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/appe-iscsi-disks
Chapter 19. Using the partition reassignment tool
Chapter 19. Using the partition reassignment tool When scaling a Kafka cluster, you may need to add or remove brokers and update the distribution of partitions or the replication factor of topics. To update partitions and topics, you can use the kafka-reassign-partitions.sh tool. Neither the AMQ Streams Cruise Control integration nor the Topic Operator support changing the replication factor of a topic. However, you can change the replication factor of a topic using the kafka-reassign-partitions.sh tool. The tool can also be used to reassign partitions and balance the distribution of partitions across brokers to improve performance. However, it is recommended to use Cruise Control for automated partition reassignments and cluster rebalancing . Cruise Control can move topics from one broker to another without any downtime, and it is the most efficient way to reassign partitions. It is recommended to run the kafka-reassign-partitions.sh tool as a separate interactive pod rather than within the broker container. Running the Kafka bin/ scripts within the broker container may cause a JVM to start with the same settings as the Kafka broker, which can potentially cause disruptions. By running the kafka-reassign-partitions.sh tool in a separate pod, you can avoid this issue. Running a pod with the -ti option creates an interactive pod with a terminal for running shell commands inside the pod. Running an interactive pod with a terminal oc run helper-pod -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 --rm=true --restart=Never -- bash 19.1. Partition reassignment tool overview The partition reassignment tool provides the following capabilities for managing Kafka partitions and brokers: Redistributing partition replicas Scale your cluster up and down by adding or removing brokers, and move Kafka partitions from heavily loaded brokers to under-utilized brokers. To do this, you must create a partition reassignment plan that identifies which topics and partitions to move and where to move them. Cruise Control is recommended for this type of operation as it automates the cluster rebalancing process . Scaling topic replication factor up and down Increase or decrease the replication factor of your Kafka topics. To do this, you must create a partition reassignment plan that identifies the existing replication assignment across partitions and an updated assignment with the replication factor changes. Changing the preferred leader Change the preferred leader of a Kafka partition. This can be useful if the current preferred leader is unavailable or if you want to redistribute load across the brokers in the cluster. To do this, you must create a partition reassignment plan that specifies the new preferred leader for each partition by changing the order of replicas. Changing the log directories to use a specific JBOD volume Change the log directories of your Kafka brokers to use a specific JBOD volume. This can be useful if you want to move your Kafka data to a different disk or storage device. To do this, you must create a partition reassignment plan that specifies the new log directory for each topic. 19.1.1. Generating a partition reassignment plan The partition reassignment tool ( kafka-reassign-partitions.sh ) works by generating a partition assignment plan that specifies which partitions should be moved from their current broker to a new broker. If you are satisfied with the plan, you can execute it. 
The tool then does the following: Migrates the partition data to the new broker Updates the metadata on the Kafka brokers to reflect the new partition assignments Triggers a rolling restart of the Kafka brokers to ensure that the new assignments take effect The partition reassignment tool has three different modes: --generate Takes a set of topics and brokers and generates a reassignment JSON file which will result in the partitions of those topics being assigned to those brokers. Because this operates on whole topics, it cannot be used when you only want to reassign some partitions of some topics. --execute Takes a reassignment JSON file and applies it to the partitions and brokers in the cluster. Brokers that gain partitions as a result become followers of the partition leader. For a given partition, once the new broker has caught up and joined the ISR (in-sync replicas) the old broker will stop being a follower and will delete its replica. --verify Using the same reassignment JSON file as the --execute step, --verify checks whether all the partitions in the file have been moved to their intended brokers. If the reassignment is complete, --verify also removes any traffic throttles ( --throttle ) that are in effect. Unless removed, throttles will continue to affect the cluster even after the reassignment has finished. It is only possible to have one reassignment running in a cluster at any given time, and it is not possible to cancel a running reassignment. If you must cancel a reassignment, wait for it to complete and then perform another reassignment to revert the effects of the first reassignment. The kafka-reassign-partitions.sh will print the reassignment JSON for this reversion as part of its output. Very large reassignments should be broken down into a number of smaller reassignments in case there is a need to stop in-progress reassignment. 19.1.2. Specifying topics in a partition reassignment JSON file The kafka-reassign-partitions.sh tool uses a reassignment JSON file that specifies the topics to reassign. You can generate a reassignment JSON file or create a file manually if you want to move specific partitions. A basic reassignment JSON file has the structure presented in the following example, which describes three partitions belonging to two Kafka topics. Each partition is reassigned to a new set of replicas, which are identified by their broker IDs. The version , topic , partition , and replicas properties are all required. Example partition reassignment JSON file structure 1 The version of the reassignment JSON file format. Currently, only version 1 is supported, so this should always be 1. 2 An array that specifies the partitions to be reassigned. 3 The name of the Kafka topic that the partition belongs to. 4 The ID of the partition being reassigned. 5 An ordered array of the IDs of the brokers that should be assigned as replicas for this partition. The first broker in the list is the leader replica. Note Partitions not included in the JSON are not changed. If you specify only topics using a topics array, the partition reassignment tool reassigns all the partitions belonging to the specified topics. Example reassignment JSON file structure for reassigning all partitions for a topic 19.1.3. Reassigning partitions between JBOD volumes When using JBOD storage in your Kafka cluster, you can reassign the partitions between specific volumes and their log directories (each volume has a single log directory). 
To reassign a partition to a specific volume, add log_dirs values for each partition in the reassignment JSON file. Each log_dirs array contains the same number of entries as the replicas array, since each replica should be assigned to a specific log directory. The log_dirs array contains either an absolute path to a log directory or the special value any . The any value indicates that Kafka can choose any available log directory for that replica, which can be useful when reassigning partitions between JBOD volumes. Example reassignment JSON file structure with log directories 19.1.4. Throttling partition reassignment Partition reassignment can be a slow process because it involves transferring large amounts of data between brokers. To avoid a detrimental impact on clients, you can throttle the reassignment process. Use the --throttle parameter with the kafka-reassign-partitions.sh tool to throttle a reassignment. You specify a maximum threshold in bytes per second for the movement of partitions between brokers. For example, --throttle 5000000 sets a maximum threshold for moving partitions of 5 MBps (5,000,000 bytes per second). Throttling might cause the reassignment to take longer to complete. If the throttle is too low, the newly assigned brokers will not be able to keep up with records being published and the reassignment will never complete. If the throttle is too high, clients will be impacted. For example, for producers, this could manifest as higher than normal latency waiting for acknowledgment. For consumers, this could manifest as a drop in throughput caused by higher latency between polls. 19.2. Generating a reassignment JSON file to reassign partitions Generate a reassignment JSON file with the kafka-reassign-partitions.sh tool to reassign partitions after scaling a Kafka cluster. Adding or removing brokers does not automatically redistribute the existing partitions. To balance the partition distribution and take full advantage of the new brokers, you can reassign the partitions using the kafka-reassign-partitions.sh tool. You run the tool from an interactive pod container connected to the Kafka cluster. The following procedure describes a secure reassignment process that uses mTLS. You'll need a Kafka cluster that uses TLS encryption and mTLS authentication. You'll need the following to establish a connection: The cluster CA certificate and password generated by the Cluster Operator when the Kafka cluster is created The user CA certificate and password generated by the User Operator when a user is created for client access to the Kafka cluster In this procedure, the CA certificates and corresponding passwords are extracted from the cluster and user secrets that contain them in PKCS #12 ( .p12 and .password ) format. The passwords allow access to the .p12 stores that contain the certificates. You use the .p12 stores to specify a truststore and keystore to authenticate connection to the Kafka cluster. Prerequisites You have a running Cluster Operator. You have a running Kafka cluster based on a Kafka resource configured with internal TLS encryption and mTLS authentication. Kafka configuration with TLS encryption and mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... listeners: # ... - name: tls port: 9093 type: internal tls: true 1 authentication: type: tls 2 # ... 1 Enables TLS encryption for the internal listener. 2 Listener authentication mechanism specified as mutual tls .
The running Kafka cluster contains a set of topics and partitions to reassign. Example topic configuration for my-topic apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 3 config: retention.ms: 7200000 segment.bytes: 1073741824 # ... You have a KafkaUser configured with ACL rules that specify permission to produce and consume topics from the Kafka brokers. Example Kafka user configuration with ACL rules to allow operations on my-topic and my-cluster apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: 1 type: tls authorization: type: simple 2 acls: # access to the topic - resource: type: topic name: my-topic operations: - Create - Describe - Read - AlterConfigs host: "*" # access to the cluster - resource: type: cluster operations: - Alter - AlterConfigs host: "*" # ... # ... 1 User authentication mechanism defined as mutual tls . 2 Simple authorization and accompanying list of ACL rules. Procedure Extract the cluster CA certificate and password from the <cluster_name> -cluster-ca-cert secret of the Kafka cluster. oc get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12 oc get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password Replace <cluster_name> with the name of the Kafka cluster. When you deploy Kafka using the Kafka resource, a secret with the cluster CA certificate is created with the Kafka cluster name ( <cluster_name> -cluster-ca-cert ). For example, my-cluster-cluster-ca-cert . Run a new interactive pod container using the AMQ Streams Kafka image to connect to a running Kafka broker. oc run --restart=Never --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 <interactive_pod_name> -- /bin/sh -c "sleep 3600" Replace <interactive_pod_name> with the name of the pod. Copy the cluster CA certificate to the interactive pod container. oc cp ca.p12 <interactive_pod_name> :/tmp Extract the user CA certificate and password from the secret of the Kafka user that has permission to access the Kafka brokers. oc get secret <kafka_user> -o jsonpath='{.data.user\.p12}' | base64 -d > user.p12 oc get secret <kafka_user> -o jsonpath='{.data.user\.password}' | base64 -d > user.password Replace <kafka_user> with the name of the Kafka user. When you create a Kafka user using the KafkaUser resource, a secret with the user CA certificate is created with the Kafka user name. For example, my-user . Copy the user CA certificate to the interactive pod container. oc cp user.p12 <interactive_pod_name> :/tmp The CA certificates allow the interactive pod container to connect to the Kafka broker using TLS. Create a config.properties file to specify the truststore and keystore used to authenticate connection to the Kafka cluster. Use the certificates and passwords you extracted in the steps. bootstrap.servers= <kafka_cluster_name> -kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password= <truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password= <keystore_password> 6 1 The bootstrap server address to connect to the Kafka cluster. Use your own Kafka cluster name to replace <kafka_cluster_name> . 2 The security protocol option when using TLS for encryption. 3 The truststore location contains the public key certificate ( ca.p12 ) for the Kafka cluster. 
4 The password ( ca.password ) for accessing the truststore. 5 The keystore location contains the public key certificate ( user.p12 ) for the Kafka user. 6 The password ( user.password ) for accessing the keystore. Copy the config.properties file to the interactive pod container. oc cp config.properties <interactive_pod_name> :/tmp/config.properties Prepare a JSON file named topics.json that specifies the topics to move. Specify topic names as a comma-separated list. Example JSON file to reassign all the partitions of my-topic { "version": 1, "topics": [ { "topic": "my-topic"} ] } You can also use this file to change the replication factor of a topic . Copy the topics.json file to the interactive pod container. oc cp topics.json <interactive_pod_name> :/tmp/topics.json Start a shell process in the interactive pod container. oc exec -n <namespace> -ti <interactive_pod_name> /bin/bash Replace <namespace> with the OpenShift namespace where the pod is running. Use the kafka-reassign-partitions.sh command to generate the reassignment JSON. Example command to move the partitions of my-topic to specified brokers bin/kafka-reassign-partitions.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --topics-to-move-json-file /tmp/topics.json \ --broker-list 0,1,2,3,4 \ --generate Additional resources Configuring Kafka Section 9.4, "Configuring Kafka topics" Section 10.1, "Configuring Kafka users" 19.3. Reassigning partitions after adding brokers Use a reassignment file generated by the kafka-reassign-partitions.sh tool to reassign partitions after increasing the number of brokers in a Kafka cluster. The reassignment file should describe how partitions are reassigned to brokers in the enlarged Kafka cluster. You apply the reassignment specified in the file to the brokers and then verify the new partition assignments. This procedure describes a secure scaling process that uses TLS. You'll need a Kafka cluster that uses TLS encryption and mTLS authentication. The kafka-reassign-partitions.sh tool can be used to reassign partitions within a Kafka cluster, regardless of whether you are managing all nodes through the cluster or using the node pools preview to manage groups of nodes within the cluster. Note Though you can use the kafka-reassign-partitions.sh tool, Cruise Control is recommended for automated partition reassignments and cluster rebalancing . Cruise Control can move topics from one broker to another without any downtime, and it is the most efficient way to reassign partitions. Prerequisites You have a running Kafka cluster based on a Kafka resource configured with internal TLS encryption and mTLS authentication. You have generated a reassignment JSON file named reassignment.json . You are running an interactive pod container that is connected to the running Kafka broker. You are connected as a KafkaUser configured with ACL rules that specify permission to manage the Kafka cluster and its topics. Procedure Add as many new brokers as you need by increasing the Kafka.spec.kafka.replicas configuration option. Verify that the new broker pods have started. If you haven't done so, run an interactive pod container to generate a reassignment JSON file named reassignment.json . Copy the reassignment.json file to the interactive pod container. oc cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json Replace <interactive_pod_name> with the name of the pod. Start a shell process in the interactive pod container. 
oc exec -n <namespace> -ti <interactive_pod_name> /bin/bash Replace <namespace> with the OpenShift namespace where the pod is running. Run the partition reassignment using the kafka-reassign-partitions.sh script from the interactive pod container. bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --reassignment-json-file /tmp/reassignment.json \ --execute Replace <cluster_name> with the name of your Kafka cluster. For example, my-cluster-kafka-bootstrap:9093 If you are going to throttle replication, you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example: bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --reassignment-json-file /tmp/reassignment.json \ --throttle 5000000 \ --execute This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. You should save this to a local file (not a file in the pod) in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file. If you need to change the throttle during reassignment, you can use the same command with a different throttled rate. For example: bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --reassignment-json-file /tmp/reassignment.json \ --throttle 10000000 \ --execute Verify that the reassignment has completed using the kafka-reassign-partitions.sh command line tool from any of the broker pods. This is the same command as the step, but with the --verify option instead of the --execute option. bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --reassignment-json-file /tmp/reassignment.json \ --verify The reassignment has finished when the --verify command reports that each of the partitions being moved has completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignment to their original brokers. 19.4. Reassigning partitions before removing brokers Use a reassignment file generated by the kafka-reassign-partitions.sh tool to reassign partitions before decreasing the number of brokers in a Kafka cluster. The reassignment file must describe how partitions are reassigned to the remaining brokers in the Kafka cluster. You apply the reassignment specified in the file to the brokers and then verify the new partition assignments. Brokers in the highest numbered pods are removed first. This procedure describes a secure scaling process that uses TLS. You'll need a Kafka cluster that uses TLS encryption and mTLS authentication. The kafka-reassign-partitions.sh tool can be used to reassign partitions within a Kafka cluster, regardless of whether you are managing all nodes through the cluster or using the node pools preview to manage groups of nodes within the cluster. Note Though you can use the kafka-reassign-partitions.sh tool, Cruise Control is recommended for automated partition reassignments and cluster rebalancing . Cruise Control can move topics from one broker to another without any downtime, and it is the most efficient way to reassign partitions. 
Prerequisites You have a running Kafka cluster based on a Kafka resource configured with internal TLS encryption and mTLS authentication. You have generated a reassignment JSON file named reassignment.json . You are running an interactive pod container that is connected to the running Kafka broker. You are connected as a KafkaUser configured with ACL rules that specify permission to manage the Kafka cluster and its topics. Procedure If you haven't done so, run an interactive pod container to generate a reassignment JSON file named reassignment.json . Copy the reassignment.json file to the interactive pod container. oc cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json Replace <interactive_pod_name> with the name of the pod. Start a shell process in the interactive pod container. oc exec -n <namespace> -ti <interactive_pod_name> /bin/bash Replace <namespace> with the OpenShift namespace where the pod is running. Run the partition reassignment using the kafka-reassign-partitions.sh script from the interactive pod container. bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --reassignment-json-file /tmp/reassignment.json \ --execute Replace <cluster_name> with the name of your Kafka cluster. For example, my-cluster-kafka-bootstrap:9093 If you are going to throttle replication, you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example: bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --reassignment-json-file /tmp/reassignment.json \ --throttle 5000000 \ --execute This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. You should save this to a local file (not a file in the pod) in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file. If you need to change the throttle during reassignment, you can use the same command with a different throttled rate. For example: bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --reassignment-json-file /tmp/reassignment.json \ --throttle 10000000 \ --execute Verify that the reassignment has completed using the kafka-reassign-partitions.sh command line tool from any of the broker pods. This is the same command as the step, but with the --verify option instead of the --execute option. bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --reassignment-json-file /tmp/reassignment.json \ --verify The reassignment has finished when the --verify command reports that each of the partitions being moved has completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignment to their original brokers. When all the partition reassignments have finished, the brokers being removed should not have responsibility for any of the partitions in the cluster. You can verify this by checking that the broker's data log directory does not contain any live partition logs. 
If the log directory on the broker contains a directory that does not match the extended regular expression \.[a-z0-9] -deleteUSD , the broker still has live partitions and should not be stopped. You can check this by executing the command: oc exec my-cluster-kafka-0 -c kafka -it -- \ /bin/bash -c \ "ls -l /var/lib/kafka/kafka-log_<n>_ | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\.[a-z0-9]+-deleteUSD'" where n is the number of the pods being deleted. If the above command prints any output then the broker still has live partitions. In this case, either the reassignment has not finished or the reassignment JSON file was incorrect. When you have confirmed that the broker has no live partitions, you can edit the Kafka.spec.kafka.replicas property of your Kafka resource to reduce the number of brokers. 19.5. Changing the replication factor of topics To change the replication factor of topics in a Kafka cluster, use the kafka-reassign-partitions.sh tool. This can be done by running the tool from an interactive pod container that is connected to the Kafka cluster, and using a reassignment file to describe how the topic replicas should be changed. This procedure describes a secure process that uses TLS. You'll need a Kafka cluster that uses TLS encryption and mTLS authentication. Prerequisites You have a running Kafka cluster based on a Kafka resource configured with internal TLS encryption and mTLS authentication. You are running an interactive pod container that is connected to the running Kafka broker. You have generated a reassignment JSON file named reassignment.json . You are connected as a KafkaUser configured with ACL rules that specify permission to manage the Kafka cluster and its topics. See Generating reassignment JSON files . In this procedure, a topic called my-topic has 4 replicas and we want to reduce it to 3. A JSON file named topics.json specifies the topic, and was used to generate the reassignment.json file. Example JSON file specifies my-topic { "version": 1, "topics": [ { "topic": "my-topic"} ] } Procedure If you haven't done so, run an interactive pod container to generate a reassignment JSON file named reassignment.json . Example reassignment JSON file showing the current and proposed replica assignment Current partition replica assignment {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[3,4,2,0],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":1,"replicas":[0,2,3,1],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":2,"replicas":[1,3,0,4],"log_dirs":["any","any","any","any"]}]} Proposed partition reassignment configuration {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[0,1,2,3],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":1,"replicas":[1,2,3,4],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":2,"replicas":[2,3,4,0],"log_dirs":["any","any","any","any"]}]} Save a copy of this file locally in case you need to revert the changes later on. Edit the reassignment.json to remove a replica from each partition. 
For example, use jq to remove the last replica in the list for each partition of the topic. Write the output to a temporary file first; redirecting jq output directly back to reassignment.json would truncate the file before jq reads it: Removing the last topic replica for each partition jq '.partitions[].replicas |= del(.[-1])' reassignment.json > reassignment-new.json && mv reassignment-new.json reassignment.json Example reassignment file showing the updated replicas {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[0,1,2],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":1,"replicas":[1,2,3],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":2,"replicas":[2,3,4],"log_dirs":["any","any","any","any"]}]} Copy the reassignment.json file to the interactive pod container. oc cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json Replace <interactive_pod_name> with the name of the pod. Start a shell process in the interactive pod container. oc exec -n <namespace> -ti <interactive_pod_name> /bin/bash Replace <namespace> with the OpenShift namespace where the pod is running. Make the topic replica change using the kafka-reassign-partitions.sh script from the interactive pod container. bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --reassignment-json-file /tmp/reassignment.json \ --execute Note Removing replicas from a broker does not require any inter-broker data movement, so there is no need to throttle replication. If you are adding replicas, then you may want to change the throttle rate. Verify that the change to the topic replicas has completed using the kafka-reassign-partitions.sh command line tool from any of the broker pods. This is the same command as the previous step, but with the --verify option instead of the --execute option. bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --reassignment-json-file /tmp/reassignment.json \ --verify The reassignment has finished when the --verify command reports that each of the partitions being moved has completed successfully. This final --verify will also have the effect of removing any reassignment throttles. Run the bin/kafka-topics.sh command with the --describe option to see the results of the change to the topics. bin/kafka-topics.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 \ --command-config /tmp/config.properties \ --describe Results of reducing the number of replicas for a topic my-topic Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2 my-topic Partition: 1 Leader: 2 Replicas: 1,2,3 Isr: 1,2,3 my-topic Partition: 2 Leader: 3 Replicas: 2,3,4 Isr: 2,3,4
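The overview in Section 19.1 also mentions changing the preferred leader of a partition by reordering its replicas, but no example file is shown for that case. The following is a minimal, hypothetical sketch of a reassignment JSON that keeps the same three replicas for partition 0 of my-topic and simply moves broker 2 to the front of the list so that it becomes the preferred leader:
{ "version": 1, "partitions": [ { "topic": "my-topic", "partition": 0, "replicas": [2, 0, 1] } ] }
Applying a file like this with --execute does not move any data, because the replica set is unchanged; broker 2 only takes over as leader once a preferred leader election runs, for example when automatic leader rebalancing is enabled or a preferred election is triggered with the kafka-leader-election.sh tool.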
[ "run helper-pod -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 --rm=true --restart=Never -- bash", "{ \"version\": 1, 1 \"partitions\": [ 2 { \"topic\": \"example-topic-1\", 3 \"partition\": 0, 4 \"replicas\": [1, 2, 3] 5 }, { \"topic\": \"example-topic-1\", \"partition\": 1, \"replicas\": [2, 3, 4] }, { \"topic\": \"example-topic-2\", \"partition\": 0, \"replicas\": [3, 4, 5] } ] }", "{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }", "{ \"version\": 1, \"partitions\": [ { \"topic\": \"example-topic-1\", \"partition\": 0, \"replicas\": [1, 2, 3] \"log_dirs\": [\"/var/lib/kafka/data-0/kafka-log1\", \"any\", \"/var/lib/kafka/data-1/kafka-log2\"] }, { \"topic\": \"example-topic-1\", \"partition\": 1, \"replicas\": [2, 3, 4] \"log_dirs\": [\"any\", \"/var/lib/kafka/data-2/kafka-log3\", \"/var/lib/kafka/data-3/kafka-log4\"] }, { \"topic\": \"example-topic-2\", \"partition\": 0, \"replicas\": [3, 4, 5] \"log_dirs\": [\"/var/lib/kafka/data-4/kafka-log5\", \"any\", \"/var/lib/kafka/data-5/kafka-log6\"] } ] }", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: # - name: tls port: 9093 type: internal tls: true 1 authentication: type: tls 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 3 config: retention.ms: 7200000 segment.bytes: 1073741824 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: 1 type: tls authorization: type: simple 2 acls: # access to the topic - resource: type: topic name: my-topic operations: - Create - Describe - Read - AlterConfigs host: \"*\" # access to the cluster - resource: type: cluster operations: - Alter - AlterConfigs host: \"*\" # #", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.p12}' | base64 -d > ca.p12", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.password}' | base64 -d > ca.password", "run --restart=Never --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 <interactive_pod_name> -- /bin/sh -c \"sleep 3600\"", "cp ca.p12 <interactive_pod_name> :/tmp", "get secret <kafka_user> -o jsonpath='{.data.user\\.p12}' | base64 -d > user.p12", "get secret <kafka_user> -o jsonpath='{.data.user\\.password}' | base64 -d > user.password", "cp user.p12 <interactive_pod_name> :/tmp", "bootstrap.servers= <kafka_cluster_name> -kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password= <truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password= <keystore_password> 6", "cp config.properties <interactive_pod_name> :/tmp/config.properties", "{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }", "cp topics.json <interactive_pod_name> :/tmp/topics.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/config.properties --topics-to-move-json-file /tmp/topics.json --broker-list 0,1,2,3,4 --generate", "cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute", "bin/kafka-reassign-partitions.sh 
--bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 5000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 10000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --verify", "cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 5000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 10000000 --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --verify", "exec my-cluster-kafka-0 -c kafka -it -- /bin/bash -c \"ls -l /var/lib/kafka/kafka-log_<n>_ | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\\.[a-z0-9]+-deleteUSD'\"", "{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }", "Current partition replica assignment {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[3,4,2,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[0,2,3,1],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[1,3,0,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]} Proposed partition reassignment configuration {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,4,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]}", "jq '.partitions[].replicas |= del(.[-1])' reassignment.json > reassignment.json", "{\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]}", "cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json", "exec -n <namespace> -ti <interactive_pod_name> /bin/bash", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute", "bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --verify", "bin/kafka-topics.sh --bootstrap-server 
<cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --describe", "my-topic Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2 my-topic Partition: 1 Leader: 2 Replicas: 1,2,3 Isr: 1,2,3 my-topic Partition: 2 Leader: 3 Replicas: 2,3,4 Isr: 2,3,4" ]
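The commands above follow a generate, execute, verify cycle. As a convenience they can be collected into one small script run from inside the interactive pod. This is only a sketch: it assumes the cluster is named my-cluster, that /tmp/config.properties and /tmp/topics.json have already been copied into the pod as in the earlier steps, and that the --generate output uses the two-line "Proposed partition reassignment configuration" format shown above.

#!/bin/bash
# Sketch of the generate/execute/verify cycle, run inside the interactive pod.
# Assumptions: cluster name "my-cluster", broker IDs 0-4, and /tmp/config.properties
# plus /tmp/topics.json already copied into the pod as shown earlier.
set -euo pipefail

BOOTSTRAP="my-cluster-kafka-bootstrap:9093"
CONFIG="/tmp/config.properties"

# Generate a reassignment proposal for the topics listed in topics.json.
bin/kafka-reassign-partitions.sh --bootstrap-server "$BOOTSTRAP" \
  --command-config "$CONFIG" \
  --topics-to-move-json-file /tmp/topics.json \
  --broker-list 0,1,2,3,4 --generate > /tmp/proposal.txt

# Keep only the JSON that follows the "Proposed partition reassignment
# configuration" header; review it before applying.
grep -A1 'Proposed partition reassignment configuration' /tmp/proposal.txt | tail -n 1 > /tmp/reassignment.json

# Apply the reassignment (add --throttle <bytes_per_sec> for large moves).
bin/kafka-reassign-partitions.sh --bootstrap-server "$BOOTSTRAP" \
  --command-config "$CONFIG" \
  --reassignment-json-file /tmp/reassignment.json --execute

# Re-run with --verify until every partition reports the reassignment as complete.
bin/kafka-reassign-partitions.sh --bootstrap-server "$BOOTSTRAP" \
  --command-config "$CONFIG" \
  --reassignment-json-file /tmp/reassignment.json --verify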
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/assembly-reassign-tool-str
Deploy Red Hat Quay - High Availability
Deploy Red Hat Quay - High Availability Red Hat Quay 3.10 Deploy Red Hat Quay HA Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/deploy_red_hat_quay_-_high_availability/index
Chapter 2. AdminPolicyBasedExternalRoute [k8s.ovn.org/v1]
Chapter 2. AdminPolicyBasedExternalRoute [k8s.ovn.org/v1] Description AdminPolicyBasedExternalRoute is a CRD allowing the cluster administrators to configure policies for external gateway IPs to be applied to all the pods contained in selected namespaces. Egress traffic from the pods that belong to the selected namespaces to outside the cluster is routed through these external gateway IPs. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object AdminPolicyBasedExternalRouteSpec defines the desired state of AdminPolicyBasedExternalRoute status object AdminPolicyBasedRouteStatus contains the observed status of the AdminPolicyBased route types. 2.1.1. .spec Description AdminPolicyBasedExternalRouteSpec defines the desired state of AdminPolicyBasedExternalRoute Type object Required from nextHops Property Type Description from object From defines the selectors that will determine the target namespaces to this CR. nextHops object NextHops defines two types of hops: Static and Dynamic. Each hop defines at least one external gateway IP. 2.1.2. .spec.from Description From defines the selectors that will determine the target namespaces to this CR. Type object Required namespaceSelector Property Type Description namespaceSelector object NamespaceSelector defines a selector to be used to determine which namespaces will be targeted by this CR 2.1.3. .spec.from.namespaceSelector Description NamespaceSelector defines a selector to be used to determine which namespaces will be targeted by this CR Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.4. .spec.from.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.5. .spec.from.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.6. .spec.nextHops Description NextHops defines two types of hops: Static and Dynamic. Each hop defines at least one external gateway IP. Type object Property Type Description dynamic array DynamicHops defines a slices of DynamicHop. This field is optional. dynamic[] object DynamicHop defines the configuration for a dynamic external gateway interface. These interfaces are wrapped around a pod object that resides inside the cluster. The field NetworkAttachmentName captures the name of the multus network name to use when retrieving the gateway IP to use. The PodSelector and the NamespaceSelector are mandatory fields. static array StaticHops defines a slice of StaticHop. This field is optional. static[] object StaticHop defines the configuration of a static IP that acts as an external Gateway Interface. IP field is mandatory. 2.1.7. .spec.nextHops.dynamic Description DynamicHops defines a slices of DynamicHop. This field is optional. Type array 2.1.8. .spec.nextHops.dynamic[] Description DynamicHop defines the configuration for a dynamic external gateway interface. These interfaces are wrapped around a pod object that resides inside the cluster. The field NetworkAttachmentName captures the name of the multus network name to use when retrieving the gateway IP to use. The PodSelector and the NamespaceSelector are mandatory fields. Type object Required podSelector Property Type Description bfdEnabled boolean BFDEnabled determines if the interface implements the Bidirectional Forward Detection protocol. Defaults to false. namespaceSelector object NamespaceSelector defines a selector to filter the namespaces where the pod gateways are located. networkAttachmentName string NetworkAttachmentName determines the multus network name to use when retrieving the pod IPs that will be used as the gateway IP. When this field is empty, the logic assumes that the pod is configured with HostNetwork and is using the node's IP as gateway. podSelector object PodSelector defines the selector to filter the pods that are external gateways. 2.1.9. .spec.nextHops.dynamic[].namespaceSelector Description NamespaceSelector defines a selector to filter the namespaces where the pod gateways are located. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.10. .spec.nextHops.dynamic[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.11. .spec.nextHops.dynamic[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. 
operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.12. .spec.nextHops.dynamic[].podSelector Description PodSelector defines the selector to filter the pods that are external gateways. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.13. .spec.nextHops.dynamic[].podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.14. .spec.nextHops.dynamic[].podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.15. .spec.nextHops.static Description StaticHops defines a slice of StaticHop. This field is optional. Type array 2.1.16. .spec.nextHops.static[] Description StaticHop defines the configuration of a static IP that acts as an external Gateway Interface. IP field is mandatory. Type object Required ip Property Type Description bfdEnabled boolean BFDEnabled determines if the interface implements the Bidirectional Forward Detection protocol. Defaults to false. ip string IP defines the static IP to be used for egress traffic. The IP can be either IPv4 or IPv6. 2.1.17. .status Description AdminPolicyBasedRouteStatus contains the observed status of the AdminPolicyBased route types. Type object Required lastTransitionTime messages status Property Type Description lastTransitionTime string Captures the time when the last change was applied. messages array (string) An array of Human-readable messages indicating details about the status of the object. status string A concise indication of whether the AdminPolicyBasedRoute resource is applied with success 2.2. 
API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes DELETE : delete collection of AdminPolicyBasedExternalRoute GET : list objects of kind AdminPolicyBasedExternalRoute POST : create an AdminPolicyBasedExternalRoute /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name} DELETE : delete an AdminPolicyBasedExternalRoute GET : read the specified AdminPolicyBasedExternalRoute PATCH : partially update the specified AdminPolicyBasedExternalRoute PUT : replace the specified AdminPolicyBasedExternalRoute /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name}/status GET : read status of the specified AdminPolicyBasedExternalRoute PATCH : partially update status of the specified AdminPolicyBasedExternalRoute PUT : replace status of the specified AdminPolicyBasedExternalRoute 2.2.1. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of AdminPolicyBasedExternalRoute Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AdminPolicyBasedExternalRoute Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRouteList schema 401 - Unauthorized Empty HTTP method POST Description create an AdminPolicyBasedExternalRoute Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 2.8. 
HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 202 - Accepted AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty 2.2.2. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the AdminPolicyBasedExternalRoute Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an AdminPolicyBasedExternalRoute Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AdminPolicyBasedExternalRoute Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AdminPolicyBasedExternalRoute Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AdminPolicyBasedExternalRoute Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty 2.2.3. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name}/status Table 2.22. 
Global path parameters Parameter Type Description name string name of the AdminPolicyBasedExternalRoute Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified AdminPolicyBasedExternalRoute Table 2.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified AdminPolicyBasedExternalRoute Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified AdminPolicyBasedExternalRoute Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 2.31. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty
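Putting the spec fields described above together, a manifest for this resource might look like the following. This is an illustrative sketch only: the policy name, labels, IP address, and network attachment name are placeholders, while the field names come from the .spec description in this chapter.

apiVersion: k8s.ovn.org/v1
kind: AdminPolicyBasedExternalRoute
metadata:
  name: example-route-policy            # illustrative name
spec:
  from:
    namespaceSelector:                  # selects the namespaces whose egress traffic is rerouted
      matchLabels:
        external-traffic: "true"
  nextHops:
    static:                             # fixed external gateway IPs
      - ip: "172.18.0.8"
        bfdEnabled: false
    dynamic:                            # gateway IPs taken from selected pods
      - podSelector:
          matchLabels:
            app: ext-gateway
        namespaceSelector:
          matchLabels:
            gateway: "true"
        networkAttachmentName: ext-net  # multus network used to read the gateway IP
        bfdEnabled: true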
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_apis/adminpolicybasedexternalroute-k8s-ovn-org-v1
Chapter 8. Endpoints [v1]
Chapter 8. Endpoints [v1] Description Endpoints is a collection of endpoints that implement the actual service. Example: Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata subsets array The set of all endpoints is the union of all subsets. Addresses are placed into subsets according to the IPs they share. A single address with multiple ports, some of which are ready and some of which are not (because they come from different containers) will result in the address being displayed in different subsets for the different ports. No address will appear in both Addresses and NotReadyAddresses in the same subset. Sets of addresses and ports that comprise a service. subsets[] object EndpointSubset is a group of addresses with a common set of ports. The expanded set of endpoints is the Cartesian product of Addresses x Ports. For example, given: { Addresses: [{"ip": "10.10.1.1"}, {"ip": "10.10.2.2"}], Ports: [{"name": "a", "port": 8675}, {"name": "b", "port": 309}] } The resulting set of endpoints can be viewed as: a: [ 10.10.1.1:8675, 10.10.2.2:8675 ], b: [ 10.10.1.1:309, 10.10.2.2:309 ] 8.1.1. .subsets Description The set of all endpoints is the union of all subsets. Addresses are placed into subsets according to the IPs they share. A single address with multiple ports, some of which are ready and some of which are not (because they come from different containers) will result in the address being displayed in different subsets for the different ports. No address will appear in both Addresses and NotReadyAddresses in the same subset. Sets of addresses and ports that comprise a service. Type array 8.1.2. .subsets[] Description EndpointSubset is a group of addresses with a common set of ports. The expanded set of endpoints is the Cartesian product of Addresses x Ports. For example, given: The resulting set of endpoints can be viewed as: Type object Property Type Description addresses array IP addresses which offer the related ports that are marked as ready. These endpoints should be considered safe for load balancers and clients to utilize. addresses[] object EndpointAddress is a tuple that describes single IP address. notReadyAddresses array IP addresses which offer the related ports but are not currently marked as ready because they have not yet finished starting, have recently failed a readiness check, or have recently failed a liveness check. notReadyAddresses[] object EndpointAddress is a tuple that describes single IP address. ports array Port numbers available on the related IP addresses. ports[] object EndpointPort is a tuple that describes a single port. 8.1.3. .subsets[].addresses Description IP addresses which offer the related ports that are marked as ready. 
These endpoints should be considered safe for load balancers and clients to utilize. Type array 8.1.4. .subsets[].addresses[] Description EndpointAddress is a tuple that describes single IP address. Type object Required ip Property Type Description hostname string The Hostname of this endpoint ip string The IP of this endpoint. May not be loopback (127.0.0.0/8 or ::1), link-local (169.254.0.0/16 or fe80::/10), or link-local multicast (224.0.0.0/24 or ff02::/16). nodeName string Optional: Node hosting this endpoint. This can be used to determine endpoints local to a node. targetRef object ObjectReference contains enough information to let you inspect or modify the referred object. 8.1.5. .subsets[].addresses[].targetRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.1.6. .subsets[].notReadyAddresses Description IP addresses which offer the related ports but are not currently marked as ready because they have not yet finished starting, have recently failed a readiness check, or have recently failed a liveness check. Type array 8.1.7. .subsets[].notReadyAddresses[] Description EndpointAddress is a tuple that describes single IP address. Type object Required ip Property Type Description hostname string The Hostname of this endpoint ip string The IP of this endpoint. May not be loopback (127.0.0.0/8 or ::1), link-local (169.254.0.0/16 or fe80::/10), or link-local multicast (224.0.0.0/24 or ff02::/16). nodeName string Optional: Node hosting this endpoint. This can be used to determine endpoints local to a node. targetRef object ObjectReference contains enough information to let you inspect or modify the referred object. 8.1.8. .subsets[].notReadyAddresses[].targetRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.1.9. .subsets[].ports Description Port numbers available on the related IP addresses. Type array 8.1.10. .subsets[].ports[] Description EndpointPort is a tuple that describes a single port. Type object Required port Property Type Description appProtocol string The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either: * Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names ). * Kubernetes-defined prefixed names: * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540 * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 * Other protocols should use implementation-defined prefixed names such as mycompany.com/my-custom-protocol. name string The name of this port. This must match the 'name' field in the corresponding ServicePort. Must be a DNS_LABEL. Optional only if one port is defined. port integer The port number of the endpoint. protocol string The IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP. Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 8.2. API endpoints The following API endpoints are available: /api/v1/endpoints GET : list or watch objects of kind Endpoints /api/v1/watch/endpoints GET : watch individual changes to a list of Endpoints. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/endpoints DELETE : delete collection of Endpoints GET : list or watch objects of kind Endpoints POST : create Endpoints /api/v1/watch/namespaces/{namespace}/endpoints GET : watch individual changes to a list of Endpoints. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/endpoints/{name} DELETE : delete Endpoints GET : read the specified Endpoints PATCH : partially update the specified Endpoints PUT : replace the specified Endpoints /api/v1/watch/namespaces/{namespace}/endpoints/{name} GET : watch changes to an object of kind Endpoints. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 8.2.1. /api/v1/endpoints HTTP method GET Description list or watch objects of kind Endpoints Table 8.1. HTTP responses HTTP code Reponse body 200 - OK EndpointsList schema 401 - Unauthorized Empty 8.2.2. /api/v1/watch/endpoints HTTP method GET Description watch individual changes to a list of Endpoints. deprecated: use the 'watch' parameter with a list operation instead. Table 8.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /api/v1/namespaces/{namespace}/endpoints HTTP method DELETE Description delete collection of Endpoints Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Endpoints Table 8.5. HTTP responses HTTP code Reponse body 200 - OK EndpointsList schema 401 - Unauthorized Empty HTTP method POST Description create Endpoints Table 8.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.7. Body parameters Parameter Type Description body Endpoints schema Table 8.8. HTTP responses HTTP code Reponse body 200 - OK Endpoints schema 201 - Created Endpoints schema 202 - Accepted Endpoints schema 401 - Unauthorized Empty 8.2.4. /api/v1/watch/namespaces/{namespace}/endpoints HTTP method GET Description watch individual changes to a list of Endpoints. deprecated: use the 'watch' parameter with a list operation instead. Table 8.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /api/v1/namespaces/{namespace}/endpoints/{name} Table 8.10. Global path parameters Parameter Type Description name string name of the Endpoints HTTP method DELETE Description delete Endpoints Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Endpoints Table 8.13. HTTP responses HTTP code Reponse body 200 - OK Endpoints schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Endpoints Table 8.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.15. HTTP responses HTTP code Reponse body 200 - OK Endpoints schema 201 - Created Endpoints schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Endpoints Table 8.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.17. Body parameters Parameter Type Description body Endpoints schema Table 8.18. HTTP responses HTTP code Reponse body 200 - OK Endpoints schema 201 - Created Endpoints schema 401 - Unauthorized Empty 8.2.6. 
/api/v1/watch/namespaces/{namespace}/endpoints/{name} Table 8.19. Global path parameters Parameter Type Description name string name of the Endpoints HTTP method GET Description watch changes to an object of kind Endpoints. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
[ "Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]", "{ Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }", "a: [ 10.10.1.1:8675, 10.10.2.2:8675 ], b: [ 10.10.1.1:309, 10.10.2.2:309 ]" ]
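For reference, the "mysvc" example in the snippets above corresponds to a manifest along these lines. This is a sketch showing only the first subset of that example, with the namespace omitted and TCP assumed as the protocol (the documented default).

apiVersion: v1
kind: Endpoints
metadata:
  name: mysvc                 # must match the name of the Service it backs
subsets:
  - addresses:                # ready addresses, safe for load balancers and clients
      - ip: 10.10.1.1
      - ip: 10.10.2.2
    ports:
      - name: a               # must match the 'name' field of the corresponding ServicePort
        port: 8675
        protocol: TCP
      - name: b
        port: 309
        protocol: TCP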
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_apis/endpoints-v1
Chapter 7. Security
Chapter 7. Security 7.1. Securing connections with SSL/TLS AMQ JavaScript uses SSL/TLS to encrypt communication between clients and servers. To connect to a remote server with SSL/TLS, set the transport connection option to tls . Example: Enabling SSL/TLS var opts = { host: "example.com", port: 5671, transport: "tls" }; container.connect(opts); Note By default, the client will reject connections to servers with untrusted certificates. This is sometimes the case in test environments. To bypass certificate authorization, set the rejectUnauthorized connection option to false . Be aware that this compromises the security of your connection. 7.2. Connecting with a user and password AMQ JavaScript can authenticate connections with a user and password. To specify the credentials used for authentication, set the username and password connection options. Example: Connecting with a user and password var opts = { host: "example.com", username: "alice" , password: "secret" }; container.connect(opts); 7.3. Configuring SASL authentication AMQ JavaScript uses the SASL protocol to perform authentication. SASL can use a number of different authentication mechanisms . When two network peers connect, they exchange their allowed mechanisms, and the strongest mechanism allowed by both is selected. AMQ JavaScript enables SASL mechanisms based on the presence of user and password information. If the user and password are both specified, PLAIN is used. If only a user is specified, ANONYMOUS is used. If neither is specified, SASL is disabled.
[ "var opts = { host: \"example.com\", port: 5671, transport: \"tls\" }; container.connect(opts);", "var opts = { host: \"example.com\", username: \"alice\" , password: \"secret\" }; container.connect(opts);" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_javascript_client/security
Part II. Manually installing Red Hat Enterprise Linux
Part II. Manually installing Red Hat Enterprise Linux Setting up a machine for installing Red Hat Enterprise Linux (RHEL) involves several key steps, from booting the installation media to configuring system options. After the installation ISO is booted, you can modify boot settings and monitor installation processes through various consoles and logs. By customizing the system during installation, you ensure that it is tailored to specific needs, and the initial setup process finalizes the configuration for first-time use.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/manually-installing-red-hat-enterprise-linux
8.39. ctdb
8.39. ctdb 8.39.1. RHEA-2014:1488 - ctdb bug fix and enhancement update Updated ctdb packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The ctdb packages provide a clustered database based on Samba's Trivial Database (TDB) used to store temporary data. Note The ctdb package has been upgraded to upstream version 2.5.1, which provides a number of bug fixes and enhancements over the previous version. Note that due to these changes, the new version cannot run in parallel with the previous versions on the same cluster. In addition, note that to update CTDB in an existing cluster, CTDB has to be stopped on all nodes in the cluster before the upgrade can start. Furthermore, back up your cluster nodes in case the update fails. To ensure easy recovery in case of update failure, only a single node should be updated at a time. (BZ# 1061630 , BZ# 1085447 ) This update also fixes the following bugs: Bug Fixes BZ# 987099 Prior to this update, CTDB sometimes waited too long for file locks to establish. Consequently, clients accessing a CTDB file-server cluster could time out due to high latency. With this update, the underlying code has been fixed to address the problem, and clients can now access files on a CTDB file-server cluster without timeouts. BZ# 1075913 Previously, when CTDB was configured to use two bonded interfaces, CTDB failed to assign an IP address to the second bonded interface. As a consequence, the cluster status of the cluster node was shown as "PARTIALLYONLINE" even when the actual status was "OK". The script which handles the network interfaces has been fixed and the cluster status now shows the correct value. BZ# 1085413 Prior to this update, CTDB under some circumstances attempted to free allocated memory at an invalid address which caused CTDB to terminate unexpectedly with a segmentation fault. This update fixes the underlying code and CTDB uses the correct address for freeing allocated memory. As a result, the crash no longer occurs. Users of CTDB are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
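The upgrade constraints described in the note can be summarized as the following outline. This is an illustrative sketch only; it assumes the Red Hat Enterprise Linux 6 ctdb init script and a yum-based update, and it does not replace your own change and backup procedures:
# On every node in the cluster, before the upgrade starts:
service ctdb stop
# Then, on one node at a time:
yum update ctdb
service ctdb start   # verify the node rejoins the cluster before updating the next one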
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ctdb
Chapter 5. SelfSubjectRulesReview [authorization.openshift.io/v1]
Chapter 5. SelfSubjectRulesReview [authorization.openshift.io/v1] Description SelfSubjectRulesReview is a resource you can create to determine which actions you can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds spec object SelfSubjectRulesReviewSpec adds information about how to conduct the check status object SubjectRulesReviewStatus contains the result of a rules check 5.1.1. .spec Description SelfSubjectRulesReviewSpec adds information about how to conduct the check Type object Required scopes Property Type Description scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". Nil means "use the scopes on this request". 5.1.2. .status Description SubjectRulesReviewStatus contains the result of a rules check Type object Required rules Property Type Description evaluationError string EvaluationError can appear in combination with Rules. It means some error happened during evaluation that may have prevented additional rules from being populated. rules array Rules is the list of rules (no particular sort) that are allowed for the subject rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 5.1.3. .status.rules Description Rules is the list of rules (no particular sort) that are allowed for the subject Type array 5.1.4. .status.rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs resources Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If this field is empty, then both kubernetes and origin API groups are assumed. That means that if an action is requested against one of the enumerated resources in either the kubernetes or the origin API group, the request will be allowed attributeRestrictions RawExtension AttributeRestrictions will vary depending on what the Authorizer/AuthorizationAttributeBuilder pair supports. If the Authorizer does not recognize how to handle the AttributeRestrictions, the Authorizer should report an error. nonResourceURLs array (string) NonResourceURLsSlice is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path. This name is intentionally different than the internal type so that the DefaultConvert works nicely and because the ordering may be different. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. 
resources array (string) Resources is a list of resources this rule applies to. ResourceAll represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds and AttributeRestrictions contained in this rule. VerbAll represents all kinds. 5.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/namespaces/{namespace}/selfsubjectrulesreviews POST : create a SelfSubjectRulesReview 5.2.1. /apis/authorization.openshift.io/v1/namespaces/{namespace}/selfsubjectrulesreviews Table 5.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a SelfSubjectRulesReview Table 5.2. Body parameters Parameter Type Description body SelfSubjectRulesReview schema Table 5.3. HTTP responses HTTP code Response body 200 - OK SelfSubjectRulesReview schema 201 - Created SelfSubjectRulesReview schema 202 - Accepted SelfSubjectRulesReview schema 401 - Unauthorized Empty
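As an illustration of the POST operation, the following curl command creates a SelfSubjectRulesReview in a namespace. The API server URL, bearer token, and namespace are placeholder values, not part of this reference:
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  https://api.example.com:6443/apis/authorization.openshift.io/v1/namespaces/my-project/selfsubjectrulesreviews \
  --data '{"apiVersion":"authorization.openshift.io/v1","kind":"SelfSubjectRulesReview","spec":{"scopes":null}}'
Setting spec.scopes to null evaluates the rules with the scopes of the request itself; the response body contains the status.rules list described in the specification above.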
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authorization_apis/selfsubjectrulesreview-authorization-openshift-io-v1
Appendix D. Ceph Monitor configuration options
Appendix D. Ceph Monitor configuration options The following are Ceph monitor configuration options that can be set up during deployment. You can set these configuration options with the ceph config set mon CONFIGURATION_OPTION VALUE command. mon_initial_members Description The IDs of initial monitors in a cluster during startup. If specified, Ceph requires an odd number of monitors to form an initial quorum (for example, 3). Type String Default None mon_force_quorum_join Description Force monitor to join quorum even if it has been previously removed from the map Type Boolean Default False mon_dns_srv_name Description The service name used for querying the DNS for the monitor hosts/addresses. Type String Default ceph-mon fsid Description The cluster ID. One per cluster. Type UUID Required Yes. Default N/A. May be generated by a deployment tool if not specified. mon_data Description The monitor's data location. Type String Default /var/lib/ceph/mon/USDcluster-USDid mon_data_size_warn Description Ceph issues a HEALTH_WARN status in the cluster log when the monitor's data store reaches this threshold. The default value is 15GB. Type Integer Default 15*1024*1024*1024* mon_data_avail_warn Description Ceph issues a HEALTH_WARN status in the cluster log when the available disk space of the monitor's data store is lower than or equal to this percentage. Type Integer Default 30 mon_data_avail_crit Description Ceph issues a HEALTH_ERR status in the cluster log when the available disk space of the monitor's data store is lower or equal to this percentage. Type Integer Default 5 mon_warn_on_cache_pools_without_hit_sets Description Ceph issues a HEALTH_WARN status in the cluster log if a cache pool does not have the hit_set_type parameter set. Type Boolean Default True mon_warn_on_crush_straw_calc_version_zero Description Ceph issues a HEALTH_WARN status in the cluster log if the CRUSH's straw_calc_version is zero. See CRUSH tunables for details. Type Boolean Default True mon_warn_on_legacy_crush_tunables Description Ceph issues a HEALTH_WARN status in the cluster log if CRUSH tunables are too old (older than mon_min_crush_required_version ). Type Boolean Default True mon_crush_min_required_version Description This setting defines the minimum tunable profile version required by the cluster. Type String Default hammer mon_warn_on_osd_down_out_interval_zero Description Ceph issues a HEALTH_WARN status in the cluster log if the mon_osd_down_out_interval setting is zero, because the Leader behaves in a similar manner when the noout flag is set. Administrators find it easier to troubleshoot a cluster by setting the noout flag. Ceph issues the warning to ensure administrators know that the setting is zero. Type Boolean Default True mon_cache_target_full_warn_ratio Description Ceph issues a warning when between the ratio of cache_target_full and target_max_object . Type Float Default 0.66 mon_health_data_update_interval Description How often (in seconds) a monitor in the quorum shares its health status with its peers. A negative number disables health updates. Type Float Default 60 mon_health_to_clog Description This setting enables Ceph to send a health summary to the cluster log periodically. Type Boolean Default True mon_health_detail_to_clog Description This setting enable Ceph to send a health details to the cluster log periodically. Type Boolean Default True mon_op_complaint_time Description Number of seconds after which the Ceph Monitor operation is considered blocked after no updates. 
Type Integer Default 30 mon_health_to_clog_tick_interval Description How often (in seconds) the monitor sends a health summary to the cluster log. A non-positive number disables it. If the current health summary is empty or identical to the last time, the monitor will not send the status to the cluster log. Type Integer Default 60.000000 mon_health_to_clog_interval Description How often (in seconds) the monitor sends a health summary to the cluster log. A non-positive number disables it. The monitor will always send the summary to the cluster log. Type Integer Default 600 mon_osd_full_ratio Description The percentage of disk space used before an OSD is considered full . Type Float: Default .95 mon_osd_nearfull_ratio Description The percentage of disk space used before an OSD is considered nearfull . Type Float Default .85 mon_sync_trim_timeout Description, Type Double Default 30.0 mon_sync_heartbeat_timeout Description, Type Double Default 30.0 mon_sync_heartbeat_interval Description, Type Double Default 5.0 mon_sync_backoff_timeout Description, Type Double Default 30.0 mon_sync_timeout Description The number of seconds the monitor will wait for the update message from its sync provider before it gives up and bootstraps again. Type Double Default 60.000000 mon_sync_max_retries Description, Type Integer Default 5 mon_sync_max_payload_size Description The maximum size for a sync payload (in bytes). Type 32-bit Integer Default 1045676 paxos_max_join_drift Description The maximum Paxos iterations before we must first sync the monitor data stores. When a monitor finds that its peer is too far ahead of it, it will first sync with data stores before moving on. Type Integer Default 10 paxos_stash_full_interval Description How often (in commits) to stash a full copy of the PaxosService state. Currently this setting only affects mds , mon , auth and mgr PaxosServices. Type Integer Default 25 paxos_propose_interval Description Gather updates for this time interval before proposing a map update. Type Double Default 1.0 paxos_min Description The minimum number of paxos states to keep around Type Integer Default 500 paxos_min_wait Description The minimum amount of time to gather updates after a period of inactivity. Type Double Default 0.05 paxos_trim_min Description Number of extra proposals tolerated before trimming Type Integer Default 250 paxos_trim_max Description The maximum number of extra proposals to trim at a time Type Integer Default 500 paxos_service_trim_min Description The minimum amount of versions to trigger a trim (0 disables it) Type Integer Default 250 paxos_service_trim_max Description The maximum amount of versions to trim during a single proposal (0 disables it) Type Integer Default 500 mon_max_log_epochs Description The maximum amount of log epochs to trim during a single proposal Type Integer Default 500 mon_max_pgmap_epochs Description The maximum amount of pgmap epochs to trim during a single proposal Type Integer Default 500 mon_mds_force_trim_to Description Force monitor to trim mdsmaps to this point (0 disables it. dangerous, use with care) Type Integer Default 0 mon_osd_force_trim_to Description Force monitor to trim osdmaps to this point, even if there is PGs not clean at the specified epoch (0 disables it. 
dangerous, use with care) Type Integer Default 0 mon_osd_cache_size Description The size of osdmaps cache, not to rely on underlying store's cache Type Integer Default 500 mon_election_timeout Description On election proposer, maximum waiting time for all ACKs in seconds. Type Float Default 5 mon_lease Description The length (in seconds) of the lease on the monitor's versions. Type Float Default 5 mon_lease_renew_interval_factor Description mon lease * mon lease renew interval factor will be the interval for the Leader to renew the other monitor's leases. The factor should be less than 1.0 . Type Float Default 0.6 mon_lease_ack_timeout_factor Description The Leader will wait mon lease * mon lease ack timeout factor for the Providers to acknowledge the lease extension. Type Float Default 2.0 mon_accept_timeout_factor Description The Leader will wait mon lease * mon accept timeout factor for the Requesters to accept a Paxos update. It is also used during the Paxos recovery phase for similar purposes. Type Float Default 2.0 mon_min_osdmap_epochs Description Minimum number of OSD map epochs to keep at all times. Type 32-bit Integer Default 500 mon_max_pgmap_epochs Description Maximum number of PG map epochs the monitor should keep. Type 32-bit Integer Default 500 mon_max_log_epochs Description Maximum number of Log epochs the monitor should keep. Type 32-bit Integer Default 500 clock_offset Description How much to offset the system clock. See Clock.cc for details. Type Double Default 0 mon_tick_interval Description A monitor's tick interval in seconds. Type 32-bit Integer Default 5 mon_clock_drift_allowed Description The clock drift in seconds allowed between monitors. Type Float Default .050 mon_clock_drift_warn_backoff Description Exponential backoff for clock drift warnings. Type Float Default 5 mon_timecheck_interval Description The time check interval (clock drift check) in seconds for the leader. Type Float Default 300.0 mon_timecheck_skew_interval Description The time check interval (clock drift check) in seconds when in the presence of a skew in seconds for the Leader. Type Float Default 30.0 mon_max_osd Description The maximum number of OSDs allowed in the cluster. Type 32-bit Integer Default 10000 mon_globalid_prealloc Description The number of global IDs to pre-allocate for clients and daemons in the cluster. Type 32-bit Integer Default 10000 mon_sync_fs_threshold Description Synchronize with the filesystem when writing the specified number of objects. Set it to 0 to disable it. Type 32-bit Integer Default 5 mon_subscribe_interval Description The refresh interval, in seconds, for subscriptions. The subscription mechanism enables obtaining the cluster maps and log information. Type Double Default 86400.000000 mon_stat_smooth_intervals Description Ceph will smooth statistics over the last N PG maps. Type Integer Default 6 mon_probe_timeout Description Number of seconds the monitor will wait to find peers before bootstrapping. Type Double Default 2.0 mon_daemon_bytes Description The message memory cap for metadata server and OSD messages (in bytes). Type 64-bit Integer Unsigned Default 400ul << 20 mon_max_log_entries_per_event Description The maximum number of log entries per event. Type Integer Default 4096 mon_osd_prime_pg_temp Description Enables or disable priming the PGMap with the OSDs when an out OSD comes back into the cluster. With the true setting, the clients will continue to use the OSDs until the newly in OSDs as that PG peered. 
Type Boolean Default true mon_osd_prime_pg_temp_max_time Description How much time in seconds the monitor should spend trying to prime the PGMap when an out OSD comes back into the cluster. Type Float Default 0.5 mon_osd_prime_pg_temp_max_time_estimate Description Maximum estimate of time spent on each PG before we prime all PGs in parallel. Type Float Default 0.25 mon_osd_allow_primary_affinity Description Allow primary_affinity to be set in the osdmap. Type Boolean Default False mon_osd_pool_ec_fast_read Description Whether turn on fast read on the pool or not. It will be used as the default setting of newly created erasure pools if fast_read is not specified at create time. Type Boolean Default False mon_mds_skip_sanity Description Skip safety assertions on FSMap, in case of bugs where we want to continue anyway. Monitor terminates if the FSMap sanity check fails, but we can disable it by enabling this option. Type Boolean Default False mon_max_mdsmap_epochs Description The maximum amount of mdsmap epochs to trim during a single proposal. Type Integer Default 500 mon_config_key_max_entry_size Description The maximum size of config-key entry (in bytes). Type Integer Default 65536 mon_warn_pg_not_scrubbed_ratio Description The percentage of the scrub max interval past the scrub max interval to warn. Type float Default 0.5 mon_warn_pg_not_deep_scrubbed_ratio Description The percentage of the deep scrub interval past the deep scrub interval to warn Type float Default 0.75 mon_scrub_interval Description How often, in seconds, the monitor scrub its store by comparing the stored checksums with the computed ones of all the stored keys. Type Integer Default 3600*24 mon_scrub_timeout Description The timeout to restart scrub of mon quorum participant does not respond for the latest chunk. Type Integer Default 5 min mon_scrub_max_keys Description The maximum number of keys to scrub each time. Type Integer Default 100 mon_scrub_inject_crc_mismatch Description The probability of injecting CRC mismatches into Ceph Monitor scrub. Type Integer Default 3600*24 mon_scrub_inject_missing_keys Description The probability of injecting missing keys into mon scrub. Type float Default 0 mon_compact_on_start Description Compact the database used as Ceph Monitor store on ceph-mon start. A manual compaction helps to shrink the monitor database and improve its performance if the regular compaction fails to work. Type Boolean Default False mon_compact_on_bootstrap Description Compact the database used as Ceph Monitor store on bootstrap. The monitor starts probing each other for creating a quorum after bootstrap. If it times out before joining the quorum, it will start over and bootstrap itself again. Type Boolean Default False mon_compact_on_trim Description Compact a certain prefix (including paxos) when we trim its old states. Type Boolean Default True mon_cpu_threads Description Number of threads for performing CPU intensive work on monitor. Type Boolean Default True mon_osd_mapping_pgs_per_chunk Description We calculate the mapping from the placement group to OSDs in chunks. This option specifies the number of placement groups per chunk. Type Integer Default 4096 mon_osd_max_split_count Description Largest number of PGs per "involved" OSD to let split create. When we increase the pg_num of a pool, the placement groups will be split on all OSDs serving that pool. We want to avoid extreme multipliers on PG splits. 
Type Integer Default 300 rados_mon_op_timeout Description Number of seconds to wait for a response from the monitor before returning an error from a rados operation. 0 means at limit, or no wait time. Type Double Default 0 Additional Resources Pool Values CRUSH tunables
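For example, the options listed in this appendix can be set and inspected at runtime with the ceph config command shown at the start of the appendix. The values below are purely illustrative and are not tuning recommendations:
ceph config set mon mon_data_avail_warn 20
ceph config set mon mon_clock_drift_allowed 0.1
ceph config get mon mon_data_avail_warn    # confirm the value the monitors will use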
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/configuration_guide/ceph-monitor-configuration-options_conf
Chapter 90. Enabling authentication using AD User Principal Names in IdM
Chapter 90. Enabling authentication using AD User Principal Names in IdM 90.1. User principal names in an AD forest trusted by IdM As an Identity Management (IdM) administrator, you can allow AD users to use alternative User Principal Names (UPNs) to access resources in the IdM domain. A UPN is an alternative user login that AD users authenticate with in the format of user_name@KERBEROS-REALM . As an AD administrator, you can set alternative values for both user_name and KERBEROS-REALM , since you can configure both additional Kerberos aliases and UPN suffixes in an AD forest. For example, if a company uses the Kerberos realm AD.EXAMPLE.COM , the default UPN for a user is [email protected] . To allow your users to log in using their email addresses, for example user@ example.com , you can configure EXAMPLE.COM as an alternative UPN in AD. Alternative UPNs (also known as enterprise UPNs ) are especially convenient if your company has recently experienced a merge and you want to provide your users with a unified logon namespace. UPN suffixes are only visible for IdM when defined in the AD forest root. As an AD administrator, you can define UPNs with the Active Directory Domain and Trust utility or the PowerShell command line tool. Note To configure UPN suffixes for users, Red Hat recommends to use tools that perform error validation, such as the Active Directory Domain and Trust utility. Red Hat recommends against configuring UPNs through low-level modifications, such as using ldapmodify commands to set the userPrincipalName attribute for users, because Active Directory does not validate those operations. After you define a new UPN on the AD side, run the ipa trust-fetch-domains command on an IdM server to retrieve the updated UPNs. See Ensuring that AD UPNs are up-to-date in IdM . IdM stores the UPN suffixes for a domain in the multi-value attribute ipaNTAdditionalSuffixes of the subtree cn=trusted_domain_name,cn=ad,cn=trusts,dc=idm,dc=example,dc=com . Additional resources How to script UPN suffix setup in AD forest root How to manually modify AD user entries and bypass any UPN suffix validation Trust controllers and trust agents 90.2. Ensuring that AD UPNs are up-to-date in IdM After you add or remove a User Principal Name (UPN) suffix in a trusted Active Directory (AD) forest, refresh the information for the trusted forest on an IdM server. Prerequisites IdM administrator credentials. Procedure Enter the ipa trust-fetch-domains command. Note that a seemingly empty output is expected: Verification Enter the ipa trust-show command to verify that the server has fetched the new UPN. Specify the name of the AD realm when prompted: The output shows that the example.com UPN suffix is now part of the ad.example.com realm entry. 90.3. Gathering troubleshooting data for AD UPN authentication issues Follow this procedure to gather troubleshooting data about the User Principal Name (UPN) configuration from your Active Directory (AD) environment and your IdM environment. If your AD users are unable to log in using alternate UPNs, you can use this information to narrow your troubleshooting efforts. Prerequisites You must be logged in to an IdM Trust Controller or Trust Agent to retrieve information from an AD domain controller. You need root permissions to modify the following configuration files, and to restart IdM services. Procedure Open the /usr/share/ipa/smb.conf.empty configuration file in a text editor. Add the following contents to the file. 
Save and close the /usr/share/ipa/smb.conf.empty file. Open the /etc/ipa/server.conf configuration file in a text editor. If you do not have that file, create one. Add the following contents to the file. Save and close the /etc/ipa/server.conf file. Restart the Apache webserver service to apply the configuration changes: Retrieve trust information from your AD domain: Review the debugging output and troubleshooting information in the following log files: /var/log/httpd/error_log /var/log/samba/log.* Additional resources Using rpcclient to gather troubleshooting data for AD UPN authentication issues (Red Hat Knowledgebase)
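After reproducing a failed logon with the debug options from the procedure enabled, you can narrow the search in the collected logs. The following command is only a suggested starting point and assumes the default log locations listed in the procedure:
grep -iE 'upn|principal' /var/log/httpd/error_log /var/log/samba/log.*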
[ "ipa trust-fetch-domains Realm-Name: ad.example.com ------------------------------- No new trust domains were found ------------------------------- ---------------------------- Number of entries returned 0 ----------------------------", "ipa trust-show Realm-Name: ad.example.com Realm-Name: ad.example.com Domain NetBIOS name: AD Domain Security Identifier: S-1-5-21-796215754-1239681026-23416912 Trust direction: One-way trust Trust type: Active Directory domain UPN suffixes: example.com", "[global] log level = 10", "[global] debug = True", "systemctl restart httpd", "ipa trust-fetch-domains <ad.example.com>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/enabling-authentication-using-AD-User-Principal-Names-in-IdM_configuring-and-managing-idm
Chapter 3. Getting started
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites To build the example, Maven must be configured to use the Red Hat repository or a local repository . You must install the examples . You must have a message broker listening for connections on localhost . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named exampleQueue . For more information, see Creating a queue . 3.2. Running your first example The example creates a consumer and producer for a queue named exampleQueue . It sends a text message and then receives it back, printing the received message to the console. Procedure Use Maven to build the examples by running the following command in the <install-dir> /examples/protocols/openwire/queue directory. USD mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests The addition of dependency:copy-dependencies results in the dependencies being copied into the target/dependency directory. Use the java command to run the example. On Linux or UNIX: USD java -cp "target/classes:target/dependency/*" org.apache.activemq.artemis.jms.example.QueueExample On Windows: > java -cp "target\classes;target\dependency\*" org.apache.activemq.artemis.jms.example.QueueExample Running it on Linux results in the following output: USD java -cp "target/classes:target/dependency/*" org.apache.activemq.artemis.jms.example.QueueExample Sent message: This is a text message Received message: This is a text message The source code for the example is in the <install-dir> /examples/protocols/openwire/queue/src directory. Additional examples are available in the <install-dir> /examples/protocols/openwire directory.
[ "mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests", "java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample", "> java -cp \"target\\classes;target\\dependency\\*\" org.apache.activemq.artemis.jms.example.QueueExample", "java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample Sent message: This is a text message Received message: This is a text message" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_openwire_jms_client/getting_started
Chapter 4. Network considerations
Chapter 4. Network considerations Review the strategies for redirecting your application network traffic after migration. 4.1. DNS considerations The DNS domain of the target cluster is different from the domain of the source cluster. By default, applications get FQDNs of the target cluster after migration. To preserve the source DNS domain of migrated applications, select one of the two options described below. 4.1.1. Isolating the DNS domain of the target cluster from the clients You can allow the clients' requests sent to the DNS domain of the source cluster to reach the DNS domain of the target cluster without exposing the target cluster to the clients. Procedure Place an exterior network component, such as an application load balancer or a reverse proxy, between the clients and the target cluster. Update the application FQDN on the source cluster in the DNS server to return the IP address of the exterior network component. Configure the network component to send requests received for the application in the source domain to the load balancer in the target cluster domain. Create a wildcard DNS record for the *.apps.source.example.com domain that points to the IP address of the load balancer of the source cluster. Create a DNS record for each application that points to the IP address of the exterior network component in front of the target cluster. A specific DNS record has higher priority than a wildcard record, so no conflict arises when the application FQDN is resolved. Note The exterior network component must terminate all secure TLS connections. If the connections pass through to the target cluster load balancer, the FQDN of the target application is exposed to the client and certificate errors occur. The applications must not return links referencing the target cluster domain to the clients. Otherwise, parts of the application might not load or work properly. 4.1.2. Setting up the target cluster to accept the source DNS domain You can set up the target cluster to accept requests for a migrated application in the DNS domain of the source cluster. Procedure For both non-secure HTTP access and secure HTTPS access, perform the following steps: Create a route in the target cluster's project that is configured to accept requests addressed to the application's FQDN in the source cluster: USD oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> \ -n <app1-namespace> With this new route in place, the server accepts any request for that FQDN and sends it to the corresponding application pods. In addition, when you migrate the application, another route is created in the target cluster domain. Requests reach the migrated application using either of these hostnames. Create a DNS record with your DNS provider that points the application's FQDN in the source cluster to the IP address of the default load balancer of the target cluster. This will redirect traffic away from your source cluster to your target cluster. The FQDN of the application resolves to the load balancer of the target cluster. The default Ingress Controller router accepts requests for that FQDN because a route for that hostname is exposed. For secure HTTPS access, perform the following additional step: Replace the x509 certificate of the default Ingress Controller created during the installation process with a custom certificate. Configure this certificate to include the wildcard DNS domains for both the source and target clusters in the subjectAltName field. 
The new certificate is valid for securing connections made using either DNS domain. Additional resources See Replacing the default ingress certificate for more information. 4.2. Network traffic redirection strategies After a successful migration, you must redirect network traffic of your stateless applications from the source cluster to the target cluster. The strategies for redirecting network traffic are based on the following assumptions: The application pods are running on both the source and target clusters. Each application has a route that contains the source cluster hostname. The route with the source cluster hostname contains a CA certificate. For HTTPS, the target router CA certificate contains a Subject Alternative Name for the wildcard DNS record of the source cluster. Consider the following strategies and select the one that meets your objectives. Redirecting all network traffic for all applications at the same time Change the wildcard DNS record of the source cluster to point to the target cluster router's virtual IP address (VIP). This strategy is suitable for simple applications or small migrations. Redirecting network traffic for individual applications Create a DNS record for each application with the source cluster hostname pointing to the target cluster router's VIP. This DNS record takes precedence over the source cluster wildcard DNS record. Redirecting network traffic gradually for individual applications Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route a percentage of the traffic to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Gradually increase the percentage of traffic that you route to the target cluster router's VIP until all the network traffic is redirected. User-based redirection of traffic for individual applications Using this strategy, you can filter TCP/IP headers of user requests to redirect network traffic for predefined groups of users. This allows you to test the redirection process on specific populations of users before redirecting the entire network traffic. Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route traffic matching a given header pattern, such as test customers , to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Redirect traffic to the target cluster router's VIP in stages until all the traffic is on the target cluster router's VIP.
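Before redirecting traffic, you can check the assumption above that the target router certificate contains a Subject Alternative Name for the wildcard DNS record of the source cluster. The following commands are an illustrative sketch; the hostname is a placeholder for one of your migrated application routes:
echo | openssl s_client -connect app1.apps.source.example.com:443 \
  -servername app1.apps.source.example.com 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
The output should list the wildcard domains of both the source and target clusters.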
[ "oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migrating_from_version_3_to_4/planning-considerations-3-4
Chapter 27. Configuring cluster quorum
Chapter 27. Configuring cluster quorum A Red Hat Enterprise Linux High Availability Add-On cluster uses the votequorum service, in conjunction with fencing, to avoid split brain situations. A number of votes is assigned to each system in the cluster, and cluster operations are allowed to proceed only when a majority of votes is present. The service must be loaded into all nodes or none; if it is loaded into a subset of cluster nodes, the results will be unpredictable. For information about the configuration and operation of the votequorum service, see the votequorum (5) man page. 27.1. Configuring quorum options There are some special features of quorum configuration that you can set when you create a cluster with the pcs cluster setup command. The following table summarizes these options. Table 27.1. Quorum Options Option Description auto_tie_breaker When enabled, the cluster can suffer up to 50% of the nodes failing at the same time, in a deterministic fashion. The cluster partition, or the set of nodes that are still in contact with the nodeid configured in auto_tie_breaker_node (or lowest nodeid if not set), will remain quorate. The other nodes will be inquorate. The auto_tie_breaker option is principally used for clusters with an even number of nodes, as it allows the cluster to continue operation with an even split. For more complex failures, such as multiple, uneven splits, it is recommended that you use a quorum device. The auto_tie_breaker option is incompatible with quorum devices. wait_for_all When enabled, the cluster will be quorate for the first time only after all nodes have been visible at least once at the same time. The wait_for_all option is primarily used for two-node clusters and for even-node clusters using the quorum device lms (last man standing) algorithm. The wait_for_all option is automatically enabled when a cluster has two nodes, does not use a quorum device, and auto_tie_breaker is disabled. You can override this by explicitly setting wait_for_all to 0. last_man_standing When enabled, the cluster can dynamically recalculate expected_votes and quorum under specific circumstances. You must enable wait_for_all when you enable this option. The last_man_standing option is incompatible with quorum devices. last_man_standing_window The time, in milliseconds, to wait before recalculating expected_votes and quorum after a cluster loses nodes. For further information about configuring and using these options, see the votequorum (5) man page. 27.2. Modifying quorum options You can modify general quorum options for your cluster with the pcs quorum update command. Executing this command requires that the cluster be stopped. For information on the quorum options, see the votequorum (5) man page. The format of the pcs quorum update command is as follows. The following series of commands modifies the wait_for_all quorum option and displays the updated status of the option. Note that the system does not allow you to execute this command while the cluster is running. 27.3. Displaying quorum configuration and status Once a cluster is running, you can enter the following cluster quorum commands to display the quorum configuration and status. The following command shows the quorum configuration. The following command shows the quorum runtime status. 27.4. 
Running inquorate clusters If you take nodes out of a cluster for a long period of time and the loss of those nodes would cause quorum loss, you can change the value of the expected_votes parameter for the live cluster with the pcs quorum expected-votes command. This allows the cluster to continue operation when it does not have quorum. Warning Changing the expected votes in a live cluster should be done with extreme caution. If less than 50% of the cluster is running because you have manually changed the expected votes, then the other nodes in the cluster could be started separately and run cluster services, causing data corruption and other unexpected results. If you change this value, you should ensure that the wait_for_all parameter is enabled. The following command sets the expected votes in the live cluster to the specified value. This affects the live cluster only and does not change the configuration file; the value of expected_votes is reset to the value in the configuration file in the event of a reload. In a situation in which you know that the cluster is inquorate but you want the cluster to proceed with resource management, you can use the pcs quorum unblock command to prevent the cluster from waiting for all nodes when establishing quorum. Note This command should be used with extreme caution. Before issuing this command, it is imperative that you ensure that nodes that are not currently in the cluster are switched off and have no access to shared resources.
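As a concrete illustration of the commands above, consider a five-node cluster in which two nodes are taken down for extended maintenance. The values are hypothetical and must be adapted to your cluster:
pcs quorum expected-votes 3   # let the remaining three nodes stay quorate
pcs quorum status             # confirm the runtime value of expected votes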
[ "pcs quorum update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]] [last_man_standing_window=[ time-in-ms ] [wait_for_all=[0|1]]", "pcs quorum update wait_for_all=1 Checking corosync is not running on nodes Error: node1: corosync is running Error: node2: corosync is running pcs cluster stop --all node2: Stopping Cluster (pacemaker) node1: Stopping Cluster (pacemaker) node1: Stopping Cluster (corosync) node2: Stopping Cluster (corosync) pcs quorum update wait_for_all=1 Checking corosync is not running on nodes node2: corosync is not running node1: corosync is not running Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded pcs quorum config Options: wait_for_all: 1", "pcs quorum [config]", "pcs quorum status", "pcs quorum expected-votes votes", "pcs quorum unblock" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_configuring-cluster-quorum-configuring-and-managing-high-availability-clusters
Chapter 4. Managing Service Registry content using the REST API
Chapter 4. Managing Service Registry content using the REST API Client applications can use Service Registry REST API operations to manage schema and API artifacts in Service Registry, for example, in a CI/CD pipeline deployed in production. The Core Registry API v2 provides operations for artifacts, versions, metadata, and rules stored in Service Registry. For detailed information, see the Apicurio Registry REST API documentation . This chapter shows examples of how to use the Core Registry API v2 to perform the following tasks: Section 4.1, "Managing schema and API artifacts using Service Registry REST API commands" Section 4.2, "Managing schema and API artifact versions using Service Registry REST API commands" Section 4.3, "Managing schema and API artifact references using Service Registry REST API commands" Section 4.4, "Exporting and importing registry data using Service Registry REST API commands" Prerequisites Chapter 1, Introduction to Service Registry Additional resources Apicurio Registry REST API documentation 4.1. Managing schema and API artifacts using Service Registry REST API commands This section shows a simple curl-based example of using the Core Registry API v2 to add and retrieve a simple schema artifact in Service Registry. Prerequisites Service Registry is installed and running in your environment. Procedure Add an artifact to Service Registry using the /groups/{group}/artifacts operation. The following example curl command adds a simple schema artifact for a share price application: USD curl -X POST -H "Content-Type: application/json; artifactType=AVRO" \ -H "X-Registry-ArtifactId: share-price" \ -H "Authorization: Bearer USDACCESS_TOKEN" \ --data '{"type":"record","name":"price","namespace":"com.example", \ "fields":[{"name":"symbol","type":"string"},{"name":"price","type":"string"}]}' \ MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts This example adds an Apache Avro schema artifact with an artifact ID of share-price . If you do not specify a unique artifact ID, Service Registry generates one automatically as a UUID. MY-REGISTRY-URL is the host name on which Service Registry is deployed. For example: my-cluster-service-registry-myproject.example.com . This example specifies a group ID of my-group in the API path. If you do not specify a unique group ID, you must specify ../groups/default in the API path. Verify that the response includes the expected JSON body to confirm that the artifact was added. For example: {"createdBy":"","createdOn":"2021-04-16T09:07:51+0000","modifiedBy":"", "modifiedOn":"2021-04-16T09:07:51+0000","id":"share-price","version":"1", "type":"AVRO","globalId":2,"state":"ENABLED","groupId":"my-group","contentId":2} No version was specified when adding the artifact, so the default version 1 is created automatically. This was the second artifact added to Service Registry, so the global ID and content ID have a value of 2 . Retrieve the artifact content from Service Registry using its artifact ID in the API path. In this example, the specified ID is share-price : USD curl -H "Authorization: Bearer USDACCESS_TOKEN" \ MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts/share-price {"type":"record","name":"price","namespace":"com.example", "fields":[{"name":"symbol","type":"string"},{"name":"price","type":"string"}]} Additional resources For more details, see the Apicurio Registry REST API documentation . 4.2. 
Managing schema and API artifact versions using Service Registry REST API commands If you do not specify an artifact version when adding schema and API artifacts using the Core Registry API v2, Service Registry generates a version automatically. The default version when creating a new artifact is 1 . Service Registry also supports custom versioning where you can specify a version using the X-Registry-Version HTTP request header as a string. Specifying a custom version value overrides the default version normally assigned when creating or updating an artifact. You can then use this version value when executing REST API operations that require a version. This section shows a simple curl-based example of using the Core Registry API v2 to add and retrieve a custom Apache Avro schema version in Service Registry. You can specify custom versions to add or update artifacts, or to add artifact versions. Prerequisites Service Registry is installed and running in your environment. Procedure Add an artifact version in the registry using the /groups/{group}/artifacts operation. The following example curl command adds a simple artifact for a share price application: USD curl -X POST -H "Content-Type: application/json; artifactType=AVRO" \ -H "X-Registry-ArtifactId: my-share-price" -H "X-Registry-Version: 1.1.1" \ -H "Authorization: Bearer USDACCESS_TOKEN" \ --data '{"type":"record","name":" p","namespace":"com.example", \ "fields":[{"name":"symbol","type":"string"},{"name":"price","type":"string"}]}' \ MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts This example adds an Avro schema artifact with an artifact ID of my-share-price and version of 1.1.1 . If you do not specify a version, Service Registry automatically generates a default version of 1 . MY-REGISTRY-URL is the host name on which Service Registry is deployed. For example: my-cluster-service-registry-myproject.example.com . This example specifies a group ID of my-group in the API path. If you do not specify a unique group ID, you must specify ../groups/default in the API path. Verify that the response includes the expected JSON body to confirm that the custom artifact version was added. For example: {"createdBy":"","createdOn":"2021-04-16T10:51:43+0000","modifiedBy":"", "modifiedOn":"2021-04-16T10:51:43+0000","id":"my-share-price","version":"1.1.1", "type":"AVRO","globalId":3,"state":"ENABLED","groupId":"my-group","contentId":3} A custom version of 1.1.1 was specified when adding the artifact. This was the third artifact added to the registry, so the global ID and content ID have a value of 3 . Retrieve the artifact content from the registry using its artifact ID and version in the API path. In this example, the specified ID is my-share-price and the version is 1.1.1 : USD curl -H "Authorization: Bearer USDACCESS_TOKEN" \ MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts/my-share-price/versions/1.1.1 {"type":"record","name":"price","namespace":"com.example", "fields":[{"name":"symbol","type":"string"},{"name":"price","type":"string"}]} Additional resources For more details, see the Apicurio Registry REST API documentation . 4.3. Managing schema and API artifact references using Service Registry REST API commands Some Service Registry artifact types can include artifact references from one artifact file to another. You can create efficiencies by defining reusable schema or API artifacts, and then referencing them from multiple locations in artifact references. 
The following artifact types support artifact references: Apache Avro Google Protobuf JSON Schema OpenAPI AsyncAPI This section shows a simple curl-based example of using the Core Registry API v2 to add and retrieve an artifact reference to a simple Avro schema artifact in Service Registry. This example first creates a schema artifact named ItemId : ItemId schema { "namespace":"com.example.common", "name":"ItemId", "type":"record", "fields":[ { "name":"id", "type":"int" } ] } This example then creates a schema artifact named Item , which includes a reference to the nested ItemId artifact. Item schema with nested ItemId schema { "namespace":"com.example.common", "name":"Item", "type":"record", "fields":[ { "name":"itemId", "type":"com.example.common.ItemId" }, ] } Prerequisites Service Registry is installed and running in your environment. Procedure Add the ItemId schema artifact that you want to create the nested artifact reference to using the /groups/{group}/artifacts operation: USD curl -X POST MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts \ -H "Content-Type: application/json; artifactType=AVRO" \ -H "X-Registry-ArtifactId: ItemId" \ -H "Authorization: Bearer USDACCESS_TOKEN" \ --data '{"namespace": "com.example.common", "type": "record", "name": "ItemId", "fields":[{"name":"id", "type":"int"}]}' This example adds an Avro schema artifact with an artifact ID of ItemId . If you do not specify a unique artifact ID, Service Registry generates one automatically as a UUID. MY-REGISTRY-URL is the host name on which Service Registry is deployed. For example: my-cluster-service-registry-myproject.example.com . This example specifies a group ID of my-group in the API path. If you do not specify a unique group ID, you must specify ../groups/default in the API path. Verify that the response includes the expected JSON body to confirm that the artifact was added. For example: {"name":"ItemId","createdBy":"","createdOn":"2022-04-14T10:50:09+0000","modifiedBy":"","modifiedOn":"2022-04-14T10:50:09+0000","id":"ItemId","version":"1","type":"AVRO","globalId":1,"state":"ENABLED","groupId":"my-group","contentId":1,"references":[]} Add the Item schema artifact that includes the artifact reference to the ItemId schema using the /groups/{group}/artifacts operation: USD curl -X POST MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts \ -H 'Content-Type: application/create.extended+json' \ -H "X-Registry-ArtifactId: Item" \ -H 'X-Registry-ArtifactType: AVRO' \ -H "Authorization: Bearer USDACCESS_TOKEN" \ --data-raw '{ "content": "{\r\n \"namespace\":\"com.example.common\",\r\n \"name\":\"Item\",\r\n \"type\":\"record\",\r\n \"fields\":[\r\n {\r\n \"name\":\"itemId\",\r\n \"type\":\"com.example.common.ItemId\"\r\n }\r\n ]\r\n}", "references": [ { "groupId": "my-group", "artifactId": "ItemId", "name": "com.example.common.ItemId", "version": "1" } ] }' For artifact references, you must specify the custom content type of application/create.extended+json , which extends the application/json content type. Verify that the response includes the expected JSON body to confirm that the artifact was created with the reference. 
For example: {"name":"Item","createdBy":"","createdOn":"2022-04-14T11:52:15+0000","modifiedBy":"","modifiedOn":"2022-04-14T11:52:15+0000","id":"Item","version":"1","type":"AVRO","globalId":2,"state":"ENABLED","groupId":"my-group","contentId":2, "references":[{"artifactId":"ItemId","groupId":"my-group","name":"ItemId","version":"1"}] } Retrieve the artifact reference from Service Registry by specifying the global ID of the artifact that includes the reference. In this example, the specified global ID is 2 : USD curl -H "Authorization: Bearer USDACCESS_TOKEN" MY-REGISTRY-URL/apis/registry/v2/ids/globalIds/2/references Verify that the response includes the expected JSON body for this artifact reference. For example: [{"groupId":"my-group","artifactId":"ItemId","version":"1","name":"com.example.common.ItemId"}] Additional resources For more details, see the Apicurio Registry REST API documentation . For more examples of artifact references, see the section on configuring each artifact type in Chapter 8, Configuring Kafka serializers/deserializers in Java clients . 4.4. Exporting and importing registry data using Service Registry REST API commands As an administrator, you can use the Core Registry API v2 to export data from one Service Registry instance and import into another Service Registry instance, so you can migrate data between different instances. This section shows a simple curl-based example of using the Core Registry API v2 to export and import existing data in .zip format from one Service Registry instance to another. All of the artifact data contained in the Service Registry instance is exported in the .zip file. Note You can import only Service Registry data that has been exported from another Service Registry instance. Prerequisites Service Registry is installed and running in your environment. Service Registry instances have been created: The source instance that you want to export data from contains at least one schema or API artifact. The target instance that you want to import data into is empty to preserve unique IDs. Procedure Export the Service Registry data from your existing source Service Registry instance: USD curl MY-REGISTRY-URL/apis/registry/v2/admin/export \ -H "Authorization: Bearer USDACCESS_TOKEN" \ --output my-registry-data.zip MY-REGISTRY-URL is the host name on which the source Service Registry is deployed. For example: my-cluster-source-registry-myproject.example.com . Import the registry data into your target Service Registry instance: USD curl -X POST "MY-REGISTRY-URL/apis/registry/v2/admin/import" \ -H "Content-Type: application/zip" -H "Authorization: Bearer USDACCESS_TOKEN" \ --data-binary @my-registry-data.zip MY-REGISTRY-URL is the host name on which the target Service Registry is deployed. For example: my-cluster-target-registry-myproject.example.com . Additional resources For more details, see the admin endpoint in the Apicurio Registry REST API documentation . For details on export tools for migrating from Service Registry version 1.x to 2.x, see Apicurio Registry export utility for 1.x versions .
[ "curl -X POST -H \"Content-Type: application/json; artifactType=AVRO\" -H \"X-Registry-ArtifactId: share-price\" -H \"Authorization: Bearer USDACCESS_TOKEN\" --data '{\"type\":\"record\",\"name\":\"price\",\"namespace\":\"com.example\", \"fields\":[{\"name\":\"symbol\",\"type\":\"string\"},{\"name\":\"price\",\"type\":\"string\"}]}' MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts", "{\"createdBy\":\"\",\"createdOn\":\"2021-04-16T09:07:51+0000\",\"modifiedBy\":\"\", \"modifiedOn\":\"2021-04-16T09:07:51+0000\",\"id\":\"share-price\",\"version\":\"1\", \"type\":\"AVRO\",\"globalId\":2,\"state\":\"ENABLED\",\"groupId\":\"my-group\",\"contentId\":2}", "curl -H \"Authorization: Bearer USDACCESS_TOKEN\" MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts/share-price {\"type\":\"record\",\"name\":\"price\",\"namespace\":\"com.example\", \"fields\":[{\"name\":\"symbol\",\"type\":\"string\"},{\"name\":\"price\",\"type\":\"string\"}]}", "curl -X POST -H \"Content-Type: application/json; artifactType=AVRO\" -H \"X-Registry-ArtifactId: my-share-price\" -H \"X-Registry-Version: 1.1.1\" -H \"Authorization: Bearer USDACCESS_TOKEN\" --data '{\"type\":\"record\",\"name\":\" p\",\"namespace\":\"com.example\", \"fields\":[{\"name\":\"symbol\",\"type\":\"string\"},{\"name\":\"price\",\"type\":\"string\"}]}' MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts", "{\"createdBy\":\"\",\"createdOn\":\"2021-04-16T10:51:43+0000\",\"modifiedBy\":\"\", \"modifiedOn\":\"2021-04-16T10:51:43+0000\",\"id\":\"my-share-price\",\"version\":\"1.1.1\", \"type\":\"AVRO\",\"globalId\":3,\"state\":\"ENABLED\",\"groupId\":\"my-group\",\"contentId\":3}", "curl -H \"Authorization: Bearer USDACCESS_TOKEN\" MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts/my-share-price/versions/1.1.1 {\"type\":\"record\",\"name\":\"price\",\"namespace\":\"com.example\", \"fields\":[{\"name\":\"symbol\",\"type\":\"string\"},{\"name\":\"price\",\"type\":\"string\"}]}", "{ \"namespace\":\"com.example.common\", \"name\":\"ItemId\", \"type\":\"record\", \"fields\":[ { \"name\":\"id\", \"type\":\"int\" } ] }", "{ \"namespace\":\"com.example.common\", \"name\":\"Item\", \"type\":\"record\", \"fields\":[ { \"name\":\"itemId\", \"type\":\"com.example.common.ItemId\" }, ] }", "curl -X POST MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts -H \"Content-Type: application/json; artifactType=AVRO\" -H \"X-Registry-ArtifactId: ItemId\" -H \"Authorization: Bearer USDACCESS_TOKEN\" --data '{\"namespace\": \"com.example.common\", \"type\": \"record\", \"name\": \"ItemId\", \"fields\":[{\"name\":\"id\", \"type\":\"int\"}]}'", "{\"name\":\"ItemId\",\"createdBy\":\"\",\"createdOn\":\"2022-04-14T10:50:09+0000\",\"modifiedBy\":\"\",\"modifiedOn\":\"2022-04-14T10:50:09+0000\",\"id\":\"ItemId\",\"version\":\"1\",\"type\":\"AVRO\",\"globalId\":1,\"state\":\"ENABLED\",\"groupId\":\"my-group\",\"contentId\":1,\"references\":[]}", "curl -X POST MY-REGISTRY-URL/apis/registry/v2/groups/my-group/artifacts -H 'Content-Type: application/create.extended+json' -H \"X-Registry-ArtifactId: Item\" -H 'X-Registry-ArtifactType: AVRO' -H \"Authorization: Bearer USDACCESS_TOKEN\" --data-raw '{ \"content\": \"{\\r\\n \\\"namespace\\\":\\\"com.example.common\\\",\\r\\n \\\"name\\\":\\\"Item\\\",\\r\\n \\\"type\\\":\\\"record\\\",\\r\\n \\\"fields\\\":[\\r\\n {\\r\\n \\\"name\\\":\\\"itemId\\\",\\r\\n \\\"type\\\":\\\"com.example.common.ItemId\\\"\\r\\n }\\r\\n ]\\r\\n}\", \"references\": [ { \"groupId\": \"my-group\", \"artifactId\": \"ItemId\", 
\"name\": \"com.example.common.ItemId\", \"version\": \"1\" } ] }'", "{\"name\":\"Item\",\"createdBy\":\"\",\"createdOn\":\"2022-04-14T11:52:15+0000\",\"modifiedBy\":\"\",\"modifiedOn\":\"2022-04-14T11:52:15+0000\",\"id\":\"Item\",\"version\":\"1\",\"type\":\"AVRO\",\"globalId\":2,\"state\":\"ENABLED\",\"groupId\":\"my-group\",\"contentId\":2, \"references\":[{\"artifactId\":\"ItemId\",\"groupId\":\"my-group\",\"name\":\"ItemId\",\"version\":\"1\"}] }", "curl -H \"Authorization: Bearer USDACCESS_TOKEN\" MY-REGISTRY-URL/apis/registry/v2/ids/globalIds/2/references", "[{\"groupId\":\"my-group\",\"artifactId\":\"ItemId\",\"version\":\"1\",\"name\":\"com.example.common.ItemId\"}]", "curl MY-REGISTRY-URL/apis/registry/v2/admin/export -H \"Authorization: Bearer USDACCESS_TOKEN\" --output my-registry-data.zip", "curl -X POST \"MY-REGISTRY-URL/apis/registry/v2/admin/import\" -H \"Content-Type: application/zip\" -H \"Authorization: Bearer USDACCESS_TOKEN\" --data-binary @my-registry-data.zip" ]
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/service_registry_user_guide/managing-registry-artifacts-api_registry
Chapter 1. Overview of AMQ Streams
Chapter 1. Overview of AMQ Streams AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster. This guide provides instructions for configuring Kafka components and using AMQ Streams Operators. Procedures relate to how you might want to modify your deployment and introduce additional features, such as Cruise Control or distributed tracing. You can configure your deployment using AMQ Streams custom resources . The Custom resource API reference describes the properties you can use in your configuration. Note Looking to get started with AMQ Streams? For step-by-step deployment instructions, see the Deploying and Upgrading AMQ Streams on OpenShift guide . 1.1. Kafka capabilities The underlying data stream-processing capabilities and component architecture of Kafka can deliver: Microservices and other applications to share data with extremely high throughput and low latency Message ordering guarantees Message rewind/replay from data storage to reconstruct an application state Message compaction to remove old records when using a key-value log Horizontal scalability in a cluster configuration Replication of data to control fault tolerance Retention of high volumes of data for immediate access 1.2. Kafka use cases Kafka's capabilities make it suitable for: Event-driven architectures Event sourcing to capture changes to the state of an application as a log of events Message brokering Website activity tracking Operational monitoring through metrics Log collection and aggregation Commit logs for distributed systems Stream processing so that applications can respond to data in real time 1.3. How AMQ Streams supports Kafka AMQ Streams provides container images and Operators for running Kafka on OpenShift. AMQ Streams Operators are fundamental to the running of AMQ Streams. The Operators provided with AMQ Streams are purpose-built with specialist operational knowledge to effectively manage Kafka. Operators simplify the process of: Deploying and running Kafka clusters Deploying and running Kafka components Configuring access to Kafka Securing access to Kafka Upgrading Kafka Managing brokers Creating and managing topics Creating and managing users 1.4. AMQ Streams Operators AMQ Streams supports Kafka using Operators to deploy and manage the components and dependencies of Kafka to OpenShift. Operators are a method of packaging, deploying, and managing an OpenShift application. AMQ Streams Operators extend OpenShift functionality, automating common and complex tasks related to a Kafka deployment. By implementing knowledge of Kafka operations in code, Kafka administration tasks are simplified and require less manual intervention. Operators AMQ Streams provides Operators for managing a Kafka cluster running within an OpenShift cluster. Cluster Operator Deploys and manages Apache Kafka clusters, Kafka Connect, Kafka MirrorMaker, Kafka Bridge, Kafka Exporter, and the Entity Operator Entity Operator Comprises the Topic Operator and User Operator Topic Operator Manages Kafka topics User Operator Manages Kafka users The Cluster Operator can deploy the Topic Operator and User Operator as part of an Entity Operator configuration at the same time as a Kafka cluster. Operators within the AMQ Streams architecture 1.4.1. Cluster Operator AMQ Streams uses the Cluster Operator to deploy and manage clusters for: Kafka (including ZooKeeper, Entity Operator, Kafka Exporter, and Cruise Control) Kafka Connect Kafka MirrorMaker Kafka Bridge Custom resources are used to deploy the clusters. 
For example, to deploy a Kafka cluster: A Kafka resource with the cluster configuration is created within the OpenShift cluster. The Cluster Operator deploys a corresponding Kafka cluster, based on what is declared in the Kafka resource. The Cluster Operator can also deploy (through configuration of the Kafka resource): A Topic Operator to provide operator-style topic management through KafkaTopic custom resources A User Operator to provide operator-style user management through KafkaUser custom resources The Topic Operator and User Operator function within the Entity Operator on deployment. Example architecture for the Cluster Operator 1.4.2. Topic Operator The Topic Operator provides a way of managing topics in a Kafka cluster through OpenShift resources. Example architecture for the Topic Operator The role of the Topic Operator is to keep a set of KafkaTopic OpenShift resources describing Kafka topics in-sync with corresponding Kafka topics. Specifically, if a KafkaTopic is: Created, the Topic Operator creates the topic Deleted, the Topic Operator deletes the topic Changed, the Topic Operator updates the topic Working in the other direction, if a topic is: Created within the Kafka cluster, the Operator creates a KafkaTopic Deleted from the Kafka cluster, the Operator deletes the KafkaTopic Changed in the Kafka cluster, the Operator updates the KafkaTopic This allows you to declare a KafkaTopic as part of your application's deployment and the Topic Operator will take care of creating the topic for you. Your application just needs to deal with producing or consuming from the necessary topics. If the topic is reconfigured or reassigned to different Kafka nodes, the KafkaTopic will always be up to date. 1.4.3. User Operator The User Operator manages Kafka users for a Kafka cluster by watching for KafkaUser resources that describe Kafka users, and ensuring that they are configured properly in the Kafka cluster. For example, if a KafkaUser is: Created, the User Operator creates the user it describes Deleted, the User Operator deletes the user it describes Changed, the User Operator updates the user it describes Unlike the Topic Operator, the User Operator does not sync any changes from the Kafka cluster with the OpenShift resources. Kafka topics can be created by applications directly in Kafka, but it is not expected that the users will be managed directly in the Kafka cluster in parallel with the User Operator. The User Operator allows you to declare a KafkaUser resource as part of your application's deployment. You can specify the authentication and authorization mechanism for the user. You can also configure user quotas that control usage of Kafka resources to ensure, for example, that a user does not monopolize access to a broker. When the user is created, the user credentials are created in a Secret . Your application needs to use the user and its credentials for authentication and to produce or consume messages. In addition to managing credentials for authentication, the User Operator also manages authorization rules by including a description of the user's access rights in the KafkaUser declaration. 1.5. AMQ Streams custom resources A deployment of Kafka components to an OpenShift cluster using AMQ Streams is highly configurable through the application of custom resources. Custom resources are created as instances of APIs added by Custom resource definitions (CRDs) to extend OpenShift resources. 
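To make the Cluster Operator flow described in Section 1.4.1 concrete, the following sketch declares a Kafka custom resource from the command line. The cluster name, namespace, replica counts, and listener and storage settings are assumptions for illustration only; the exact fields available depend on your AMQ Streams version, so treat this as a sketch rather than a production configuration.

# Sketch: declare a Kafka custom resource; the Cluster Operator deploys a matching cluster.
oc apply -n my-kafka-project -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF

# Watch the Cluster Operator create the cluster and its operands.
oc get kafka my-cluster -n my-kafka-project -o yaml
oc get pods -n my-kafka-project -l strimzi.io/cluster=my-cluster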
CRDs act as configuration instructions to describe the custom resources in an OpenShift cluster, and are provided with AMQ Streams for each Kafka component used in a deployment, as well as users and topics. CRDs and custom resources are defined as YAML files. Example YAML files are provided with the AMQ Streams distribution. CRDs also allow AMQ Streams resources to benefit from native OpenShift features like CLI accessibility and configuration validation. Additional resources Extend the Kubernetes API with CustomResourceDefinitions 1.5.1. AMQ Streams custom resource example CRDs require a one-time installation in a cluster to define the schemas used to instantiate and manage AMQ Streams-specific resources. After a new custom resource type is added to your cluster by installing a CRD, you can create instances of the resource based on its specification. Depending on the cluster setup, installation typically requires cluster admin privileges. Note Access to manage custom resources is limited to AMQ Streams administrators. For more information, see Designating AMQ Streams administrators in the Deploying and Upgrading AMQ Streams on OpenShift guide. A CRD defines a new kind of resource, such as kind:Kafka , within an OpenShift cluster. The Kubernetes API server allows custom resources to be created based on the kind and understands from the CRD how to validate and store the custom resource when it is added to the OpenShift cluster. Warning When CRDs are deleted, custom resources of that type are also deleted. Additionally, the resources created by the custom resource, such as pods and statefulsets are also deleted. Each AMQ Streams-specific custom resource conforms to the schema defined by the CRD for the resource's kind . The custom resources for AMQ Streams components have common configuration properties, which are defined under spec . To understand the relationship between a CRD and a custom resource, let's look at a sample of the CRD for a Kafka topic. Kafka topic CRD apiVersion: kafka.strimzi.io/v1beta1 kind: CustomResourceDefinition metadata: 1 name: kafkatopics.kafka.strimzi.io labels: app: strimzi spec: 2 group: kafka.strimzi.io versions: v1beta1 scope: Namespaced names: # ... singular: kafkatopic plural: kafkatopics shortNames: - kt 3 additionalPrinterColumns: 4 # ... subresources: status: {} 5 validation: 6 openAPIV3Schema: properties: spec: type: object properties: partitions: type: integer minimum: 1 replicas: type: integer minimum: 1 maximum: 32767 # ... 1 The metadata for the topic CRD, its name and a label to identify the CRD. 2 The specification for this CRD, including the group (domain) name, the plural name and the supported schema version, which are used in the URL to access the API of the topic. The other names are used to identify instance resources in the CLI. For example, oc get kafkatopic my-topic or oc get kafkatopics . 3 The shortname can be used in CLI commands. For example, oc get kt can be used as an abbreviation instead of oc get kafkatopic . 4 The information presented when using a get command on the custom resource. 5 The current status of the CRD as described in the schema reference for the resource. 6 openAPIV3Schema validation provides validation for the creation of topic custom resources. For example, a topic requires at least one partition and one replica. Note You can identify the CRD YAML files supplied with the AMQ Streams installation files, because the file names contain an index number followed by 'Crd'. 
Here is a corresponding example of a KafkaTopic custom resource. Kafka topic custom resource apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaTopic 1 metadata: name: my-topic labels: strimzi.io/cluster: my-cluster 2 spec: 3 partitions: 1 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 status: conditions: 4 lastTransitionTime: "2019-08-20T11:37:00.706Z" status: "True" type: Ready observedGeneration: 1 / ... 1 The kind and apiVersion identify the CRD of which the custom resource is an instance. 2 A label, applicable only to KafkaTopic and KafkaUser resources, that defines the name of the Kafka cluster (which is same as the name of the Kafka resource) to which a topic or user belongs. 3 The spec shows the number of partitions and replicas for the topic as well as the configuration parameters for the topic itself. In this example, the retention period for a message to remain in the topic and the segment file size for the log are specified. 4 Status conditions for the KafkaTopic resource. The type condition changed to Ready at the lastTransitionTime . Custom resources can be applied to a cluster through the platform CLI. When the custom resource is created, it uses the same validation as the built-in resources of the Kubernetes API. After a KafkaTopic custom resource is created, the Topic Operator is notified and corresponding Kafka topics are created in AMQ Streams. 1.6. Listener configuration Listeners are used to connect to Kafka brokers. AMQ Streams provides a generic GenericKafkaListener schema with properties to configure listeners through the Kafka resource. The GenericKafkaListener provides a flexible approach to listener configuration. You can specify properties to configure internal listeners for connecting within the OpenShift cluster, or external listeners for connecting outside the OpenShift cluster. Generic listener configuration Each listener is defined as an array in the Kafka resource . For more information on listener configuration, see the GenericKafkaListener schema reference . Generic listener configuration replaces the approach to listener configuration using the KafkaListeners schema reference , which is deprecated . However, you can convert the old format into the new format with backwards compatibility. The KafkaListeners schema uses sub-properties for plain , tls and external listeners, with fixed ports for each. Because of the limits inherent in the architecture of the schema, it is only possible to configure three listeners, with configuration options limited to the type of listener. With the GenericKafkaListener schema, you can configure as many listeners as required, as long as their names and ports are unique. You might want to configure multiple external listeners, for example, to handle access from networks that require different authentication mechanisms. Or you might need to join your OpenShift network to an outside network. In which case, you can configure internal listeners (using the useServiceDnsDomain property) so that the OpenShift service DNS domain (typically .cluster.local ) is not used. Configuring listeners to secure access to Kafka brokers You can configure listeners for secure connection using authentication. For more information on securing access to Kafka brokers, see Managing access to Kafka . Configuring external listeners for client access outside OpenShift You can configure external listeners for client access outside an OpenShift environment using a specified connection mechanism, such as a loadbalancer. 
For more information on the configuration options for connecting an external client, see Configuring external listeners . Listener certificates You can provide your own server certificates, called Kafka listener certificates , for TLS listeners or external listeners which have TLS encryption enabled. For more information, see Kafka listener certificates . 1.7. Document Conventions Replaceables In this document, replaceable text is styled in monospace , with italics, uppercase, and hyphens. For example, in the following code, you will want to replace MY-NAMESPACE with the name of your namespace:
[ "apiVersion: kafka.strimzi.io/v1beta1 kind: CustomResourceDefinition metadata: 1 name: kafkatopics.kafka.strimzi.io labels: app: strimzi spec: 2 group: kafka.strimzi.io versions: v1beta1 scope: Namespaced names: # singular: kafkatopic plural: kafkatopics shortNames: - kt 3 additionalPrinterColumns: 4 # subresources: status: {} 5 validation: 6 openAPIV3Schema: properties: spec: type: object properties: partitions: type: integer minimum: 1 replicas: type: integer minimum: 1 maximum: 32767 #", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaTopic 1 metadata: name: my-topic labels: strimzi.io/cluster: my-cluster 2 spec: 3 partitions: 1 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 status: conditions: 4 lastTransitionTime: \"2019-08-20T11:37:00.706Z\" status: \"True\" type: Ready observedGeneration: 1 /", "sed -i 's/namespace: .*/namespace: MY-NAMESPACE /' install/cluster-operator/*RoleBinding*.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_openshift/overview-str
Virtualization
Virtualization OpenShift Container Platform 4.17 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/virtualization/index
Chapter 37. Kernel Modules
Chapter 37. Kernel Modules The Linux kernel has a modular design. At boot time, only a minimal resident kernel is loaded into memory. Thereafter, whenever a user requests a feature that is not present in the resident kernel, a kernel module , sometimes referred to as a driver , is dynamically loaded into memory. During installation, the hardware on the system is probed. Based on this probing and the information provided by the user, the installation program decides which modules need to be loaded at boot time. The installation program sets up the dynamic loading mechanism to work transparently. If new hardware is added after installation and the hardware requires a kernel module, the system must be configured to load the proper kernel module for the new hardware. When the system is booted with the new hardware, the Kudzu program runs, detects the new hardware if it is supported, and configures the module for it. The module can also be specified manually by editing the module configuration file, /etc/modprobe.conf . Note Video card modules used to display the X Window System interface are part of the xorg-X11 packages, not the kernel; thus, this chapter does not apply to them. For example, if a system included an SMC EtherPower 10 PCI network adapter, the module configuration file contains the following line: If a second network card is added to the system and is identical to the first card, add the following line to /etc/modprobe.conf : Refer to the Reference Guide for an alphabetical list of kernel modules and supported hardware for those modules. 37.1. Kernel Module Utilities A group of commands for managing kernel modules is available if the module-init-tools package is installed. Use these commands to determine if a module has been loaded successfully or when trying different modules for a piece of new hardware. The command /sbin/lsmod displays a list of currently loaded modules. For example: For each line, the first column is the name of the module, the second column is the size of the module, and the third column is the use count. The /sbin/lsmod output is less verbose and easier to read than the output from viewing /proc/modules . To load a kernel module, use the /sbin/modprobe command followed by the kernel module name. By default, modprobe attempts to load the module from the /lib/modules/ <kernel-version> /kernel/drivers/ subdirectories. There is a subdirectory for each type of module, such as the net/ subdirectory for network interface drivers. Some kernel modules have module dependencies, meaning that other modules must be loaded first for it to load. The /sbin/modprobe command checks for these dependencies and loads the module dependencies before loading the specified module. For example, the command loads any module dependencies and then the e100 module. To print to the screen all commands as /sbin/modprobe executes them, use the -v option. For example: Output similar to the following is displayed: The /sbin/insmod command also exists to load kernel modules; however, it does not resolve dependencies. Thus, it is recommended that the /sbin/modprobe command be used. To unload kernel modules, use the /sbin/rmmod command followed by the module name. The rmmod utility only unloads modules that are not in use and that are not a dependency of other modules in use. For example, the command unloads the e100 kernel module. Another useful kernel module utility is modinfo . Use the command /sbin/modinfo to display information about a kernel module. 
The general syntax is: Options include -d , which displays a brief description of the module, and -p , which lists the parameters the module supports. For a complete list of options, refer to the modinfo man page ( man modinfo ).
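Putting these utilities together, the following is a short illustrative session using the e100 module discussed above; it is a sketch only, and the module name and output will differ depending on your hardware.

# List currently loaded modules and check whether e100 is among them.
/sbin/lsmod | grep e100

# Display a brief description and the supported parameters of the module.
/sbin/modinfo -d e100
/sbin/modinfo -p e100

# Load the module (and its dependencies), printing each command as it runs.
/sbin/modprobe -v e100

# Unload the module when it is no longer in use.
/sbin/rmmod e100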
[ "alias eth0 tulip", "alias eth1 tulip", "Module Size Used by nfs 218437 1 lockd 63977 2 nfs parport_pc 24705 1 lp 12077 0 parport 37129 2 parport_pc,lp autofs4 23237 2 i2c_dev 11329 0 i2c_core 22081 1 i2c_dev sunrpc 157093 5 nfs,lockd button 6481 0 battery 8901 0 ac 4805 0 md5 4033 1 ipv6 232833 16 ohci_hcd 21713 0 e100 39493 0 mii 4673 1 e100 floppy 58481 0 sg 33377 0 dm_snapshot 17029 0 dm_zero 2369 0 dm_mirror 22957 2 ext3 116809 2 jbd 71257 1 ext3 dm_mod 54741 6 dm_snapshot,dm_zero,dm_mirror ips 46173 2 aic7xxx 148121 0 sd_mod 17217 3 scsi_mod 121421 4 sg,ips,aic7xxx,sd_mod", "/sbin/modprobe e100", "/sbin/modprobe -v e100", "/sbin/insmod /lib/modules/2.6.9-5.EL/kernel/drivers/net/e100.ko Using /lib/modules/2.6.9-5.EL/kernel/drivers/net/e100.ko Symbol version prefix 'smp_'", "/sbin/rmmod e100", "/sbin/modinfo [options] <module>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Kernel_Modules
Chapter 3. Shenandoah garbage collector modes
Chapter 3. Shenandoah garbage collector modes You can run Shenandoah in three different modes. Select a specific mode with the -XX:ShenandoahGCMode=<name> option. The following list describes each Shenandoah mode: normal/satb (product, default) This mode runs a concurrent garbage collector (GC) with Snapshot-At-The-Beginning (SATB) marking. This marking mode does similar work to G1, the default garbage collector for Red Hat build of OpenJDK 21. iu (experimental) This mode runs a concurrent GC with Incremental Update (IU) marking. It can reclaim unreachable memory more aggressively. This marking mode mirrors the SATB mode. It may make marking less conservative, especially around accessing weak references. passive (diagnostic) This mode runs Stop the World Event GCs. This mode is used for functional testing, but it is sometimes useful for bisecting performance anomalies with GC barriers, or for ascertaining the actual live data size in the application.
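As an illustration, the following command lines show how the mode option might be combined with the flag that enables Shenandoah. The application jar name is a placeholder, and the unlock flags shown for the experimental and diagnostic modes are standard JVM options rather than something defined by this guide; treat the lines as a sketch.

# Default SATB mode (product):
java -XX:+UseShenandoahGC -XX:ShenandoahGCMode=satb -jar myapp.jar

# Incremental Update mode (experimental options unlocked first):
java -XX:+UseShenandoahGC -XX:+UnlockExperimentalVMOptions -XX:ShenandoahGCMode=iu -jar myapp.jar

# Passive mode (diagnostic options unlocked first):
java -XX:+UseShenandoahGC -XX:+UnlockDiagnosticVMOptions -XX:ShenandoahGCMode=passive -jar myapp.jar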
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk_21/different-modes-to-run-shenandoah-gc
Chapter 24. Scheduler [config.openshift.io/v1]
Chapter 24. Scheduler [config.openshift.io/v1] Description Scheduler holds cluster-wide config information to run the Kubernetes Scheduler and influence its placement decisions. The canonical name for this config is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 24.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 24.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description defaultNodeSelector string defaultNodeSelector helps set the cluster-wide default node selector to restrict pod placement to specific nodes. This is applied to the pods created in all namespaces and creates an intersection with any existing nodeSelectors already set on a pod, additionally constraining that pod's selector. For example, defaultNodeSelector: "type=user-node,region=east" would set nodeSelector field in pod spec to "type=user-node,region=east" to all pods created in all namespaces. Namespaces having project-wide node selectors won't be impacted even if this field is set. This adds an annotation section to the namespace. For example, if a new namespace is created with node-selector='type=user-node,region=east', the annotation openshift.io/node-selector: type=user-node,region=east gets added to the project. When the openshift.io/node-selector annotation is set on the project the value is used in preference to the value we are setting for defaultNodeSelector field. For instance, openshift.io/node-selector: "type=user-node,region=west" means that the default of "type=user-node,region=east" set in defaultNodeSelector would not be applied. mastersSchedulable boolean MastersSchedulable allows masters nodes to be schedulable. When this flag is turned on, all the master nodes in the cluster will be made schedulable, so that workload pods can run on them. The default value for this field is false, meaning none of the master nodes are schedulable. Important Note: Once the workload pods start running on the master nodes, extreme care must be taken to ensure that cluster-critical control plane components are not impacted. Please turn on this field after doing due diligence. policy object DEPRECATED: the scheduler Policy API has been deprecated and will be removed in a future release. policy is a reference to a ConfigMap containing scheduler policy which has user specified predicates and priorities. If this ConfigMap is not available scheduler will default to use DefaultAlgorithmProvider. The namespace for this configmap is openshift-config. 
profile string profile sets which scheduling profile should be set in order to configure scheduling decisions for new pods. Valid values are "LowNodeUtilization", "HighNodeUtilization", "NoScoring" Defaults to "LowNodeUtilization" 24.1.2. .spec.policy Description DEPRECATED: the scheduler Policy API has been deprecated and will be removed in a future release. policy is a reference to a ConfigMap containing scheduler policy which has user specified predicates and priorities. If this ConfigMap is not available scheduler will default to use DefaultAlgorithmProvider. The namespace for this configmap is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 24.1.3. .status Description status holds observed values from the cluster. They may not be overridden. Type object 24.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/schedulers DELETE : delete collection of Scheduler GET : list objects of kind Scheduler POST : create a Scheduler /apis/config.openshift.io/v1/schedulers/{name} DELETE : delete a Scheduler GET : read the specified Scheduler PATCH : partially update the specified Scheduler PUT : replace the specified Scheduler /apis/config.openshift.io/v1/schedulers/{name}/status GET : read status of the specified Scheduler PATCH : partially update status of the specified Scheduler PUT : replace status of the specified Scheduler 24.2.1. /apis/config.openshift.io/v1/schedulers HTTP method DELETE Description delete collection of Scheduler Table 24.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Scheduler Table 24.2. HTTP responses HTTP code Reponse body 200 - OK SchedulerList schema 401 - Unauthorized Empty HTTP method POST Description create a Scheduler Table 24.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.4. Body parameters Parameter Type Description body Scheduler schema Table 24.5. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 202 - Accepted Scheduler schema 401 - Unauthorized Empty 24.2.2. /apis/config.openshift.io/v1/schedulers/{name} Table 24.6. 
Global path parameters Parameter Type Description name string name of the Scheduler HTTP method DELETE Description delete a Scheduler Table 24.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 24.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Scheduler Table 24.9. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Scheduler Table 24.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.11. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Scheduler Table 24.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.13. 
Body parameters Parameter Type Description body Scheduler schema Table 24.14. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 401 - Unauthorized Empty 24.2.3. /apis/config.openshift.io/v1/schedulers/{name}/status Table 24.15. Global path parameters Parameter Type Description name string name of the Scheduler HTTP method GET Description read status of the specified Scheduler Table 24.16. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Scheduler Table 24.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.18. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Scheduler Table 24.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.20. Body parameters Parameter Type Description body Scheduler schema Table 24.21. 
HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 401 - Unauthorized Empty
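The endpoints above can be exercised with standard OpenShift tooling. The following is a hypothetical sketch, not part of the API specification itself; it assumes you are logged in to a cluster with sufficient privileges and that the single Scheduler resource uses its canonical name, cluster.

# Read the cluster Scheduler resource through the documented GET endpoint.
curl -k -H "Authorization: Bearer $(oc whoami -t)" \
  "$(oc whoami --show-server)/apis/config.openshift.io/v1/schedulers/cluster"

# Partially update the spec with a merge patch (equivalent to the PATCH endpoint).
oc patch scheduler cluster --type=merge -p '{"spec":{"mastersSchedulable":true}}'

# Confirm the change.
oc get scheduler cluster -o yaml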
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/scheduler-config-openshift-io-v1
8.2.3.3. Differential Backups
8.2.3.3. Differential Backups Differential backups are similar to incremental backups in that both back up only modified files. However, differential backups are cumulative -- in other words, with a differential backup, once a file has been modified it continues to be included in all subsequent differential backups (until the next full backup, of course). This means that each differential backup contains all the files modified since the last full backup, making it possible to perform a complete restoration with only the last full backup and the last differential backup. Like the backup strategy used with incremental backups, differential backups normally follow the same approach: a single periodic full backup followed by more frequent differential backups. The effect of using differential backups in this way is that the differential backups tend to grow a bit over time (assuming different files are modified over the time between full backups). This places differential backups somewhere between incremental backups and full backups in terms of backup media utilization and backup speed, while often providing faster single-file and complete restorations (due to fewer backups to search/restore). Given these characteristics, differential backups are worth careful consideration.
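As a purely illustrative sketch of this strategy, the commands below use GNU tar snapshot files to take one full backup and then cumulative differential backups; the /data and /backup paths, the schedule, and the use of tar itself are assumptions, not a recommendation of a specific backup tool.

# Full backup (for example, weekly): record file state in a snapshot file.
tar --listed-incremental=/backup/full.snar -czf /backup/full.tar.gz /data

# Differential backup (for example, daily): reuse a COPY of the full snapshot file,
# so each archive contains everything changed since the last full backup.
cp /backup/full.snar /tmp/diff.snar
tar --listed-incremental=/tmp/diff.snar -czf /backup/diff-$(date +%F).tar.gz /data

# Restore: extract the last full backup, then only the most recent differential.
tar --listed-incremental=/dev/null -xzf /backup/full.tar.gz -C /
tar --listed-incremental=/dev/null -xzf /backup/diff-<date>.tar.gz -C /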
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-disaster-backups-types-diff
Managing system content and patch updates with Red Hat Insights
Managing system content and patch updates with Red Hat Insights Red Hat Insights 1-latest How to review applicable advisories and affected systems, manage system content, and remediate issues Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/managing_system_content_and_patch_updates_with_red_hat_insights/index
Installing on IBM Cloud
Installing on IBM Cloud OpenShift Container Platform 4.15 Installing OpenShift Container Platform IBM Cloud Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_cloud/index
Image APIs
Image APIs OpenShift Container Platform 4.14 Reference guide for image APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/image_apis/index
Part I. Troubleshoot
Part I. Troubleshoot
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/troubleshoot
A.11. Optional Workaround to Allow for Graceful Shutdown
A.11. Optional Workaround to Allow for Graceful Shutdown The libvirt-guests service has parameter settings that can be configured to ensure that guests shut down properly. It is part of the libvirt installation and is installed by default. This service automatically saves guests to disk when the host shuts down, and restores them to their pre-shutdown state when the host reboots. By default, the service is set to suspend the guest. If you want the guest to be shut down gracefully, you need to change one of the parameters in the libvirt-guests configuration file. Procedure A.5. Changing the libvirt-guests service parameters to allow for the graceful shutdown of guests The procedure described here allows for the graceful shutdown of guest virtual machines when the host physical machine is stuck, powered off, or needs to be restarted. Open the configuration file The configuration file is located in /etc/sysconfig/libvirt-guests . Edit the file, remove the comment mark (#), and change ON_SHUTDOWN=suspend to ON_SHUTDOWN=shutdown . Remember to save the change. URIS - checks the specified connections for a running guest. The default setting functions in the same manner as virsh does when no explicit URI is set. In addition, you can explicitly set the URI in /etc/libvirt/libvirt.conf . Note that when using the libvirt configuration file default setting, no probing is used. ON_BOOT - specifies the action taken on the guests when the host boots. The start option starts all guests that were running prior to shutdown, regardless of their autostart settings. The ignore option does not start the formerly running guests on boot; however, any guest marked as autostart is still automatically started by libvirtd . START_DELAY - sets a delay interval between starting up the guests. This time period is set in seconds. Use the 0 setting to make sure there is no delay and that all guests are started simultaneously. ON_SHUTDOWN - specifies the action taken when the host shuts down. Options that can be set include: suspend , which suspends all running guests using virsh managedsave , and shutdown , which shuts down all running guests. Be careful with the shutdown option, as there is no way to distinguish between a guest that is stuck or ignores shutdown requests and a guest that just needs a longer time to shut down. When setting ON_SHUTDOWN=shutdown , you must also set SHUTDOWN_TIMEOUT to a value suitable for the guests. PARALLEL_SHUTDOWN - dictates that the number of guests being shut down at any time will not exceed the number set in this variable, and the guests are shut down concurrently. If set to 0 , guests are not shut down concurrently. SHUTDOWN_TIMEOUT - the number of seconds to wait for a guest to shut down. If parallel shutdown is enabled, this timeout applies as a timeout for shutting down all guests on a single URI defined in the variable URIS. If SHUTDOWN_TIMEOUT is set to 0 , there is no timeout (use with caution, as guests might not respond to a shutdown request). The default value is 300 seconds (5 minutes). BYPASS_CACHE - can have 2 values, 0 to disable and 1 to enable. If enabled, it bypasses the file system cache when guests are restored. Note that setting this may affect performance and may cause slower operation for some file systems. Start libvirt-guests service If you have not started the service, start the libvirt-guests service. Do not restart the service, as this causes all running guest virtual machines to shut down.
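The edit and service start described in this procedure can also be performed non-interactively. The following is a sketch only; the timeout value is an example, and the sed expressions assume the default commented-out lines shipped in /etc/sysconfig/libvirt-guests, so review the file afterwards.

# Set the shutdown action and a suitable timeout in /etc/sysconfig/libvirt-guests.
sed -i 's/^#\?ON_SHUTDOWN=.*/ON_SHUTDOWN=shutdown/' /etc/sysconfig/libvirt-guests
sed -i 's/^#\?SHUTDOWN_TIMEOUT=.*/SHUTDOWN_TIMEOUT=300/' /etc/sysconfig/libvirt-guests

# Verify the resulting settings.
grep -E '^(ON_SHUTDOWN|SHUTDOWN_TIMEOUT)=' /etc/sysconfig/libvirt-guests

# Start the service if it is not already running (do not restart it).
systemctl is-active libvirt-guests || systemctl start libvirt-guests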
[ "vi /etc/sysconfig/libvirt-guests URIs to check for running guests example: URIS='default xen:/// vbox+tcp://host/system lxc:///' #URIS=default action taken on host boot - start all guests which were running on shutdown are started on boot regardless on their autostart settings - ignore libvirt-guests init script won't start any guest on boot, however, guests marked as autostart will still be automatically started by libvirtd #ON_BOOT=start Number of seconds to wait between each guest start. Set to 0 to allow parallel startup. #START_DELAY=0 action taken on host shutdown - suspend all running guests are suspended using virsh managedsave - shutdown all running guests are asked to shutdown. Please be careful with this settings since there is no way to distinguish between a guest which is stuck or ignores shutdown requests and a guest which just needs a long time to shutdown. When setting ON_SHUTDOWN=shutdown, you must also set SHUTDOWN_TIMEOUT to a value suitable for your guests. ON_SHUTDOWN=shutdown If set to non-zero, shutdown will suspend guests concurrently. Number of guests on shutdown at any time will not exceed number set in this variable. #PARALLEL_SHUTDOWN=0 Number of seconds we're willing to wait for a guest to shut down. If parallel shutdown is enabled, this timeout applies as a timeout for shutting down all guests on a single URI defined in the variable URIS. If this is 0, then there is no time out (use with caution, as guests might not respond to a shutdown request). The default value is 300 seconds (5 minutes). #SHUTDOWN_TIMEOUT=300 If non-zero, try to bypass the file system cache when saving and restoring guests, even though this may give slower operation for some file systems. #BYPASS_CACHE=0" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-manipulating_the_libvirt_guests_configuration_settings
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/operational_measurements/making-open-source-more-inclusive
15.9. Using GVFS Metadata
15.9. Using GVFS Metadata GVFS implements its metadata storage as a set of simple key/value pairs bound to a particular file. This gives users and applications a way to save small pieces of runtime information, such as icon position, last-played location, position in a document, emblems, notes, and so on. Whenever a file or directory is moved, its metadata is moved accordingly, so that it stays connected to the respective file. GVFS stores all metadata privately, so it is available only on the local machine. However, GVFS mounts and removable media are tracked as well. Note Removable media are now mounted in the /run/media/ directory instead of the /media directory. To view and manipulate metadata, you can use: the gvfs-info command; the gvfs-set-attribute command; or any other native GIO way of working with attributes. In the following example, a custom metadata attribute is set. Notice how the metadata persists after a move or rename (see the gvfs-info command output): Example 15.5. Setting Custom Metadata Attribute
[ "touch /tmp/myfile gvfs-info -a 'metadata::*' /tmp/myfile attributes: gvfs-set-attribute -t string /tmp/myfile 'metadata::mynote' 'Please remember to delete this file!' gvfs-info -a 'metadata::*' /tmp/myfile attributes: metadata::mynote: Please remember to delete this file! gvfs-move /tmp/myfile /tmp/newfile gvfs-info -a 'metadata::*' /tmp/newfile attributes: metadata::mynote: Please remember to delete this file!" ]
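As a follow-up to the example above, a custom attribute can also be listed and removed again. The unset type shown below is a sketch based on the same tools; verify the option against the gvfs-set-attribute man page on your system.

# List all metadata attributes set on the file.
gvfs-info -a 'metadata::*' /tmp/newfile

# Remove the custom attribute by setting its type to unset, then confirm.
gvfs-set-attribute -t unset /tmp/newfile 'metadata::mynote'
gvfs-info -a 'metadata::*' /tmp/newfile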
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/using-gvfs-metadata
Chapter 4. Kernel features
Chapter 4. Kernel features This chapter explains the purpose and use of kernel features that enable many user space tools and includes resources for further investigation of those tools. 4.1. Control groups 4.1.1. What is a control group? Note Control Group Namespaces are a Technology Preview in Red Hat Enterprise Linux 7.5 Linux Control Groups (cgroups) enable limits on the use of system hardware, ensuring that an individual process running inside a cgroup only utilizes as much as has been allowed in the cgroups configuration. Control Groups restrict the volume of usage on a resource that has been enabled by a namespace . For example, the network namespace allows a process to access a particular network card, the cgroup ensures that the process does not exceed 50% usage of that card, ensuring bandwidth is available for other processes. Control Group Namespaces provide a virtualized view of individual cgroups through the /proc/self/ns/cgroup interface. The purpose is to prevent leakage of privileged data from the global namespaces to the cgroup and to enable other features, such as container migration. Because it is now much easier to associate a container with a single cgroup, containers have a much more coherent cgroup view, it also enables tasks inside the container to have a virtualized view of the cgroup it belongs to. 4.1.2. What is a namespace? Namespaces are a kernel feature that allow a virtual view of isolated system resources. By isolating a process from system resources, you can specify and control what a process is able to interact with. Namespaces are an essential part of Control Groups. 4.1.3. Supported namespaces The following namespaces are supported from Red Hat Enterprise Linux 7.5 and later Mount The mount namespace isolates file system mount points, enabling each process to have a distinct filesystem space within wich to operate. UTS Hostname and NIS domain name IPC System V IPC, POSIX message queues PID Process IDs Network Network devices, stacks, ports, etc. User User and group IDs Control Groups Isolates cgroups Note Usage of Control Groups is documented in the Resource Management Guide 4.2. Kernel source checker The Linux Kernel Module Source Checker (ksc) is a tool to check for non whitelist symbols in a given kernel module. Red Hat Partners can also use the tool to request review of a symbol for whitelist inclusion, by filing a bug in Red Hat bugzilla database. 4.2.1. Usage The tool accepts the path to a module with the "-k" option Output is saved in USDHOME/ksc-result.txt . If review of the symbols for whitelist addition is requested, then the usage description for each non-whitelisted symbol must be added to the ksc-result.txt file. The request bug can then be filed by running ksc with the "-p" option. Note KSC currently does not support xz compression The ksc tool is unable to process the xz compression method and reports the following error: Until this limitation is resolved, system administrators need to manually uncompress any third party modules using xz compression, before running the ksc tool. 4.3. Direct access for files (DAX) Direct Access for files, known as 'file system dax', or 'fs dax', enables applications to read and write data on a dax-capable storage device without using the page cache to buffer access to the device. This functionality is available when using the 'ext4' or 'xfs' file system, and is enabled either by mounting the file system with -o dax or by adding dax to the options section for the mount entry in /etc/fstab . 
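For illustration, the following sketch mounts a hypothetical persistent-memory device with DAX enabled; the device name /dev/pmem0, the mount point, and the choice of xfs are assumptions and depend entirely on your hardware and layout.

# Create an xfs file system on the dax-capable device and mount it with DAX enabled.
mkfs.xfs /dev/pmem0
mkdir -p /mnt/pmem
mount -o dax /dev/pmem0 /mnt/pmem

# Equivalent persistent configuration: add dax to the options field in /etc/fstab.
# /dev/pmem0  /mnt/pmem  xfs  defaults,dax  0 0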
Further information, including code examples, can be found in the kernel-doc package and is stored at /usr/share/doc/kernel-doc-<version>/Documentation/filesystems/dax.txt where '<version>' is the corresponding kernel version number. 4.4. Memory protection keys for userspace (also known as PKU, or PKEYS) Memory Protection Keys provide a mechanism for enforcing page-based protections without requiring modification of the page tables when an application changes protection domains. They work by dedicating 4 previously ignored bits in each page table entry to a "protection key", giving 16 possible keys. Memory Protection Keys are a hardware feature of some Intel CPU chipsets. To determine if your processor supports this feature, check for the presence of pku in /proc/cpuinfo . To support this feature, the CPUs provide a new user-accessible register (PKRU) with two separate bits (Access Disable and Write Disable) for each key. Two new instructions (RDPKRU and WRPKRU) exist for reading and writing to the new register. Further documentation, including programming examples, can be found in /usr/share/doc/kernel-doc-*/Documentation/x86/protection-keys.txt which is provided by the kernel-doc package. 4.5. Kernel address space layout randomization Kernel Address Space Layout Randomization (KASLR) consists of two parts which work together to enhance the security of the Linux kernel: kernel text KASLR memory management KASLR The physical address and virtual address of the kernel text itself are randomized to different positions separately. The physical address of the kernel can be anywhere under 64TB, while the virtual address of the kernel is restricted to [0xffffffff80000000, 0xffffffffc0000000], a 1GB space. Memory management KASLR has three sections whose starting addresses are randomized in a specific area. KASLR can thus prevent inserting and redirecting the execution of the kernel to malicious code if this code relies on knowing where symbols of interest are located in the kernel address space. Memory management KASLR sections are: direct mapping section vmalloc section vmemmap section KASLR code is now compiled into the Linux kernel, and it is enabled by default. To disable it explicitly, add the nokaslr kernel option to the kernel command line. 4.6. Advanced Error Reporting (AER) 4.6.1. What is AER Advanced Error Reporting ( AER ) is a kernel feature that provides enhanced error reporting for Peripheral Component Interconnect Express ( PCIe ) devices. The AER kernel driver attaches to root ports which support the PCIe AER capability in order to: Gather comprehensive error information if errors occur Report errors to the users Perform error recovery actions Example 4.1.
Example AER output Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: AER: Corrected error received: id=ae00 Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: AER: Multiple Corrected error received: id=ae00 Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0000(Receiver ID) Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: device [8086:2030] error status/mask=000000c0/00002000 Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: [ 6] Bad TLP Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: [ 7] Bad DLLP Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: AER: Multiple Corrected error received: id=ae00 Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0000(Receiver ID) Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: device [8086:2030] error status/mask=00000040/00002000 When AER captures an error, it sends an error message to the console. If the error is repairable, the console output is a warning. 4.6.2. Collecting and displaying AER messages In order to collect and display AER messages, use the rasdaemon program. Procedure Install the rasdaemon package. ~]# yum install rasdaemon Enable and start the rasdaemon service. ~]# systemctl enable --now rasdaemon Run the ras-mc-ctl command that displays a summary of the logged errors (the --summary option) or displays the errors stored at the error database (the --errors option). ~]# ras-mc-ctl --summary ~]# ras-mc-ctl --errors Additional resources For more information on the rasdaemon service, see the rasdaemon(8) manual page. For more information on the ras-mc-ctl service, see the ras-mc-ctl(8) manual page.
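Besides rasdaemon, you can also check the kernel ring buffer or the systemd journal directly for AER messages; this is only a quick sketch, and the exact message text varies by device:

# Show AER-related messages from the kernel ring buffer
USD dmesg | grep -i 'AER'
# Or query the systemd journal for kernel messages about PCIe bus errors
USD journalctl -k | grep -i 'PCIe Bus Error'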
[ "ksc -k e1000e.ko Checking against architecture x86_64 Total symbol usage: 165 Total Non white list symbol usage: 74 ksc -k /path/to/module", "Invalid architecture, (Only kernel object files are supported)", "grep pku /proc/cpuinfo", "Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: AER: Corrected error received: id=ae00 Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: AER: Multiple Corrected error received: id=ae00 Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0000(Receiver ID) Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: device [8086:2030] error status/mask=000000c0/00002000 Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: [ 6] Bad TLP Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: [ 7] Bad DLLP Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: AER: Multiple Corrected error received: id=ae00 Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0000(Receiver ID) Feb 5 15:41:33 hostname kernel: pcieport 10003:00:00.0: device [8086:2030] error status/mask=00000040/00002000", "~]# yum install rasdaemon", "~]# systemctl enable --now rasdaemon", "~]# ras-mc-ctl --summary ~]# ras-mc-ctl --errors" ]
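Relating back to the Memory Protection Keys and KASLR sections above, the following sketch checks the state of those features on a running Red Hat Enterprise Linux 7 system; run the grubby step as root and reboot for it to take effect:

# Check whether the processor advertises Memory Protection Keys support
USD grep -o pku /proc/cpuinfo | sort -u
# Check whether the kernel was booted with KASLR explicitly disabled
USD grep -o nokaslr /proc/cmdline
# Disable KASLR on all installed kernels (for debugging only)
~]# grubby --update-kernel=ALL --args="nokaslr"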
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/kernel_administration_guide/kernel_features
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_jdk_flight_recorder_with_red_hat_build_of_openjdk/proc-providing-feedback-on-redhat-documentation
12.2. XML Representation of a Storage Domain
12.2. XML Representation of a Storage Domain Example 12.1. An XML representation of a storage domain
[ "<storage_domain id=\"fabe0451-701f-4235-8f7e-e20e458819ed\" href=\"/ovirt-engine/api/storagedomains/fabe0451-701f-4235-8f7e-e20e458819ed\"> <name>data0</name> <link rel=\"permissions\" href=\"/ovirt-engine/api/storagedomains/be24cd98-8e23-49c7-b425-1a12bd12abb0/permissions\"/> <link rel=\"files\" href=\"/ovirt-engine/api/storagedomains/be24cd98-8e23-49c7-b425-1a12bd12abb0/files\"/> <type>data</type> <master>true</master> <storage> <type>nfs</type> <address>172.31.0.6</address> <path>/exports/RHEVX/images/0</path> </storage> <available>156766306304</available> <used>433791696896</used> <committed>617401548800</committed> <storage_format>v1</storage_format> <wipe_after_delete>true</wipe_after_delete> <warning_low_space_indicator>10</warning_low_space_indicator> <critical_space_action_blocker>5</critical_space_action_blocker> </storage_domain>" ]
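You can fetch this representation directly from the REST API. The following curl sketch assumes a hypothetical Manager host name and administrator credentials, and reuses the identifier from the example above:

# Retrieve the storage domain shown above (host name and password are placeholders)
USD curl -k -u 'admin@internal:password' -H 'Accept: application/xml' \
     'https://rhevm.example.com/ovirt-engine/api/storagedomains/fabe0451-701f-4235-8f7e-e20e458819ed'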
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_a_storage_domain
Chapter 13. Managing Storage for Virtual Machines
Chapter 13. Managing Storage for Virtual Machines This chapter provides information about storage for virtual machines. Virtual storage is abstracted from the physical storage allocated to a virtual machine connection. The storage is attached to the virtual machine using paravirtualized or emulated block device drivers. 13.1. Storage Concepts A storage pool is a quantity of storage set aside for use by guest virtual machines. Storage pools are divided into storage volumes . Each storage volume is assigned to a guest virtual machine as a block device on a guest bus. Storage pools and volumes are managed using libvirt . With libvirt 's remote protocol, it is possible to manage all aspects of a guest virtual machine's life cycle, as well as the configuration of the resources required by the guest virtual machine. These operations can be performed on a remote host. As a result, a management application, such as the Virtual Machine Manager , using libvirt can enable a user to perform all the required tasks for configuring the host physical machine for a guest virtual machine. These include allocating resources, running the guest virtual machine, shutting it down, and de-allocating the resources, without requiring shell access or any other control channel. The libvirt API can be used to query the list of volumes in the storage pool or to get information regarding the capacity, allocation, and available storage in the storage pool. A storage volume in the storage pool may be queried to get information such as allocation and capacity, which may differ for sparse volumes. Note For more information about sparse volumes, see the Virtualization Getting Started Guide . For storage pools that support it, the libvirt API can be used to create, clone, resize, and delete storage volumes. The APIs can also be used to upload data to storage volumes, download data from storage volumes, or wipe data from storage volumes. Once a storage pool is started, a storage volume can be assigned to a guest using the storage pool name and storage volume name instead of the host path to the volume in the domain XML. Note For more information about the domain XML, see Chapter 23, Manipulating the Domain XML . Storage pools can be stopped (destroyed). This removes the abstraction of the data, but keeps the data intact. For example, consider an NFS share that is mounted with mount -t nfs nfs.example.com:/path/to/share /path/to/data . The storage administrator responsible for the share could define an NFS storage pool on the virtualization host to describe the exported server path and the client target path. This allows libvirt to perform the mount either automatically when libvirt is started or as needed while libvirt is running. Files in the NFS server's exported directory are listed as storage volumes within the NFS storage pool. When the storage volume is added to the guest, the administrator does not need to add the target path to the volume; only the storage pool and storage volume names are required. Therefore, if the target client path changes, it does not affect the virtual machine. When the storage pool is started, libvirt mounts the share on the specified directory, just as if the system administrator logged in and executed mount nfs.example.com:/path/to/share /vmdata . If the storage pool is configured to autostart, libvirt ensures that the NFS shared disk is mounted on the directory specified when libvirt is started.
Once the storage pool is started, the files in the NFS shared disk are reported as storage volumes, and the storage volumes' paths may be queried using the libvirt API. The storage volumes' paths can then be copied into the section of a guest virtual machine's XML definition that describes the source storage for the guest virtual machine's block devices. In the case of NFS, an application that uses the libvirt API can create and delete storage volumes in the storage pool (files in the NFS share) up to the limit of the size of the pool (the storage capacity of the share). Not all storage pool types support creating and deleting volumes. Stopping the storage pool (pool-destroy) undoes the start operation, in this case, unmounting the NFS share. The data on the share is not modified by the destroy operation, despite what the name of the command suggests. For more details, see man virsh . Procedure 13.1. Creating and Assigning Storage This procedure provides a high-level understanding of the steps needed to create and assign storage for virtual machine guests. Create storage pools Create one or more storage pools from available storage media. For more information, see Section 13.2, "Using Storage Pools" . Create storage volumes Create one or more storage volumes from the available storage pools. For more information, see Section 13.3, "Using Storage Volumes" . Assign storage devices to a virtual machine. Assign one or more storage devices abstracted from storage volumes to a guest virtual machine. For more information, see Section 13.3.6, "Adding Storage Devices to Guests" .
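The following virsh sketch mirrors Procedure 13.1 for the NFS example discussed above; the pool name, share, guest name, volume name, and target device are illustrative only:

# Step 1: define, start, and autostart an NFS-backed storage pool
USD virsh pool-define-as nfspool netfs --source-host nfs.example.com \
      --source-path /path/to/share --target /vmdata
USD virsh pool-start nfspool
USD virsh pool-autostart nfspool
# Step 2: create a storage volume in the pool
USD virsh vol-create-as nfspool guest1disk.img 10G
# The files in the share are reported as storage volumes
USD virsh vol-list nfspool
# Step 3: assign the volume to a guest as a block device
USD virsh attach-disk guest1 /vmdata/guest1disk.img vdb --persistent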
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-managing_virtual_storage
Chapter 1. Availability
Chapter 1. Availability Red Hat provides a distribution of .NET that enables developers to create applications using the C#, Visual Basic, and F# languages and then deploy them on Red Hat Enterprise Linux (RHEL), Red Hat OpenShift Container Platform, or other platforms. A no-cost Red Hat Enterprise Linux Developer Subscription is available, including a full suite of tools for container development. For RHEL 8.9 and later and RHEL 9.3 and later, .NET 8.0 is available as the following RPMs in the AppStream repositories: Note The AppStream repositories are enabled by default in RHEL 8 and RHEL 9. dotnet-sdk-8.0 : Includes the .NET 8.0 Software Development Kit (SDK) and all the runtimes. aspnetcore-runtime-8.0 : Includes the .NET runtime and the ASP.NET Core runtime. Install this package to run ASP.NET Core-based applications. dotnet-runtime-8.0 : Includes only the .NET 8.0 runtime. Install this to use the runtime without the SDK. .NET 8.0 is available for aarch64 , ppc64le , s390x , and x86_64 architectures on RHEL 8, RHEL 9, and OpenShift Container Platform. Full instructions for installing .NET 8.0 on RHEL are available in the Getting started with .NET on RHEL 8 and Getting started with .NET on RHEL 9 guides.
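As a quick illustration of the packages listed above, the following sketch installs the SDK from the AppStream repository and verifies the installation; run the install step as root:

# Install the .NET 8.0 SDK (pulls in the ASP.NET Core and .NET runtimes as dependencies)
~]# yum install dotnet-sdk-8.0
# Verify the installed SDK and runtimes
USD dotnet --version
USD dotnet --list-runtimes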
null
https://docs.redhat.com/en/documentation/net/8.0/html/release_notes_for_.net_8.0_rpm_packages/availability_release-notes-for-dotnet-rpms
Chapter 11. Connecting Red Hat OpenShift to the subscriptions service
Chapter 11. Connecting Red Hat OpenShift to the subscriptions service If you use Red Hat OpenShift products, the steps you must do to connect the correct data collection tools to the subscriptions service depend on multiple factors. These factors include the installed version of Red Hat OpenShift Container Platform and Red Hat OpenShift Dedicated, whether you are working in a connected or disconnected environment, and whether you are using Red Hat Enterprise Linux, Red Hat Enterprise Linux CoreOS, or both as the operating system for clusters. The subscriptions service is designed to work with customers who use Red Hat OpenShift in connected environments. One example of this customer profile is using RHOCP 4.1 and later with an Annual subscription with connected clusters. For this customer profile, Red Hat OpenShift has a robust set of tools that can perform the data collection. The connected clusters report data to Red Hat through Red Hat OpenShift Cluster Manager, Telemetry, and the other monitoring stack tools to supply information to the data pipeline for the subscriptions service. Customers with disconnected RHOCP 4.1 and later environments can use Red Hat OpenShift as a data collection tool by manually creating each cluster in Red Hat OpenShift Cluster Manager. Customers who use Red Hat OpenShift 3.11 can also use the subscriptions service. However, for Red Hat OpenShift version 3.11, the communication with the subscriptions service is enabled through other tools that supply the data pipeline, such as Insights, Satellite, or Red Hat Subscription Management. Note For customers who use Red Hat OpenShift Container Platform or Red Hat OpenShift Dedicated 4.7 and later with a pay-as-you-go On-Demand subscription (available for connected clusters only), data collection is done through the same tools as those used by Red Hat OpenShift Container Platform 4.1 and later with an Annual subscription. Procedure Complete the following steps, based on your version of Red Hat OpenShift Container Platform and the cluster operating system for worker nodes. For Red Hat OpenShift Container Platform 4.1 or later with Red Hat Enterprise Linux CoreOS For this profile, cluster architecture is optimized to report data to Red Hat OpenShift Cluster Manager through the Telemetry tool in the monitoring stack. Therefore, setup of the subscriptions service reporting is essentially confirming that this monitoring tool is active. Make sure that all clusters are connected to Red Hat OpenShift Cluster Manager through the Telemetry monitoring component. If so, no additional configuration is needed. The subscriptions service is ready to track Red Hat OpenShift Container Platform usage and capacity. For Red Hat OpenShift Container Platform 4.1 or later with a mixed environment with Red Hat Enterprise Linux CoreOS and Red Hat Enterprise Linux For this profile, data gathering is affected by the change in the Red Hat OpenShift Container Platform reporting models between Red Hat OpenShift major versions 3 and 4. Version 3 relies upon RHEL to report RHEL cluster usage at the node level. This is still the reporting model used for version 4 RHEL nodes. However, the version 4 era reporting model reports Red Hat Enterprise Linux CoreOS usage at the cluster level through Red Hat OpenShift tools. The tools that are used to gather this data are different. Therefore, the setup of the subscriptions service reporting is to confirm that both tool sets are configured correctly. 
Make sure that all clusters are connected to Red Hat OpenShift Cluster Manager through the Red Hat OpenShift Container Platform Telemetry monitoring component. Make sure that Red Hat Enterprise Linux nodes in all clusters are connected to at least one of the Red Hat Enterprise Linux data collection tools, Insights, Satellite, or Red Hat Subscription Management. For more information, see the instructions about connecting to each of these data collection tools in this guide. For Red Hat OpenShift Container Platform version 3.11 Red Hat OpenShift Container Platform version 3.11 reports cluster usage based on the Red Hat Enterprise Linux nodes in the cluster. Therefore, for this profile, the subscriptions service reporting uses the standard Red Hat Enterprise Linux data collection tools. Make sure that all Red Hat Enterprise Linux nodes in all clusters are connected to at least one of the Red Hat Enterprise Linux data collection tools, Insights, Satellite, or Red Hat Subscription Management. For more information, see the instructions about connecting to each of these data collection tools in this guide.
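For connected RHOCP 4.x clusters, one quick way to confirm that the Telemetry component is in place is to check that the telemeter client is running in the monitoring stack. This is a hedged sketch that assumes cluster-admin access and the default openshift-monitoring namespace:

# Confirm the telemeter client pod is running in the monitoring stack
USD oc -n openshift-monitoring get pods | grep telemeter-client
# Confirm the cluster version and overall status
USD oc get clusterversion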
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/proc-connecting-rhocp-to-subscriptionwatch_assembly-setting-up-subscriptionwatch-ctxt
Chapter 14. Installing a cluster on AWS with compute nodes on AWS Local Zones
Chapter 14. Installing a cluster on AWS with compute nodes on AWS Local Zones You can quickly install an OpenShift Container Platform cluster on Amazon Web Services (AWS) Local Zones by setting the zone names in the edge compute pool of the install-config.yaml file, or install a cluster in an existing Amazon Virtual Private Cloud (VPC) with Local Zone subnets. AWS Local Zones is an infrastructure that places cloud resources close to metropolitan regions. For more information, see the AWS Local Zones Documentation . 14.1. Infrastructure prerequisites You reviewed details about OpenShift Container Platform installation and update processes. You are familiar with Selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Warning If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster must access. You noted the region and supported AWS Local Zones locations to create the network resources in. You read the AWS Local Zones features in the AWS documentation. You added permissions for creating network resources that support AWS Local Zones to the Identity and Access Management (IAM) user or role. The following example shows a policy that grants a user or role access to enable a zone group, which is required for creating network resources that support AWS Local Zones. Example of an additional IAM policy with the ec2:ModifyAvailabilityZoneGroup permission attached to an IAM user or role. { "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:ModifyAvailabilityZoneGroup" ], "Effect": "Allow", "Resource": "*" } ] } 14.2. About AWS Local Zones and edge compute pool Read the following sections to understand infrastructure behaviors and cluster limitations in an AWS Local Zones environment. 14.2.1. Cluster limitations in AWS Local Zones Some limitations exist when you try to deploy a cluster with a default installation configuration in an Amazon Web Services (AWS) Local Zone. Important The following list details limitations when deploying a cluster in a pre-configured AWS zone: The maximum transmission unit (MTU) between an Amazon EC2 instance in a zone and an Amazon EC2 instance in the Region is 1300 . This causes the cluster-wide network MTU to change according to the network plugin that is used with the deployment. Network resources such as Network Load Balancer (NLB), Classic Load Balancer, and Network Address Translation (NAT) Gateways are not globally supported. For an OpenShift Container Platform cluster on AWS, the AWS Elastic Block Storage (EBS) gp3 type volume is the default for node volumes and the default for the storage class. This volume type is not globally available on zone locations. By default, the nodes running in zones are deployed with the gp2 EBS volume. The gp2-csi StorageClass parameter must be set when creating workloads on zone nodes.
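If you save the IAM policy example from the prerequisites above to a file, one way to attach it to a user is with the AWS CLI. This is a hedged sketch; the user name, policy name, and file name are placeholders:

# Attach the zone-group opt-in policy as an inline user policy (names are placeholders)
USD aws iam put-user-policy \
    --user-name openshift-installer \
    --policy-name local-zone-group-opt-in \
    --policy-document file://zone-group-policy.json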
If you want the installation program to automatically create Local Zone subnets for your OpenShift Container Platform cluster, specific configuration limitations apply with this method. Important The following configuration limitation applies when you set the installation program to automatically create subnets for your OpenShift Container Platform cluster: When the installation program creates private subnets in AWS Local Zones, the program associates each subnet with the route table of its parent zone. This operation ensures that each private subnet can route egress traffic to the internet by way of NAT Gateways in an AWS Region. If the parent-zone route table does not exist during cluster installation, the installation program associates any private subnet with the first available private route table in the Amazon Virtual Private Cloud (VPC). This approach is valid only for AWS Local Zones subnets in an OpenShift Container Platform cluster. 14.2.2. About edge compute pools Edge compute nodes are tainted compute nodes that run in AWS Local Zones locations. When deploying a cluster that uses Local Zones, consider the following points: Amazon EC2 instances in the Local Zones are more expensive than Amazon EC2 instances in the Availability Zones. The latency is lower between the applications running in AWS Local Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Local Zones and Availability Zones. Important Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Local Zones and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must be always less than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example: OVN-Kubernetes has an overhead of 100 bytes . The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing. For more information, see How Local Zones work in the AWS documentation. OpenShift Container Platform 4.12 introduced a new compute pool, edge , that is designed for use in remote zones. The edge compute pool configuration is common between AWS Local Zones locations. Because of the type and size limitations of resources like EC2 and EBS on Local Zones resources, the default instance type can vary from the traditional compute pool. The default Elastic Block Store (EBS) for Local Zones locations is gp2 , which differs from the non-edge compute pool. The instance type used for each Local Zones on an edge compute pool also might differ from other compute pools, depending on the instance offerings on the zone. The edge compute pool creates new labels that developers can use to deploy applications onto AWS Local Zones nodes. The new labels are: node-role.kubernetes.io/edge='' machine.openshift.io/zone-type=local-zone machine.openshift.io/zone-group=USDZONE_GROUP_NAME By default, the machine sets for the edge compute pool define the taint of NoSchedule to prevent other workloads from spreading on Local Zones instances. Users can only run user workloads if they define tolerations in the pod specification. Additional resources MTU value selection Changing the MTU for the cluster network Understanding taints and tolerations Storage classes Ingress Controller sharding 14.3. Installation prerequisites Before you install a cluster in an AWS Local Zones environment, you must configure your infrastructure so that it can adopt Local Zone capabilities. 14.3.1. 
Opting in to an AWS Local Zones If you plan to create subnets in AWS Local Zones, you must opt in to each zone group separately. Prerequisites You have installed the AWS CLI. You have determined an AWS Region for where you want to deploy your OpenShift Container Platform cluster. You have attached a permissive IAM policy to a user or role account that opts in to the zone group. Procedure List the zones that are available in your AWS Region by running the following command: Example command for listing available AWS Local Zones in an AWS Region USD aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=local-zone \ --all-availability-zones Depending on the AWS Region, the list of available zones might be long. The command returns the following fields: ZoneName The name of the Local Zones. GroupName The group that comprises the zone. To opt in to the Region, save the name. Status The status of the Local Zones group. If the status is not-opted-in , you must opt in the GroupName as described in the step. Opt in to the zone group on your AWS account by running the following command: USD aws ec2 modify-availability-zone-group \ --group-name "<value_of_GroupName>" \ 1 --opt-in-status opted-in 1 Replace <value_of_GroupName> with the name of the group of the Local Zones where you want to create subnets. For example, specify us-east-1-nyc-1 to use the zone us-east-1-nyc-1a (US East New York). 14.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 14.3.3. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. 
Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 14.3.4. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 14.3.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 14.3.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 14.4. Preparing for the installation Before you extend nodes to Local Zones, you must prepare certain resources for the cluster installation environment. 14.4.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 14.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 14.4.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform for use with AWS Local Zones. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 14.1. Machine types based on 64-bit x86 architecture for AWS Local Zones c5.* c5d.* m6i.* m5.* r5.* t3.* Additional resources See AWS Local Zones features in the AWS documentation. 14.4.3. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. 
Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 14.4.4. Examples of installation configuration files with edge compute pools The following examples show install-config.yaml files that contain an edge machine pool configuration. Configuration that uses an edge pool with a custom instance type apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Instance types differ between locations. To verify availability in the Local Zones in which the cluster runs, see the AWS documentation. Configuration that uses an edge pool with a custom Amazon Elastic Block Store (EBS) type apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-phx-2a rootVolume: type: gp3 size: 120 platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Elastic Block Storage (EBS) types differ between locations. Check the AWS documentation to verify availability in the Local Zones in which the cluster runs. Configuration that uses an edge pool with custom security groups apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 Specify the name of the security group as it is displayed on the Amazon EC2 console. Ensure that you include the sg prefix. 14.4.5. Customizing the cluster network MTU Before you deploy a cluster on AWS, you can customize the cluster network maximum transmission unit (MTU) for your cluster network to meet the needs of your infrastructure. By default, when you install a cluster with supported Local Zones capabilities, the MTU value for the cluster network is automatically adjusted to the lowest value that the network plugin accepts. Important Setting an unsupported MTU value for EC2 instances that operate in the Local Zones infrastructure can cause issues for your OpenShift Container Platform cluster. If the Local Zone supports higher MTU values in between EC2 instances in the Local Zone and the AWS Region, you can manually configure the higher value to increase the network performance of the cluster network. You can customize the MTU for a cluster by specifying the networking.clusterNetworkMTU parameter in the install-config.yaml configuration file. Important All subnets in Local Zones must support the higher MTU value, so that each node in that zone can successfully communicate with services in the AWS Region and deploy your workloads. Example of overwriting the default MTU value apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: edge-zone networking: clusterNetworkMTU: 8901 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 
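After the cluster is installed, you can confirm which MTU value was applied to the cluster network. The following is a rough sketch rather than an authoritative procedure; it simply inspects the cluster network configuration objects for MTU-related fields:

# Inspect the MTU recorded in the cluster network configuration
USD oc get network.config cluster -o yaml | grep -i mtu
# Inspect the MTU configured for the network plugin (OVN-Kubernetes in this example)
USD oc get network.operator cluster -o yaml | grep -i mtu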
Additional resources For more information about the maximum supported maximum transmission unit (MTU) value, see AWS resources supported in Local Zones in the AWS documentation. 14.5. Cluster installation options for an AWS Local Zones environment Choose one of the following installation options to install an OpenShift Container Platform cluster on AWS with edge compute nodes defined in Local Zones: Fully automated option: Installing a cluster to quickly extend compute nodes to edge compute pools, where the installation program automatically creates infrastructure resources for the OpenShift Container Platform cluster. Existing VPC option: Installing a cluster on AWS into an existing VPC, where you supply Local Zones subnets to the install-config.yaml file. steps Choose one of the following options to install an OpenShift Container Platform cluster in an AWS Local Zones environment: Installing a cluster quickly in AWS Local Zones Installing a cluster in an existing VPC with defined Local Zone subnets 14.6. Install a cluster quickly in AWS Local Zones For OpenShift Container Platform 4.15, you can quickly install a cluster on Amazon Web Services (AWS) to extend compute nodes to Local Zones locations. By using this installation route, the installation program automatically creates network resources and Local Zones subnets for each zone that you defined in your configuration file. To customize the installation, you must modify parameters in the install-config.yaml file before you deploy the cluster. 14.6.1. Modifying an installation configuration file to use AWS Local Zones Modify an install-config.yaml file to include AWS Local Zones. Prerequisites You have configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You are familiar with the configuration limitations that apply when you specify the installation program to automatically create subnets for your OpenShift Container Platform cluster. You opted in to the Local Zones group for each zone. You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml file by specifying Local Zones names in the platform.aws.zones property of the edge compute pool. # ... platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <local_zone_name> #... 1 The AWS Region name. 2 The list of Local Zones names that you use must exist in the same AWS Region specified in the platform.aws.region field. Example of a configuration to install a cluster in the us-west-2 AWS Region that extends edge nodes to Local Zones in Los Angeles and Las Vegas locations apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-las-1a pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' #... Deploy your cluster. Additional resources Creating the installation configuration file Cluster limitations in AWS Local Zones steps Deploying the cluster 14.7. Installing a cluster in an existing VPC that has Local Zone subnets You can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster. 
Installing a cluster on AWS into an existing VPC requires extending compute nodes to the edge of the Cloud Infrastructure by using AWS Local Zones. Local Zone subnets extend regular compute nodes to edge networks. Each edge compute nodes runs a user workload. After you create an Amazon Web Service (AWS) Local Zone environment, and you deploy your cluster, you can use edge compute nodes to create user workloads in Local Zone subnets. Note If you want to create private subnets, you must either modify the provided CloudFormation template or create your own template. You can use a provided CloudFormation template to create network resources. Additionally, you can modify a template to customize your infrastructure or use the information that they contain to create AWS resources according to your company's policies. Important The steps for performing an installer-provisioned infrastructure installation are provided for example purposes only. Installing a cluster in an existing VPC requires that you have knowledge of the cloud provider and the installation process of OpenShift Container Platform. You can use a CloudFormation template to assist you with completing these steps or to help model your own cluster installation. Instead of using the CloudFormation template to create resources, you can decide to use other methods for generating these resources. 14.7.1. Creating a VPC in AWS You can create a Virtual Private Cloud (VPC), and subnets for all Local Zones locations, in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to extend compute nodes to edge locations. You can further customize your VPC to meet your requirements, including a VPN and route tables. You can also add new Local Zones subnets not included at initial deployment. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You opted in to the AWS Local Zones on your AWS account. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "3" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Go to the section of the documentation named "CloudFormation template for the VPC", and then copy the syntax from the provided template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path and the name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster. VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. PublicRouteTableId The ID of the new public route table ID. 14.7.2. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 14.2. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] 14.7.3. Creating subnets in Local Zones Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create the subnets in Local Zones. Complete the following procedure for each Local Zone that you want to deploy compute nodes to. You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You opted in to the Local Zones group. 
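For example, you can confirm the opt-in status of the Local Zones group before you continue. The following check reuses the describe-availability-zones query that appears earlier in this chapter; the region value is a placeholder:

$ aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \
    --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \
    --filters Name=zone-type,Values=local-zone --all-availability-zones

A Local Zone whose Status field reports opted-in is ready for subnet creation.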
Procedure Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the subnets that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the subnets: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="USD{VPC_ID}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{CLUSTER_NAME}" \ 4 ParameterKey=ZoneName,ParameterValue="USD{ZONE_NAME}" \ 5 ParameterKey=PublicRouteTableId,ParameterValue="USD{ROUTE_TABLE_PUB}" \ 6 ParameterKey=PublicSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PUB}" \ 7 ParameterKey=PrivateRouteTableId,ParameterValue="USD{ROUTE_TABLE_PVT}" \ 8 ParameterKey=PrivateSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PVT}" 9 1 <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<local_zone_shortname> . You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 USD{VPC_ID} is the VPC ID, which is the value VpcId in the output of the CloudFormation template for the VPC. 4 USD{CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names. 5 USD{ZONE_NAME} is the name of the Local Zone in which to create the subnets, such as us-west-2-lax-1a . 6 USD{ROUTE_TABLE_PUB} is the PublicRouteTableId extracted from the output of the VPC's CloudFormation stack. 7 USD{SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr . 8 USD{ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack. 9 USD{SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr . Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f Verification Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create your cluster. PublicSubnetId The ID of the public subnet created by the CloudFormation stack. PrivateSubnetId The ID of the private subnet created by the CloudFormation stack. 14.7.4. CloudFormation template for the VPC subnet You can use the following CloudFormation template to deploy the private and public subnets in a zone on Local Zones infrastructure. Example 14.3. CloudFormation template for VPC subnets AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a.
Type: String AllowedPattern: ".+" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join ["", [!Ref PrivateSubnet]] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 14.7.5. Modifying an installation configuration file to use AWS Local Zones subnets Modify your install-config.yaml file to include Local Zones subnets. Prerequisites You created subnets by using the procedure "Creating subnets in Local Zones". You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml configuration file by specifying Local Zones subnets in the platform.aws.subnets parameter. Example installation configuration file with Local Zones subnets # ... platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicSubnetId-LocalZone-1 # ... 1 List of subnet IDs created in the zones: Availability and Local Zones. Additional resources For more information about viewing the CloudFormation stacks that you created, see AWS CloudFormation console . For more information about AWS profile and credential configuration, see Configuration and credential file settings in the AWS documentation. steps Deploying the cluster 14.8. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. 
The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Edge compute pools and AWS Local Zones". 14.9. Optional: Assign public IP addresses to edge compute nodes If your workload requires deploying the edge compute nodes in public subnets on Local Zones infrastructure, you can configure the machine set manifests when installing a cluster. AWS Local Zones infrastructure accesses the network traffic in a specified zone, so applications can take advantage of lower latency when serving end users that are closer to that zone. The default setting that deploys compute nodes in private subnets might not meet your needs, so consider creating edge compute nodes in public subnets when you want to apply more customization to your infrastructure. Important By default, OpenShift Container Platform deploy the compute nodes in private subnets. For best performance, consider placing compute nodes in subnets that have their Public IP addresses attached to the subnets. You must create additional security groups, but ensure that you only open the groups' rules over the internet when you really need to. Procedure Change to the directory that contains the installation program and generate the manifest files. Ensure that the installation manifests get created at the openshift and manifests directory level. USD ./openshift-install create manifests --dir <installation_directory> Edit the machine set manifest that the installation program generates for the Local Zones, so that the manifest gets deployed in public subnets. Specify true for the spec.template.spec.providerSpec.value.publicIP parameter. Example machine set manifest configuration for installing a cluster quickly in Local Zones spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME} Example machine set manifest configuration for installing a cluster in an existing VPC that has Local Zones subnets apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true 14.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
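For example, one quick way to confirm which AWS account and principal your configured credentials resolve to before you start the deployment is the following command. This is an illustrative sanity check only; it is not part of the documented procedure and does not verify the individual permissions that the installation program requires:

$ aws sts get-caller-identity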
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 14.11. Verifying the status of the deployed cluster Verify that your OpenShift Container Platform successfully deployed on AWS Local Zones. 14.11.1. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.11.2. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. 
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources For more information about accessing and understanding the OpenShift Container Platform web console, see Accessing the web console . 14.11.3. Verifying nodes that were created with edge compute pool After you install a cluster that uses AWS Local Zones infrastructure, check the status of the machines that were created by the machine set manifests generated during installation. To check the machine sets created from the subnet you added to the install-config.yaml file, run the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-nyc-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m To check the machines that were created from the machine sets, run the following command: USD oc get machines -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-nyc-1a-wbclh Running c5d.2xlarge us-east-1 us-east-1-nyc-1a 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h To check nodes with edge roles, run the following command: USD oc get nodes -l node-role.kubernetes.io/edge Example output NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f 14.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources For more information about the Telemetry service, see About remote health monitoring . Next steps Validating an installation . If necessary, you can opt out of remote health reporting .
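As a quick follow-up check before you continue with the validation steps linked above (a convenience check only, not a replacement for them), you can confirm that all cluster Operators report Available:

$ oc get clusteroperators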
[ "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }", "aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=local-zone --all-availability-zones", "aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-phx-2a rootVolume: type: gp3 size: 120 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: edge-zone networking: clusterNetworkMTU: 8901 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <local_zone_name> #", "apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-las-1a pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...' 
#", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"3\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: 
\"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]", "aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" 9", "arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f", "aws cloudformation describe-stacks --stack-name <stack_name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. 
Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public\", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"private\", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]", "platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicSubnetId-LocalZone-1", "./openshift-install create manifests --dir <installation_directory>", "spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME}", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-nyc-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m", "oc get machines -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-nyc-1a-wbclh Running c5d.2xlarge us-east-1 us-east-1-nyc-1a 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h", "oc get nodes -l node-role.kubernetes.io/edge", "NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_aws/installing-aws-localzone
Chapter 10. Securing Kafka
Chapter 10. Securing Kafka A secure deployment of Streams for Apache Kafka might encompass one or more of the following security measures: Encryption Streams for Apache Kafka supports Transport Layer Security (TLS), a protocol for encrypted communication. Communication is always encrypted between Streams for Apache Kafka components. To set up TLS-encrypted communication between Kafka and clients, you configure listeners in the Kafka custom resource. Authentication Kafka listeners use authentication to ensure a secure client connection to the Kafka cluster. Clients can also be configured for mutual authentication. Security credentials are created and managed by the Cluster and User Operator. Supported authentication mechanisms: mTLS authentication (on listeners with TLS-enabled encryption) SASL SCRAM-SHA-512 OAuth 2.0 token based authentication Custom authentication (supported by Kafka) Authorization Authorization controls the operations that are permitted on Kafka brokers by specific clients or users. Supported authorization mechanisms: Simple authorization using ACL rules OAuth 2.0 authorization (if you are using OAuth 2.0 token-based authentication) Open Policy Agent (OPA) authorization Custom authorization (supported by Kafka) Federal Information Processing Standards (FIPS) Streams for Apache Kafka is designed to run on FIPS-enabled OpenShift clusters to ensure data security and system interoperability. For more information about the NIST validation program and validated modules, see Cryptographic Module Validation Program on the NIST website.
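The following sketch shows how several of these measures are declared together in the Kafka custom resource. The cluster name, listener name, and port are illustrative, and the example omits the broker, storage, and operator configuration that a complete resource requires:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    listeners:
      # TLS-encrypted internal listener that requires client certificates (mTLS)
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
    # Simple authorization using ACL rules
    authorization:
      type: simple
    # ... remaining broker, storage, and Entity Operator configuration omitted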
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_on_openshift_overview/security-overview_str
Chapter 17. Multiple networks
Chapter 17. Multiple networks 17.1. Understanding multiple networks In Kubernetes, container networking is delegated to networking plugins that implement the Container Network Interface (CNI). OpenShift Container Platform uses the Multus CNI plugin to allow chaining of CNI plugins. During cluster installation, you configure your default pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plugins and attach one or more of these networks to your pods. You can define more than one additional network for your cluster, depending on your needs. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing. 17.1.1. Usage scenarios for an additional network You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons: Performance You can send traffic on two different planes to manage how much traffic is along each plane. Security You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers. All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an eth0 interface that is attached to the cluster-wide pod network. You can view the interfaces for a pod by using the oc exec -it <pod_name> -- ip a command. If you add additional network interfaces that use Multus CNI, they are named net1 , net2 , ... , netN . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A CNI configuration inside each of these CRs defines how that interface is created. 17.1.2. Additional networks in OpenShift Container Platform OpenShift Container Platform provides the following CNI plugins for creating additional networks in your cluster: bridge : Configure a bridge-based additional network to allow pods on the same host to communicate with each other and the host. host-device : Configure a host-device additional network to allow pods access to a physical Ethernet network device on the host system. ipvlan : Configure an ipvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts, similar to a macvlan-based additional network. Unlike a macvlan-based additional network, each pod shares the same MAC address as the parent physical network interface. vlan : Configure a vlan-based additional network to allow VLAN-based network isolation and connectivity for pods. macvlan : Configure a macvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. tap : Configure a tap-based additional network to create a tap device inside the container namespace. A tap device enables user space programs to send and receive network packets. SR-IOV : Configure an SR-IOV based additional network to allow pods to attach to a virtual function (VF) interface on SR-IOV capable hardware on the host system. 17.2. 
Configuring an additional network As a cluster administrator, you can configure an additional network for your cluster. The following network types are supported: Bridge Host device VLAN IPVLAN MACVLAN TAP OVN-Kubernetes 17.2.1. Approaches to managing an additional network You can manage the lifecycle of an additional network in OpenShift Container Platform by using one of two approaches: modifying the Cluster Network Operator (CNO) configuration or applying a YAML manifest. Each approach is mutually exclusive and you can only use one approach for managing an additional network at a time. For either approach, the additional network is managed by a Container Network Interface (CNI) plugin that you configure. The two different approaches are summarized here: Modifying the Cluster Network Operator (CNO) configuration: Configuring additional networks through CNO is only possible for cluster administrators. The CNO automatically creates and manages the NetworkAttachmentDefinition object. By using this approach, you can define NetworkAttachmentDefinition objects at install time through configuration of the install-config . Applying a YAML manifest: You can manage the additional network directly by creating an NetworkAttachmentDefinition object. Compared to modifying the CNO configuration, this approach gives you more granular control and flexibility when it comes to configuration. Note When deploying OpenShift Container Platform nodes with multiple network interfaces on Red Hat OpenStack Platform (RHOSP) with OVN Kubernetes, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface: USD openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id> 17.2.2. IP address assignment for additional networks For additional networks, IP addresses can be assigned using an IP Address Management (IPAM) CNI plugin, which supports various assignment methods, including Dynamic Host Configuration Protocol (DHCP) and static assignment. The DHCP IPAM CNI plugin responsible for dynamic assignment of IP addresses operates with two distinct components: CNI Plugin : Responsible for integrating with the Kubernetes networking stack to request and release IP addresses. DHCP IPAM CNI Daemon : A listener for DHCP events that coordinates with existing DHCP servers in the environment to handle IP address assignment requests. This daemon is not a DHCP server itself. For networks requiring type: dhcp in their IPAM configuration, ensure the following: A DHCP server is available and running in the environment. The DHCP server is external to the cluster and is expected to be part of the customer's existing network infrastructure. The DHCP server is appropriately configured to serve IP addresses to the nodes. In cases where a DHCP server is unavailable in the environment, it is recommended to use the Whereabouts IPAM CNI plugin instead. The Whereabouts CNI provides similar IP address management capabilities without the need for an external DHCP server. Note Use the Whereabouts CNI plugin when there is no external DHCP server or where static IP address management is preferred. The Whereabouts plugin includes a reconciler daemon to manage stale IP address allocations. A DHCP lease must be periodically renewed throughout the container's lifetime, so a separate daemon, the DHCP IPAM CNI Daemon, is required. 
To deploy the DHCP IPAM CNI daemon, modify the Cluster Network Operator (CNO) configuration to trigger the deployment of this daemon as part of the additional network setup. Additional resources Dynamic IP address (DHCP) assignment configuration Dynamic IP address assignment configuration with Whereabouts 17.2.3. Configuration for an additional network attachment An additional network is configured by using the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group. Important Do not store any sensitive information or a secret in the NetworkAttachmentDefinition CRD because this information is accessible by the project administration user. The configuration for the API is described in the following table: Table 17.1. NetworkAttachmentDefinition API fields Field Type Description metadata.name string The name for the additional network. metadata.namespace string The namespace that the object is associated with. spec.config string The CNI plugin configuration in JSON format. 17.2.3.1. Configuration of an additional network through the Cluster Network Operator The configuration for an additional network attachment is specified as part of the Cluster Network Operator (CNO) configuration. The following YAML describes the configuration parameters for managing an additional network with the CNO: Cluster Network Operator configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { ... } type: Raw 1 An array of one or more additional network configurations. 2 The name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 3 The namespace to create the network attachment in. If you do not specify a value then the default namespace is used. Important To prevent namespace issues for the OVN-Kubernetes network plugin, do not name your additional network attachment default , because this namespace is reserved for the default additional network attachment. 4 A CNI plugin configuration in JSON format. 17.2.3.2. Configuration of an additional network from a YAML manifest The configuration for an additional network is specified from a YAML configuration file, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { ... } 1 The name for the additional network attachment that you are creating. 2 A CNI plugin configuration in JSON format. 17.2.4. Configurations for additional network types The specific configuration fields for additional networks is described in the following sections. 17.2.4.1. Configuration for a bridge additional network The following object describes the configuration parameters for the Bridge CNI plugin: Table 17.2. Bridge CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: bridge . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. bridge string Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0 . 
ipMasq boolean Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false . isGateway boolean Optional: Set to true to assign an IP address to the bridge. The default value is false . isDefaultGateway boolean Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false . If isDefaultGateway is set to true , then isGateway is also set to true automatically. forceAddress boolean Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false , if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is false . hairpinMode boolean Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay . The default value is false . promiscMode boolean Optional: Set to true to enable promiscuous mode on the bridge. The default value is false . vlan string Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. preserveDefaultVlan string Optional: Indicates whether the default vlan must be preserved on the veth end connected to the bridge. Defaults to true. vlanTrunk list Optional: Assign a VLAN trunk tag. The default value is none . mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. enabledad boolean Optional: Enables duplicate address detection for the container side veth . The default value is false . macspoofchk boolean Optional: Enables mac spoof check, limiting the traffic originating from the container to the mac address of the interface. The default value is false . Note The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface. Note To configure an uplink for an L2 network, you must allow the VLAN on the uplink interface by using the following command: USD bridge vlan add vid VLAN_ID dev DEV 17.2.4.1.1. Bridge CNI plugin configuration example The following example configures an additional network named bridge-net : { "cniVersion": "0.3.1", "name": "bridge-net", "type": "bridge", "isGateway": true, "vlan": 2, "ipam": { "type": "dhcp" } } 17.2.4.2. Configuration for a host device additional network Note Specify your network device by setting only one of the following parameters: device , hwaddr , kernelpath , or pciBusID . The following object describes the configuration parameters for the host-device CNI plugin: Table 17.3. Host device CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: host-device . device string Optional: The name of the device, such as eth0 . hwaddr string Optional: The device hardware MAC address. kernelpath string Optional: The Linux kernel device path, such as /sys/devices/pci0000:00/0000:00:1f.6 . pciBusID string Optional: The PCI address of the network device, such as 0000:00:1f.6 . 17.2.4.2.1. 
host-device configuration example The following example configures an additional network named hostdev-net : { "cniVersion": "0.3.1", "name": "hostdev-net", "type": "host-device", "device": "eth1" } 17.2.4.3. Configuration for a VLAN additional network The following object describes the configuration parameters for the VLAN, vlan , CNI plugin: Table 17.4. VLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: vlan . master string The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. vlanId integer Set the ID of the vlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. dns integer Optional: DNS information to return. For example, a priority-ordered list of DNS nameservers. linkInContainer boolean Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. Important A NetworkAttachmentDefinition custom resource definition (CRD) with a vlan configuration can be used only on a single pod in a node because the CNI plugin cannot create multiple vlan subinterfaces with the same vlanId on the same master interface. 17.2.4.3.1. VLAN configuration example The following example demonstrates a vlan configuration with an additional network that is named vlan-net : { "name": "vlan-net", "cniVersion": "0.3.1", "type": "vlan", "master": "eth0", "mtu": 1500, "vlanId": 5, "linkInContainer": false, "ipam": { "type": "host-local", "subnet": "10.1.1.0/24" }, "dns": { "nameservers": [ "10.1.1.1", "8.8.8.8" ] } } 17.2.4.4. Configuration for an IPVLAN additional network The following object describes the configuration parameters for the IPVLAN, ipvlan , CNI plugin: Table 17.5. IPVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: ipvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained. mode string Optional: The operating mode for the virtual network. The value must be l2 , l3 , or l3s . The default value is l2 . master string Optional: The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. linkInContainer boolean Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. Important The ipvlan object does not allow virtual interfaces to communicate with the master interface. 
Therefore the container is not able to reach the host by using the ipvlan interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol ( PTP ). A single master interface cannot simultaneously be configured to use both macvlan and ipvlan . For IP allocation schemes that cannot be interface agnostic, the ipvlan plugin can be chained with an earlier plugin that handles this logic. If the master is omitted, then the result must contain a single interface name for the ipvlan plugin to enslave. If ipam is omitted, then the result is used to configure the ipvlan interface. 17.2.4.4.1. IPVLAN CNI plugin configuration example The following example configures an additional network named ipvlan-net : { "cniVersion": "0.3.1", "name": "ipvlan-net", "type": "ipvlan", "master": "eth1", "linkInContainer": false, "mode": "l3", "ipam": { "type": "static", "addresses": [ { "address": "192.168.10.10/24" } ] } } 17.2.4.5. Configuration for a MACVLAN additional network The following object describes the configuration parameters for the MAC Virtual LAN (MACVLAN) Container Network Interface (CNI) plugin: Table 17.6. MACVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: macvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. mode string Optional: Configures traffic visibility on the virtual network. Must be either bridge , passthru , private , or vepa . If a value is not provided, the default value is bridge . master string Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used. mtu integer Optional: The maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. linkInContainer boolean Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. Note If you specify the master key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts. 17.2.4.5.1. MACVLAN CNI plugin configuration example The following example configures an additional network named macvlan-net : { "cniVersion": "0.3.1", "name": "macvlan-net", "type": "macvlan", "master": "eth1", "linkInContainer": false, "mode": "bridge", "ipam": { "type": "dhcp" } } 17.2.4.6. Configuration for a TAP additional network The following object describes the configuration parameters for the TAP CNI plugin: Table 17.7. TAP CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: tap . mac string Optional: Request the specified MAC address for the interface. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. 
selinuxcontext string Optional: The SELinux context to associate with the tap device. Note The value system_u:system_r:container_t:s0 is required for OpenShift Container Platform. multiQueue boolean Optional: Set to true to enable multi-queue. owner integer Optional: The user owning the tap device. group integer Optional: The group owning the tap device. bridge string Optional: Set the tap device as a port of an already existing bridge. 17.2.4.6.1. Tap configuration example The following example configures an additional network named mynet : { "name": "mynet", "cniVersion": "0.3.1", "type": "tap", "mac": "00:11:22:33:44:55", "mtu": 1500, "selinuxcontext": "system_u:system_r:container_t:s0", "multiQueue": true, "owner": 0, "group": 0, "bridge": "br1" } 17.2.4.6.2. Setting SELinux boolean for the TAP CNI plugin To create the tap device with the container_t SELinux context, enable the container_use_devices boolean on the host by using the Machine Config Operator (MCO). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Create a new YAML file, such as setsebool-container-use-devices.yaml , with the following details: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: setsebool.service contents: | [Unit] Description=Set SELinux boolean for the TAP CNI plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/usr/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target Create the new MachineConfig object by running the following command: USD oc apply -f setsebool-container-use-devices.yaml Note Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. This update can take some time to be applied. Verify that the change is applied by running the following command: USD oc get machineconfigpools Expected output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-e5e0c8e8be9194e7c5a882e047379cfa True False False 3 3 3 0 7d2h worker rendered-worker-d6c9ca107fba6cd76cdcbfcedcafa0f2 True False False 3 3 3 0 7d Note All nodes should be in the updated and ready state. Additional resources For more information about enabling an SELinux boolean on a node, see Setting SELinux booleans 17.2.4.7. Configuration for an OVN-Kubernetes additional network The Red Hat OpenShift Networking OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. To configure secondary network interfaces, you must define the configurations in the NetworkAttachmentDefinition custom resource definition (CRD). Note Pod and multi-network policy creation might remain in a pending state until the OVN-Kubernetes control plane agent in the nodes processes the associated network-attachment-definition CRD. You can configure an OVN-Kubernetes additional network in either layer 2 or localnet topologies. A layer 2 topology supports east-west cluster traffic, but does not allow access to the underlying physical network. A localnet topology allows connections to the physical network, but requires additional configuration of the underlying Open vSwitch (OVS) bridge on cluster nodes.
The following sections provide example configurations for each of the topologies that OVN-Kubernetes currently allows for secondary networks. Note Networks names must be unique. For example, creating multiple NetworkAttachmentDefinition CRDs with different configurations that reference the same network is unsupported. 17.2.4.7.1. Supported platforms for OVN-Kubernetes additional network You can use an OVN-Kubernetes additional network with the following supported platforms: Bare metal IBM Power(R) IBM Z(R) IBM(R) LinuxONE VMware vSphere Red Hat OpenStack Platform (RHOSP) 17.2.4.7.2. OVN-Kubernetes network plugin JSON configuration table The following table describes the configuration parameters for the OVN-Kubernetes CNI network plugin: Table 17.8. OVN-Kubernetes network plugin JSON configuration table Field Type Description cniVersion string The CNI specification version. The required value is 0.3.1 . name string The name of the network. These networks are not namespaced. For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition CRDs that exist on two different namespaces. This ensures that pods making use of the NetworkAttachmentDefinition CRD on their own different namespaces can communicate over the same secondary network. However, those two different NetworkAttachmentDefinition CRDs must also share the same network specific parameters such as topology , subnets , mtu , and excludeSubnets . type string The name of the CNI plugin to configure. This value must be set to ovn-k8s-cni-overlay . topology string The topological configuration for the network. Must be one of layer2 or localnet . subnets string The subnet to use for the network across the cluster. For "topology":"layer2" deployments, IPv6 ( 2001:DBB::/64 ) and dual-stack ( 192.168.100.0/24,2001:DBB::/64 ) subnets are supported. When omitted, the logical switch implementing the network only provides layer 2 communication, and users must configure IP addresses for the pods. Port security only prevents MAC spoofing. mtu string The maximum transmission unit (MTU). The default value, 1300 , is automatically set by the kernel. netAttachDefName string The metadata namespace and name of the network attachment definition CRD where this configuration is included. For example, if this configuration is defined in a NetworkAttachmentDefinition CRD in namespace ns1 named l2-network , this should be set to ns1/l2-network . excludeSubnets string A comma-separated list of CIDRs and IP addresses. IP addresses are removed from the assignable IP address pool and are never passed to the pods. vlanID integer If topology is set to localnet , the specified VLAN tag is assigned to traffic from this additional network. The default is to not assign a VLAN tag. 17.2.4.7.3. Compatibility with multi-network policy The multi-network policy API, which is provided by the MultiNetworkPolicy custom resource definition (CRD) in the k8s.cni.cncf.io API group, is compatible with an OVN-Kubernetes secondary network. When defining a network policy, the network policy rules that can be used depend on whether the OVN-Kubernetes secondary network defines the subnets field. Refer to the following table for details: Table 17.9. 
Supported multi-network policy selectors based on subnets CNI configuration subnets field specified Allowed multi-network policy selectors Yes podSelector and namespaceSelector ipBlock No ipBlock For example, the following multi-network policy is valid only if the subnets field is defined in the additional network CNI configuration for the additional network named blue2 : Example multi-network policy that uses a pod selector apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: blue2 spec: podSelector: ingress: - from: - podSelector: {} The following example uses the ipBlock network policy selector, which is always valid for an OVN-Kubernetes additional network: Example multi-network policy that uses an IP block selector apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: ingress-ipblock annotations: k8s.v1.cni.cncf.io/policy-for: default/flatl2net spec: podSelector: matchLabels: name: access-control policyTypes: - Ingress ingress: - from: - ipBlock: cidr: 10.200.0.0/30 17.2.4.7.4. Configuration for a layer 2 switched topology The switched (layer 2) topology networks interconnect the workloads through a cluster-wide logical switch. This configuration can be used for IPv6 and dual-stack deployments. Note Layer 2 switched topology networks only allow for the transfer of data packets between pods within a cluster. The following JSON example configures a switched secondary network: { "cniVersion": "0.3.1", "name": "l2-network", "type": "ovn-k8s-cni-overlay", "topology":"layer2", "subnets": "10.100.200.0/24", "mtu": 1300, "netAttachDefName": "ns1/l2-network", "excludeSubnets": "10.100.200.0/29" } 17.2.4.7.5. Configuration for a localnet topology The switched localnet topology interconnects the workloads created as Network Attachment Definitions (NADs) through a cluster-wide logical switch to a physical network. 17.2.4.7.5.1. Prerequisites for configuring OVN-Kubernetes additional network The NMState Operator is installed. For more information, see Kubernetes NMState Operator . 17.2.4.7.5.2. Configuration for an OVN-Kubernetes additional network mapping You must map an additional network to the OVN bridge to use it as an OVN-Kubernetes additional network. Bridge mappings allow network traffic to reach the physical network. A bridge mapping associates a physical network name, also known as an interface label, to a bridge created with Open vSwitch (OVS). You can create an NodeNetworkConfigurationPolicy object, part of the nmstate.io/v1 API group, to declaratively create the mapping. This API is provided by the NMState Operator. By using this API you can apply the bridge mapping to nodes that match your specified nodeSelector expression, such as node-role.kubernetes.io/worker: '' . When attaching an additional network, you can either use the existing br-ex bridge or create a new bridge. Which approach to use depends on your specific network infrastructure. If your nodes include only a single network interface, you must use the existing bridge. This network interface is owned and managed by OVN-Kubernetes and you must not remove it from the br-ex bridge or alter the interface configuration. If you remove or alter the network interface, your cluster network will stop working correctly. If your nodes include several network interfaces, you can attach a different network interface to a new bridge, and use that for your additional network. 
This approach provides for traffic isolation from your primary cluster network. The localnet1 network is mapped to the br-ex bridge in the following example: Example mapping for sharing a bridge apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet1 3 bridge: br-ex 4 state: present 5 1 The name for the configuration object. 2 A node selector that specifies the nodes to apply the node network configuration policy to. 3 The name for the additional network from which traffic is forwarded to the OVS bridge. This additional network must match the name of the spec.config.name field of the NetworkAttachmentDefinition CRD that defines the OVN-Kubernetes additional network. 4 The name of the OVS bridge on the node. This value is required only if you specify state: present . 5 The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present . In the following example, the localnet2 network interface is attached to the ovs-br1 bridge. Through this attachment, the network interface is available to the OVN-Kubernetes network plugin as an additional network. Example mapping for nodes with multiple interfaces apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ovs-br1-multiple-networks 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: interfaces: - name: ovs-br1 3 description: |- A dedicated OVS bridge with eth1 as a port allowing all VLANs and untagged traffic type: ovs-bridge state: up bridge: allow-extra-patch-ports: true options: stp: false port: - name: eth1 4 ovn: bridge-mappings: - localnet: localnet2 5 bridge: ovs-br1 6 state: present 7 1 The name for the configuration object. 2 A node selector that specifies the nodes to apply the node network configuration policy to. 3 A new OVS bridge, separate from the default bridge used by OVN-Kubernetes for all cluster traffic. 4 A network device on the host system to associate with this new OVS bridge. 5 The name for the additional network from which traffic is forwarded to the OVS bridge. This additional network must match the name of the spec.config.name field of the NetworkAttachmentDefinition CRD that defines the OVN-Kubernetes additional network. 6 The name of the OVS bridge on the node. This value is required only if you specify state: present . 7 The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present . This declarative approach is recommended because the NMState Operator applies additional network configuration to all nodes specified by the node selector automatically and transparently. The following JSON example configures a localnet secondary network: { "cniVersion": "0.3.1", "name": "ns1-localnet-network", "type": "ovn-k8s-cni-overlay", "topology":"localnet", "subnets": "202.10.130.112/28", "vlanID": 33, "mtu": 1500, "netAttachDefName": "ns1/localnet-network", "excludeSubnets": "10.100.200.0/29" } 17.2.4.7.6. Configuring pods for additional networks You must specify the secondary network attachments through the k8s.v1.cni.cncf.io/networks annotation. The following example provisions a pod with two secondary attachments, one for each of the attachment configurations presented in this guide.
apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: l2-network name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container 17.2.4.7.7. Configuring pods with a static IP address The following example provisions a pod with a static IP address. Note You can only specify the IP address for a pod's secondary network attachment for layer 2 attachments. Specifying a static IP address for the pod is only possible when the attachment configuration does not feature subnets. apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "l2-network", 1 "mac": "02:03:04:05:06:07", 2 "interface": "myiface1", 3 "ips": [ "192.0.2.20/24" ] 4 } ]' name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container 1 The name of the network. This value must be unique across all NetworkAttachmentDefinition CRDs. 2 The MAC address to be assigned for the interface. 3 The name of the network interface to be created for the pod. 4 The IP addresses to be assigned to the network interface. 17.2.5. Configuration of IP address assignment for a network attachment The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 17.2.5.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 17.10. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 17.11. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 17.12. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 17.13. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } }
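For reference, a more complete static assignment that also sets a gateway, routes, and DNS options, matching the fields in the preceding tables, might look like the following sketch; all addresses and domain names shown are placeholder values, not values required by the plugin: { "ipam": { "type": "static", "addresses": [ { "address": "192.0.2.10/24", "gateway": "192.0.2.1" } ], "routes": [ { "dst": "0.0.0.0/0", "gw": "192.0.2.1" } ], "dns": { "nameservers": [ "192.0.2.53" ], "domain": "example.com", "search": [ "example.com" ] } } }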
17.2.5.2. Dynamic IP address (DHCP) assignment configuration A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. Important For an Ethernet network attachment, the SR-IOV Network Operator does not create a DHCP server deployment; the Cluster Network Operator is responsible for creating the minimal DHCP server deployment. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... The following table describes the configuration parameters for dynamic IP address assignment with DHCP. Table 17.14. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. The following JSON example describes the configuration for dynamic IP address assignment with DHCP. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 17.2.5.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The Whereabouts CNI plugin also supports overlapping IP address ranges and configuration of the same CIDR range multiple times within separate NetworkAttachmentDefinition CRDs. This provides greater flexibility and management capabilities in multi-tenant environments. 17.2.5.3.1. Dynamic IP address configuration objects The following table describes the configuration objects for dynamic IP address assignment with Whereabouts: Table 17.15. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. network_name string Optional: Helps ensure that each group or domain of pods gets its own set of IP addresses, even if they share the same range of IP addresses. Setting this field is important for keeping networks separate and organized, notably in multi-tenant environments. 17.2.5.3.2. Dynamic IP address assignment configuration that uses Whereabouts The following example shows a dynamic address assignment configuration that uses Whereabouts: Whereabouts dynamic IP address assignment { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 17.2.5.3.3. Dynamic IP address assignment that uses Whereabouts with overlapping IP address ranges The following example shows a dynamic IP address assignment that uses overlapping IP address ranges for multi-tenant networks. NetworkAttachmentDefinition 1 { "ipam": { "type": "whereabouts", "range": "192.0.2.192/29", "network_name": "example_net_common", 1 } } 1 Optional. If set, must match the network_name of NetworkAttachmentDefinition 2 .
NetworkAttachmentDefinition 2 { "ipam": { "type": "whereabouts", "range": "192.0.2.192/24", "network_name": "example_net_common", 1 } } 1 Optional. If set, must match the network_name of NetworkAttachmentDefinition 1 . 17.2.5.4. Creating a whereabouts-reconciler daemon set The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster by using the Whereabouts IP Address Management (IPAM) solution. It ensures that each pod gets a unique IP address from the specified IP address range. It also handles IP address releases when pods are deleted or scaled down. Note You can also use a NetworkAttachmentDefinition custom resource definition (CRD) for dynamic IP address assignment. The whereabouts-reconciler daemon set is automatically created when you configure an additional network through the Cluster Network Operator. It is not automatically created when you configure an additional network from a YAML manifest. To trigger the deployment of the whereabouts-reconciler daemon set, you must manually create a whereabouts-shim network attachment by editing the Cluster Network Operator custom resource (CR) file. Use the following procedure to deploy the whereabouts-reconciler daemon set. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Include the additionalNetworks section shown in this example YAML extract within the spec definition of the custom resource (CR): apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster # ... spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { "name": "whereabouts-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts" } } type: Raw # ... Save the file and exit the text editor. Verify that the whereabouts-reconciler daemon set deployed successfully by running the following command: USD oc get all -n openshift-multus | grep whereabouts-reconciler Example output pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s 17.2.5.5. Configuring the Whereabouts IP reconciler schedule The Whereabouts IPAM CNI plugin runs the IP reconciler daily. This process cleans up any stranded IP allocations that might result in exhausting IPs and therefore prevent new pods from getting an IP allocated to them. Use this procedure to change the frequency at which the IP reconciler runs. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have deployed the whereabouts-reconciler daemon set, and the whereabouts-reconciler pods are up and running. Procedure Run the following command to create a ConfigMap object named whereabouts-config in the openshift-multus namespace with a specific cron expression for the IP reconciler: USD oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression="*/15 * * * *" This cron expression indicates the IP reconciler runs every 15 minutes. Adjust the expression based on your specific requirements. 
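If you need to review or update the schedule later, you can read back and replace the same ConfigMap object. The following commands are a sketch that reuses the whereabouts-config name and openshift-multus namespace from the step above; the 30 4 * * * expression, which runs the reconciler daily at 04:30, is only an illustration: USD oc get configmap whereabouts-config -n openshift-multus -o jsonpath='{.data.reconciler_cron_expression}' USD oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression="30 4 * * *" --dry-run=client -o yaml | oc replace -f -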
Note The whereabouts-reconciler daemon set can only consume a cron expression pattern that includes five asterisks. The sixth, which is used to denote seconds, is currently not supported. Retrieve information about resources related to the whereabouts-reconciler daemon set and pods within the openshift-multus namespace by running the following command: USD oc get all -n openshift-multus | grep whereabouts-reconciler Example output pod/whereabouts-reconciler-2p7hw 1/1 Running 0 4m14s pod/whereabouts-reconciler-76jk7 1/1 Running 0 4m14s pod/whereabouts-reconciler-94zw6 1/1 Running 0 4m14s pod/whereabouts-reconciler-mfh68 1/1 Running 0 4m14s pod/whereabouts-reconciler-pgshz 1/1 Running 0 4m14s pod/whereabouts-reconciler-xn5xz 1/1 Running 0 4m14s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 4m16s Run the following command to verify that the whereabouts-reconciler pod runs the IP reconciler with the configured interval: USD oc -n openshift-multus logs whereabouts-reconciler-2p7hw Example output 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CHMOD 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..data_tmp": RENAME 2024-02-02T16:33:54Z [verbose] using expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] configuration updated to file "/cron-schedule/..data". New cron expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id "00c2d1c9-631d-403f-bb86-73ad104a6817" - new cron expression: */15 * * * * 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/config": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_26_17.3874177937": REMOVE 2024-02-02T16:45:00Z [verbose] starting reconciler run 2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data 2024-02-02T16:45:00Z [debug] listing IP pools 2024-02-02T16:45:00Z [debug] no IP addresses to cleanup 2024-02-02T16:45:00Z [verbose] reconciler success 17.2.5.6. Creating a configuration for assignment of dual-stack IP addresses dynamically Dual-stack IP address assignment can be configured with the ipRanges parameter for: IPv4 addresses IPv6 addresses multiple IP address assignment Procedure Set type to whereabouts . Use ipRanges to allocate IP addresses as shown in the following example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { "name": "whereabouts-dual-stack", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts", "ipRanges": [ {"range": "192.168.10.0/24"}, {"range": "2001:db8::/64"} ] } } Attach the network to a pod. For more information, see "Adding a pod to an additional network". Verify that all IP addresses are assigned. Run the following command to ensure the IP addresses are assigned as metadata. USD oc exec -it mypod -- ip a Additional resources Attaching a pod to an additional network 17.2.6. Creating an additional network attachment with the Cluster Network Operator The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition CRD automatically. Important Do not edit the NetworkAttachmentDefinition CRDs that the Cluster Network Operator manages.
Doing so might disrupt network traffic on your additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Optional: Create the namespace for the additional networks: USD oc create namespace <namespace_name> To edit the CNO configuration, enter the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR that you are creating by adding the configuration for the additional network that you are creating, as in the following example CR. apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { "cniVersion": "0.3.1", "name": "tertiary-net", "type": "ipvlan", "master": "eth1", "mode": "l2", "ipam": { "type": "static", "addresses": [ { "address": "192.168.1.23/24" } ] } } Save your changes and quit the text editor to commit your changes. Verification Confirm that the CNO created the NetworkAttachmentDefinition CRD by running the following command. There might be a delay before the CNO creates the CRD. USD oc get network-attachment-definitions -n <namespace> where: <namespace> Specifies the namespace for the network attachment that you added to the CNO configuration. Example output NAME AGE test-network-1 14m 17.2.7. Creating an additional network attachment by applying a YAML manifest Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a YAML file with your additional network configuration, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: -net spec: config: |- { "cniVersion": "0.3.1", "name": "work-network", "type": "host-device", "device": "eth1", "ipam": { "type": "dhcp" } } To create the additional network, enter the following command: USD oc apply -f <file>.yaml where: <file> Specifies the name of the file contained the YAML manifest. 17.2.8. About configuring the master interface in the container network namespace You can create a MAC-VLAN, an IP-VLAN, or a VLAN subinterface that is based on a master interface that exists in a container namespace. You can also create a master interface as part of the pod network configuration in a separate network attachment definition CRD. To use a container namespace master interface, you must specify true for the linkInContainer parameter that exists in the subinterface configuration of the NetworkAttachmentDefinition CRD. 17.2.8.1. Creating multiple VLANs on SR-IOV VFs An example use case for utilizing this feature is to create multiple VLANs based on SR-IOV VFs. To do so, begin by creating an SR-IOV network and then define the network attachments for the VLAN interfaces. The following example shows how to configure the setup illustrated in this diagram. Figure 17.1. Creating VLANs Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. 
Procedure Create a dedicated container namespace where you want to deploy your pod by using the following command: USD oc new-project test-namespace Create an SR-IOV node policy: Create an SriovNetworkNodePolicy object, and then save the YAML in the sriov-node-network-policy.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: false needVhostNet: true nicSelector: vendor: "15b3" 1 deviceID: "101b" 2 rootDevices: ["00:05.0"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" Note The SR-IOV network node policy configuration example, with the setting deviceType: netdevice , is tailored specifically for Mellanox Network Interface Cards (NICs). 1 The vendor hexadecimal code of the SR-IOV network device. The value 15b3 is associated with a Mellanox NIC. 2 The device hexadecimal code of the SR-IOV network device. Apply the YAML by running the following command: USD oc apply -f sriov-node-network-policy.yaml Note Applying this might take some time due to the node requiring a reboot. Create an SR-IOV network: Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: "off" trust: "on" Apply the YAML by running the following command: USD oc apply -f sriov-network-attachment.yaml Create the VLAN additional network: Using the following YAML example, create a file named vlan100-additional-network-configuration.yaml : apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: vlan-100 namespace: test-namespace spec: config: | { "cniVersion": "0.4.0", "name": "vlan-100", "plugins": [ { "type": "vlan", "master": "ext0", 1 "mtu": 1500, "vlanId": 100, "linkInContainer": true, 2 "ipam": {"type": "whereabouts", "ipRanges": [{"range": "1.1.1.0/24"}]} } ] } 1 The VLAN configuration needs to specify the master name. This can be configured in the pod networks annotation. 2 The linkInContainer parameter must be specified. 
Apply the YAML file by running the following command: USD oc apply -f vlan100-additional-network-configuration.yaml Create a pod definition by using the earlier specified networks: Using the following YAML example, create a file named pod-a.yaml file: Note The manifest below includes 2 resources: Namespace with security labels Pod definition with appropriate network annotation apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" --- apiVersion: v1 kind: Pod metadata: name: nginx-pod namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "sriov-network", "namespace": "test-namespace", "interface": "ext0" 1 }, { "name": "vlan-100", "namespace": "test-namespace", "interface": "ext0.100" } ]' spec: securityContext: runAsNonRoot: true containers: - name: nginx-container image: nginxinc/nginx-unprivileged:latest securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] ports: - containerPort: 80 seccompProfile: type: "RuntimeDefault" 1 The name to be used as the master for the VLAN interface. Apply the YAML file by running the following command: USD oc apply -f pod-a.yaml Get detailed information about the nginx-pod within the test-namespace by running the following command: USD oc describe pods nginx-pod -n test-namespace Example output Name: nginx-pod Namespace: test-namespace Priority: 0 Node: worker-1/10.46.186.105 Start Time: Mon, 14 Aug 2023 16:23:13 -0400 Labels: <none> Annotations: k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.131.0.26/23"],"mac_address":"0a:58:0a:83:00:1a","gateway_ips":["10.131.0.1"],"routes":[{"dest":"10.128.0.0... k8s.v1.cni.cncf.io/network-status: [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.131.0.26" ], "mac": "0a:58:0a:83:00:1a", "default": true, "dns": {} },{ "name": "test-namespace/sriov-network", "interface": "ext0", "mac": "6e:a7:5e:3f:49:1b", "dns": {}, "device-info": { "type": "pci", "version": "1.0.0", "pci": { "pci-address": "0000:d8:00.2" } } },{ "name": "test-namespace/vlan-100", "interface": "ext0.100", "ips": [ "1.1.1.1" ], "mac": "6e:a7:5e:3f:49:1b", "dns": {} }] k8s.v1.cni.cncf.io/networks: [ { "name": "sriov-network", "namespace": "test-namespace", "interface": "ext0" }, { "name": "vlan-100", "namespace": "test-namespace", "i... openshift.io/scc: privileged Status: Running IP: 10.131.0.26 IPs: IP: 10.131.0.26 17.2.8.2. Creating a subinterface based on a bridge master interface in a container namespace You can create a subinterface based on a bridge master interface that exists in a container namespace. Creating a subinterface can be applied to other types of interfaces. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. 
Procedure Create a dedicated container namespace where you want to deploy your pod by entering the following command: USD oc new-project test-namespace Using the following YAML example, create a bridge NetworkAttachmentDefinition custom resource definition (CRD) file named bridge-nad.yaml : apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bridge-network spec: config: '{ "cniVersion": "0.4.0", "name": "bridge-network", "type": "bridge", "bridge": "br-001", "isGateway": true, "ipMasq": true, "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.0.0.0/24", "routes": [{"dst": "0.0.0.0/0"}] } }' Run the following command to apply the NetworkAttachmentDefinition CRD to your OpenShift Container Platform cluster: USD oc apply -f bridge-nad.yaml Verify that you successfully created a NetworkAttachmentDefinition CRD by entering the following command: USD oc get network-attachment-definitions Example output NAME AGE bridge-network 15s Using the following YAML example, create a file named ipvlan-additional-network-configuration.yaml for the IPVLAN additional network configuration: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ipvlan-net namespace: test-namespace spec: config: '{ "cniVersion": "0.3.1", "name": "ipvlan-net", "type": "ipvlan", "master": "ext0", 1 "mode": "l3", "linkInContainer": true, 2 "ipam": {"type": "whereabouts", "ipRanges": [{"range": "10.0.0.0/24"}]} }' 1 Specifies the ethernet interface to associate with the network attachment. This is subsequently configured in the pod networks annotation. 2 Specifies that the master interface is in the container network namespace. Apply the YAML file by running the following command: USD oc apply -f ipvlan-additional-network-configuration.yaml Verify that the NetworkAttachmentDefinition CRD has been created successfully by running the following command: USD oc get network-attachment-definitions Example output NAME AGE bridge-network 87s ipvlan-net 9s Using the following YAML example, create a file named pod-a.yaml for the pod definition: apiVersion: v1 kind: Pod metadata: name: pod-a namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "bridge-network", "interface": "ext0" 1 }, { "name": "ipvlan-net", "interface": "ext1" } ]' spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-pod image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 Specifies the name to be used as the master for the IPVLAN interface. 
Apply the YAML file by running the following command: USD oc apply -f pod-a.yaml Verify that the pod is running by using the following command: USD oc get pod -n test-namespace Example output NAME READY STATUS RESTARTS AGE pod-a 1/1 Running 0 2m36s Show network interface information about the pod-a resource within the test-namespace by running the following command: USD oc exec -n test-namespace pod-a -- ip a Example output 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 3: eth0@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default link/ether 0a:58:0a:d9:00:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.217.0.93/23 brd 10.217.1.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::488b:91ff:fe84:a94b/64 scope link valid_lft forever preferred_lft forever 4: ext0@if107: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.0.0.2/24 brd 10.0.0.255 scope global ext0 valid_lft forever preferred_lft forever inet6 fe80::bcda:bdff:fe7e:f437/64 scope link valid_lft forever preferred_lft forever 5: ext1@ext0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff inet 10.0.0.1/24 brd 10.0.0.255 scope global ext1 valid_lft forever preferred_lft forever inet6 fe80::beda:bd00:17e:f437/64 scope link valid_lft forever preferred_lft forever This output shows that the network interface ext1 is associated with the physical interface ext0 . 17.3. About virtual routing and forwarding 17.3.1. About virtual routing and forwarding Virtual routing and forwarding (VRF) devices combined with IP rules provide the ability to create virtual routing and forwarding domains. VRF reduces the number of permissions needed by CNF, and provides increased visibility of the network topology of secondary networks. VRF is used to provide multi-tenancy functionality, for example, where each tenant has its own unique routing tables and requires different default gateways. Processes can bind a socket to the VRF device. Packets through the binded socket use the routing table associated with the VRF device. An important feature of VRF is that it impacts only OSI model layer 3 traffic and above so L2 tools, such as LLDP, are not affected. This allows higher priority IP rules such as policy based routing to take precedence over the VRF device rules directing specific traffic. 17.3.1.1. Benefits of secondary networks for pods for telecommunications operators In telecommunications use cases, each CNF can potentially be connected to multiple different networks sharing the same address space. These secondary networks can potentially conflict with the cluster's main network CIDR. Using the CNI VRF plugin, network functions can be connected to different customers' infrastructure using the same IP address, keeping different customers isolated. IP addresses are overlapped with OpenShift Container Platform IP space. The CNI VRF plugin also reduces the number of permissions needed by CNF and increases the visibility of network topologies of secondary networks. 17.4. 
Configuring multi-network policy As a cluster administrator, you can configure a multi-network policy for a Single-Root I/O Virtualization (SR-IOV), MAC Virtual Local Area Network (MacVLAN), or OVN-Kubernetes additional networks. MacVLAN additional networks are fully supported. Other types of additional networks, such as IP Virtual Local Area Network (IPVLAN), are not supported. Note Support for configuring multi-network policies for SR-IOV additional networks is only supported with kernel network interface controllers (NICs). SR-IOV is not supported for Data Plane Development Kit (DPDK) applications. 17.4.1. Differences between multi-network policy and network policy Although the MultiNetworkPolicy API implements the NetworkPolicy API, there are several important differences: You must use the MultiNetworkPolicy API: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy You must use the multi-networkpolicy resource name when using the CLI to interact with multi-network policies. For example, you can view a multi-network policy object with the oc get multi-networkpolicy <name> command where <name> is the name of a multi-network policy. You must specify an annotation with the name of the network attachment definition that defines the macvlan or SR-IOV additional network: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> where: <network_name> Specifies the name of a network attachment definition. 17.4.2. Enabling multi-network policy for the cluster As a cluster administrator, you can enable multi-network policy support on your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure Create the multinetwork-enable-patch.yaml file with the following YAML: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true Configure the cluster to enable multi-network policy: USD oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml Example output network.operator.openshift.io/cluster patched 17.4.3. Supporting multi-network policies in IPv6 networks The ICMPv6 Neighbor Discovery Protocol (NDP) is a set of messages and processes that enable devices to discover and maintain information about neighboring nodes. NDP plays a crucial role in IPv6 networks, facilitating the interaction between devices on the same link. The Cluster Network Operator (CNO) deploys the iptables implementation of multi-network policy when the useMultiNetworkPolicy parameter is set to true . To support multi-network policies in IPv6 networks the Cluster Network Operator deploys the following set of rules in every pod affected by a multi-network policy: Multi-network policy custom rules kind: ConfigMap apiVersion: v1 metadata: name: multi-networkpolicy-custom-rules namespace: openshift-multus data: custom-v6-rules.txt: | # accept NDP -p icmpv6 --icmpv6-type neighbor-solicitation -j ACCEPT 1 -p icmpv6 --icmpv6-type neighbor-advertisement -j ACCEPT 2 # accept RA/RS -p icmpv6 --icmpv6-type router-solicitation -j ACCEPT 3 -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT 4 1 This rule allows incoming ICMPv6 neighbor solicitation messages, which are part of the neighbor discovery protocol (NDP). These messages help determine the link-layer addresses of neighboring nodes. 
2 This rule allows incoming ICMPv6 neighbor advertisement messages, which are part of NDP and provide information about the link-layer address of the sender. 3 This rule permits incoming ICMPv6 router solicitation messages. Hosts use these messages to request router configuration information. 4 This rule allows incoming ICMPv6 router advertisement messages, which give configuration information to hosts. Note You cannot edit these predefined rules. These rules collectively enable essential ICMPv6 traffic for correct network functioning, including address resolution and router communication in an IPv6 environment. With these rules in place and a multi-network policy denying traffic, applications are not expected to experience connectivity issues. 17.4.4. Working with multi-network policy As a cluster administrator, you can create, edit, view, and delete multi-network policies. 17.4.4.1. Prerequisites You have enabled multi-network policy support for your cluster. 17.4.4.2. Creating a multi-network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a multi-network policy. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the multi-network policy file name. Define a multi-network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies. apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for:<namespace_name>/<network_name> spec: podSelector: {} policyTypes: - Ingress ingress: [] where: <network_name> Specifies the name of a network attachment definition. Allow ingress from all pods in the same namespace apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {} where: <network_name> Specifies the name of a network attachment definition. Allow ingress traffic to one pod from a particular namespace This policy allows traffic to pods labelled pod-a from pods running in namespace-y . apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-traffic-pod annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y where: <network_name> Specifies the name of a network attachment definition. Restrict traffic to a service This policy when applied ensures every pod with both labels app=bookstore and role=api can only be accessed by pods with label app=bookstore . In this example the application could be a REST API server, marked with labels app=bookstore and role=api . 
This example addresses the following use cases: Restricting the traffic to a service to only the other microservices that need to use it. Restricting the connections to a database to only permit the application using it. apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: api-allow annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - podSelector: matchLabels: app: bookstore where: <network_name> Specifies the name of a network attachment definition. To create the multi-network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the multi-network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 17.4.4.3. Editing a multi-network policy You can edit a multi-network policy in a namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure Optional: To list the multi-network policy objects in a namespace, enter the following command: USD oc get multi-networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the multi-network policy object. If you saved the multi-network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. If you need to update the multi-network policy object directly, enter the following command: USD oc edit multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the multi-network policy object is updated. USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 17.4.4.4. Viewing multi-network policies using the CLI You can examine the multi-network policies in a namespace. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. 
You are working in the namespace where the multi-network policy exists. Procedure List multi-network policies in a namespace: To view multi-network policy objects defined in a namespace, enter the following command: USD oc get multi-networkpolicy Optional: To examine a specific multi-network policy, enter the following command: USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 17.4.4.5. Deleting a multi-network policy using the CLI You can delete a multi-network policy in a namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure To delete a multi-network policy object, enter the following command: USD oc delete multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 17.4.4.6. Creating a default deny all multi-network policy This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default namespace: default 1 annotations: k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name> 2 spec: podSelector: {} 3 policyTypes: 4 - Ingress 5 ingress: [] 6 1 namespace: default deploys this policy to the default namespace. 2 network_name : specifies the name of a network attachment definition. 3 podSelector: is empty, this means it matches all the pods. Therefore, the policy applies to all pods in the default namespace. 
4 policyTypes: a list of rule types that the NetworkPolicy relates to. 5 Specifies as Ingress only policyType . 6 There are no ingress rules specified. This causes incoming traffic to be dropped to all pods. Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created 17.4.4.7. Creating a multi-network policy to allow traffic from external clients With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web . Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows external service from the public Internet directly or by using a Load Balancer to access the pod. Traffic is only allowed to a pod with the label app=web . Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-external namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {} Apply the policy by entering the following command: USD oc apply -f web-allow-external.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created This policy allows traffic from all resources, including external traffic as illustrated in the following diagram: 17.4.4.8. Creating a multi-network policy allowing traffic to an application from all namespaces Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-all-namespaces namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2 1 Applies the policy only to app:web pods in default namespace. 2 Selects all pods in all namespaces. 
Note By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to. Apply the policy by entering the following command: USD oc apply -f web-allow-all-namespaces.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to deploy an alpine image in the secondary namespace and to start a shell: USD oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 17.4.4.9. Creating a multi-network policy allowing traffic to an application from a namespace Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to: Restrict traffic to a production database only to namespaces where production workloads are deployed. Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from all pods in a particular namespaces with a label purpose=production . Save the YAML in the web-allow-prod.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-prod namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2 1 Applies the policy only to app:web pods in the default namespace. 2 Restricts traffic to only pods in namespaces that have the label purpose=production . 
Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to create the dev namespace: USD oc create namespace dev Run the following command to label the dev namespace: USD oc label namespace/dev purpose=testing Run the following command to deploy an alpine image in the dev namespace and to start a shell: USD oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is blocked: # wget -qO- --timeout=2 http://web.default Expected output wget: download timed out Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 17.4.5. Additional resources About network policy Understanding multiple networks Configuring a macvlan network Configuring an SR-IOV network device 17.5. Attaching a pod to an additional network As a cluster user you can attach a pod to an additional network. 17.5.1. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. When a pod is created additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace between the comma. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. 
To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To Confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/network-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 17.5.1.1. Specifying pod-specific addressing and routing options When attaching a pod to an additional network, you may want to specify further properties about that network in a particular pod. This allows you to change some aspects of routing, as well as specify static IP addresses and MAC addresses. To accomplish this, you can use the JSON formatted annotations. Prerequisites The pod must be in the same namespace as the additional network. Install the OpenShift CLI ( oc ). You must log in to the cluster. Procedure To add a pod to an additional network while specifying addressing and/or routing options, complete the following steps: Edit the Pod resource definition. If you are editing an existing Pod resource, run the following command to edit its definition in the default editor. Replace <name> with the name of the Pod resource to edit. USD oc edit pod <name> In the Pod resource definition, add the k8s.v1.cni.cncf.io/networks parameter to the pod metadata mapping. The k8s.v1.cni.cncf.io/networks accepts a JSON string of a list of objects that reference the name of NetworkAttachmentDefinition custom resource (CR) names in addition to specifying additional properties. metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1 1 Replace <network> with a JSON object as shown in the following examples. The single quotes are required. In the following example the annotation specifies which network attachment will have the default route, using the default-route parameter. apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "net1" }, { "name": "net2", 1 "default-route": ["192.0.2.1"] 2 }]' spec: containers: - name: example-pod command: ["/bin/bash", "-c", "sleep 2000000000000"] image: centos/tools 1 The name key is the name of the additional network to associate with the pod. 
2 The default-route key specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one default-route key is specified, this will cause the pod to fail to become active. The default route will cause any traffic that is not specified in other routes to be routed to the gateway. Important Setting the default route to an interface other than the default network interface for OpenShift Container Platform may cause traffic that is anticipated for pod-to-pod traffic to be routed over another interface. To verify the routing properties of a pod, the oc command may be used to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip route Note You may also reference the pod's k8s.v1.cni.cncf.io/network-status to see which additional network has been assigned the default route, by the presence of the default-route key in the JSON-formatted list of objects. To set a static IP address or MAC address for a pod you can use the JSON formatted annotations. This requires you create networks that specifically allow for this functionality. This can be specified in a rawCNIConfig for the CNO. Edit the CNO CR by running the following command: USD oc edit networks.operator.openshift.io cluster The following YAML describes the configuration parameters for the CNO: Cluster Network Operator YAML configuration name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 ... }' type: Raw 1 Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 2 Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used. 3 Specify the CNI plugin configuration in JSON format, which is based on the following template. The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plugin: macvlan CNI plugin JSON configuration object using static IP and MAC address { "cniVersion": "0.3.1", "name": "<name>", 1 "plugins": [{ 2 "type": "macvlan", "capabilities": { "ips": true }, 3 "master": "eth0", 4 "mode": "bridge", "ipam": { "type": "static" } }, { "capabilities": { "mac": true }, 5 "type": "tuning" }] } 1 Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace . 2 Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration. 3 Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities. 4 Specifies the interface that the macvlan plugin uses. 5 Specifies that a request is made to enable the static MAC address functionality of a CNI plugin. The above network attachment can be referenced in a JSON formatted annotation, along with keys to specify which static IP and MAC address will be assigned to a given pod. Edit the pod with: USD oc edit pod <name> macvlan CNI plugin JSON configuration object using static IP and MAC address apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "<name>", 1 "ips": [ "192.0.2.205/24" ], 2 "mac": "CA:FE:C0:FF:EE:00" 3 } ]' 1 Use the <name> as provided when creating the rawCNIConfig above. 2 Provide an IP address including the subnet mask. 3 Provide the MAC address. 
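For context, a complete Pod manifest that uses this annotation might look like the following minimal sketch. The network name macvlan-static, the pod name, and the address values are hypothetical placeholders: the name must match a network attachment created with a rawCNIConfig like the one shown above, and the IP and MAC values must be valid for that network.

apiVersion: v1
kind: Pod
metadata:
  name: example-static-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[ { "name": "macvlan-static", "ips": [ "192.0.2.205/24" ], "mac": "CA:FE:C0:FF:EE:00" } ]'
spec:
  containers:
  - name: example-static-pod
    # Keep the container running so the additional interface can be inspected
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: centos/tools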
Note Static IP addresses and MAC addresses do not have to be used at the same time, you may use them individually, or together. To verify the IP address and MAC properties of a pod with additional networks, use the oc command to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip a 17.6. Removing a pod from an additional network As a cluster user you can remove a pod from an additional network. 17.6.1. Removing a pod from an additional network You can remove a pod from an additional network only by deleting the pod. Prerequisites An additional network is attached to the pod. Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure To delete the pod, enter the following command: USD oc delete pod <name> -n <namespace> <name> is the name of the pod. <namespace> is the namespace that contains the pod. 17.7. Editing an additional network As a cluster administrator you can modify the configuration for an existing additional network. 17.7.1. Modifying an additional network attachment definition As a cluster administrator, you can make changes to an existing additional network. Any existing pods attached to the additional network will not be updated. Prerequisites You have configured an additional network for your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To edit an additional network for your cluster, complete the following steps: Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor: USD oc edit networks.operator.openshift.io cluster In the additionalNetworks collection, update the additional network with your changes. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the CNO updated the NetworkAttachmentDefinition object by running the following command. Replace <network-name> with the name of the additional network to display. There might be a delay before the CNO updates the NetworkAttachmentDefinition object to reflect your changes. USD oc get network-attachment-definitions <network-name> -o yaml For example, the following console output displays a NetworkAttachmentDefinition object that is named net1 : USD oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}' { "cniVersion": "0.3.1", "type": "macvlan", "master": "ens5", "mode": "bridge", "ipam": {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} } 17.8. Removing an additional network As a cluster administrator you can remove an additional network attachment. 17.8.1. Removing an additional network attachment definition As a cluster administrator, you can remove an additional network from your OpenShift Container Platform cluster. The additional network is not removed from any pods it is attached to. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To remove an additional network from your cluster, complete the following steps: Edit the Cluster Network Operator (CNO) in your default text editor by running the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR by removing the configuration from the additionalNetworks collection for the network attachment definition you are removing. 
apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1 1 If you are removing the configuration mapping for the only additional network attachment definition in the additionalNetworks collection, you must specify an empty collection. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the additional network CR was deleted by running the following command: USD oc get network-attachment-definition --all-namespaces 17.9. Assigning a secondary network to a VRF As a cluster administrator, you can configure an additional network for a virtual routing and forwarding (VRF) domain by using the CNI VRF plugin. The virtual network that this plugin creates is associated with the physical interface that you specify. Using a secondary network with a VRF instance has the following advantages: Workload isolation Isolate workload traffic by configuring a VRF instance for the additional network. Improved security Enable improved security through isolated network paths in the VRF domain. Multi-tenancy support Support multi-tenancy through network segmentation with a unique routing table in the VRF domain for each tenant. Note Applications that use VRFs must bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. The SO_BINDTODEVICE option binds the socket to the device that is specified in the passed interface name, for example, eth1 . To use the SO_BINDTODEVICE option, the application must have CAP_NET_RAW capabilities. Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface. Additional resources About virtual routing and forwarding 17.9.1. Creating an additional network attachment with the CNI VRF plugin The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network. To create an additional network attachment with the CNI VRF plugin, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift cluster as a user with cluster-admin privileges. Procedure Create the Network custom resource (CR) for the additional network attachment and insert the rawCNIConfig configuration for the additional network, as in the following example CR. Save the YAML as the file additional-network-attachment.yaml . apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "macvlan-vrf", "plugins": [ 1 { "type": "macvlan", "master": "eth1", "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.23/24" } ] } }, { "type": "vrf", 2 "vrfname": "vrf-1", 3 "table": 1001 4 }] }' 1 plugins must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration. 2 type must be set to vrf . 3 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. 4 Optional. table is the routing table ID. By default, the tableid parameter is used. 
If it is not specified, the CNI assigns a free routing table ID to the VRF. Note VRF functions correctly only when the resource is of type netdevice . Create the Network resource: USD oc create -f additional-network-attachment.yaml Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-network-1 . USD oc get network-attachment-definitions -n <namespace> Example output NAME AGE additional-network-1 14m Note There might be a delay before the CNO creates the CR. Verification Create a pod and assign it to the additional network with the VRF instance: Create a YAML file that defines the Pod resource: Example pod-additional-net.yaml file apiVersion: v1 kind: Pod metadata: name: pod-additional-net annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "test-network-1" 1 } ]' spec: containers: - name: example-pod-1 command: ["/bin/bash", "-c", "sleep 9000000"] image: centos:8 1 Specify the name of the additional network with the VRF instance. Create the Pod resource by running the following command: USD oc create -f pod-additional-net.yaml Example output pod/test-pod created Verify that the pod network attachment is connected to the VRF additional network. Start a remote session with the pod and run the following command: USD ip vrf show Example output Name Table ----------------------- vrf-1 1001 Confirm that the VRF interface is the controller for the additional interface: USD ip link Example output 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode
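Still inside the remote session with the pod, you can also inspect the routes owned by the VRF and the details of the secondary interface. This is a minimal additional check, not part of the original procedure, and it assumes the VRF name vrf-1 and the interface net1 from the example above:

ip route show vrf vrf-1
ip -d link show net1

The detailed link output should show the interface as a member of the VRF.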
[ "openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { } type: Raw", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { }", "bridge vlan add vid VLAN_ID dev DEV", "{ \"cniVersion\": \"0.3.1\", \"name\": \"bridge-net\", \"type\": \"bridge\", \"isGateway\": true, \"vlan\": 2, \"ipam\": { \"type\": \"dhcp\" } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"hostdev-net\", \"type\": \"host-device\", \"device\": \"eth1\" }", "{ \"name\": \"vlan-net\", \"cniVersion\": \"0.3.1\", \"type\": \"vlan\", \"master\": \"eth0\", \"mtu\": 1500, \"vlanId\": 5, \"linkInContainer\": false, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.1.0/24\" }, \"dns\": { \"nameservers\": [ \"10.1.1.1\", \"8.8.8.8\" ] } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"linkInContainer\": false, \"mode\": \"l3\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.10.10/24\" } ] } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-net\", \"type\": \"macvlan\", \"master\": \"eth1\", \"linkInContainer\": false, \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } }", "{ \"name\": \"mynet\", \"cniVersion\": \"0.3.1\", \"type\": \"tap\", \"mac\": \"00:11:22:33:44:55\", \"mtu\": 1500, \"selinuxcontext\": \"system_u:system_r:container_t:s0\", \"multiQueue\": true, \"owner\": 0, \"group\": 0 \"bridge\": \"br1\" }", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: setsebool.service contents: | [Unit] Description=Set SELinux boolean for the TAP CNI plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/usr/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target", "oc apply -f setsebool-container-use-devices.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-e5e0c8e8be9194e7c5a882e047379cfa True False False 3 3 3 0 7d2h worker rendered-worker-d6c9ca107fba6cd76cdcbfcedcafa0f2 True False False 3 3 3 0 7d", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: blue2 spec: podSelector: ingress: - from: - podSelector: {}", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: ingress-ipblock annotations: k8s.v1.cni.cncf.io/policy-for: default/flatl2net spec: podSelector: matchLabels: name: access-control policyTypes: - Ingress ingress: - from: - ipBlock: cidr: 10.200.0.0/30", "{ \"cniVersion\": \"0.3.1\", \"name\": \"l2-network\", \"type\": \"ovn-k8s-cni-overlay\", \"topology\":\"layer2\", \"subnets\": \"10.100.200.0/24\", \"mtu\": 1300, \"netAttachDefName\": \"ns1/l2-network\", \"excludeSubnets\": \"10.100.200.0/29\" }", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet1 3 bridge: br-ex 4 state: present 5", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy 
metadata: name: ovs-br1-multiple-networks 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: interfaces: - name: ovs-br1 3 description: |- A dedicated OVS bridge with eth1 as a port allowing all VLANs and untagged traffic type: ovs-bridge state: up bridge: allow-extra-patch-ports: true options: stp: false port: - name: eth1 4 ovn: bridge-mappings: - localnet: localnet2 5 bridge: ovs-br1 6 state: present 7", "{ \"cniVersion\": \"0.3.1\", \"name\": \"ns1-localnet-network\", \"type\": \"ovn-k8s-cni-overlay\", \"topology\":\"localnet\", \"subnets\": \"202.10.130.112/28\", \"vlanID\": 33, \"mtu\": 1500, \"netAttachDefName\": \"ns1/localnet-network\" \"excludeSubnets\": \"10.100.200.0/29\" }", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: l2-network name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"l2-network\", 1 \"mac\": \"02:03:04:05:06:07\", 2 \"interface\": \"myiface1\", 3 \"ips\": [ \"192.0.2.20/24\" ] 4 } ]' name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/29\", \"network_name\": \"example_net_common\", 1 } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/24\", \"network_name\": \"example_net_common\", 1 } }", "oc edit network.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { \"name\": \"whereabouts-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\" } } type: Raw", "oc get all -n openshift-multus | grep whereabouts-reconciler", "pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s", "oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression=\"*/15 * * * *\"", "oc get all -n openshift-multus | grep whereabouts-reconciler", "pod/whereabouts-reconciler-2p7hw 1/1 Running 0 4m14s pod/whereabouts-reconciler-76jk7 1/1 Running 0 4m14s pod/whereabouts-reconciler-94zw6 1/1 Running 0 4m14s pod/whereabouts-reconciler-mfh68 1/1 Running 0 4m14s pod/whereabouts-reconciler-pgshz 1/1 Running 0 4m14s pod/whereabouts-reconciler-xn5xz 1/1 Running 0 4m14s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 4m16s", "oc -n openshift-multus 
logs whereabouts-reconciler-2p7hw", "2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_33_54.1375928161\": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_33_54.1375928161\": CHMOD 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..data_tmp\": RENAME 2024-02-02T16:33:54Z [verbose] using expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] configuration updated to file \"/cron-schedule/..data\". New cron expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id \"00c2d1c9-631d-403f-bb86-73ad104a6817\" - new cron expression: */15 * * * * 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/config\": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_26_17.3874177937\": REMOVE 2024-02-02T16:45:00Z [verbose] starting reconciler run 2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data 2024-02-02T16:45:00Z [debug] listing IP pools 2024-02-02T16:45:00Z [debug] no IP addresses to cleanup 2024-02-02T16:45:00Z [verbose] reconciler success", "cniVersion: operator.openshift.io/v1 kind: Network =metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"whereabouts-dual-stack\", \"cniVersion\": \"0.3.1, \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"ipRanges\": [ {\"range\": \"192.168.10.0/24\"}, {\"range\": \"2001:db8::/64\"} ] } }", "oc exec -it mypod -- ip a", "oc create namespace <namespace_name>", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { \"cniVersion\": \"0.3.1\", \"name\": \"tertiary-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l2\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.1.23/24\" } ] } }", "oc get network-attachment-definitions -n <namespace>", "NAME AGE test-network-1 14m", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"host-device\", \"device\": \"eth1\", \"ipam\": { \"type\": \"dhcp\" } }", "oc apply -f <file>.yaml", "oc new-project test-namespace", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: false needVhostNet: true nicSelector: vendor: \"15b3\" 1 deviceID: \"101b\" 2 rootDevices: [\"00:05.0\"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"", "oc apply -f sriov-node-network-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: \"off\" trust: \"on\"", "oc apply -f sriov-network-attachment.yaml", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: vlan-100 namespace: test-namespace spec: config: | { \"cniVersion\": \"0.4.0\", \"name\": \"vlan-100\", \"plugins\": [ { \"type\": \"vlan\", \"master\": \"ext0\", 1 \"mtu\": 1500, \"vlanId\": 100, \"linkInContainer\": true, 2 \"ipam\": {\"type\": \"whereabouts\", \"ipRanges\": [{\"range\": 
\"1.1.1.0/24\"}]} } ] }", "oc apply -f vlan100-additional-network-configuration.yaml", "apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: \"false\" --- apiVersion: v1 kind: Pod metadata: name: nginx-pod namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"sriov-network\", \"namespace\": \"test-namespace\", \"interface\": \"ext0\" 1 }, { \"name\": \"vlan-100\", \"namespace\": \"test-namespace\", \"interface\": \"ext0.100\" } ]' spec: securityContext: runAsNonRoot: true containers: - name: nginx-container image: nginxinc/nginx-unprivileged:latest securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] ports: - containerPort: 80 seccompProfile: type: \"RuntimeDefault\"", "oc apply -f pod-a.yaml", "oc describe pods nginx-pod -n test-namespace", "Name: nginx-pod Namespace: test-namespace Priority: 0 Node: worker-1/10.46.186.105 Start Time: Mon, 14 Aug 2023 16:23:13 -0400 Labels: <none> Annotations: k8s.ovn.org/pod-networks: {\"default\":{\"ip_addresses\":[\"10.131.0.26/23\"],\"mac_address\":\"0a:58:0a:83:00:1a\",\"gateway_ips\":[\"10.131.0.1\"],\"routes\":[{\"dest\":\"10.128.0.0 k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.26\" ], \"mac\": \"0a:58:0a:83:00:1a\", \"default\": true, \"dns\": {} },{ \"name\": \"test-namespace/sriov-network\", \"interface\": \"ext0\", \"mac\": \"6e:a7:5e:3f:49:1b\", \"dns\": {}, \"device-info\": { \"type\": \"pci\", \"version\": \"1.0.0\", \"pci\": { \"pci-address\": \"0000:d8:00.2\" } } },{ \"name\": \"test-namespace/vlan-100\", \"interface\": \"ext0.100\", \"ips\": [ \"1.1.1.1\" ], \"mac\": \"6e:a7:5e:3f:49:1b\", \"dns\": {} }] k8s.v1.cni.cncf.io/networks: [ { \"name\": \"sriov-network\", \"namespace\": \"test-namespace\", \"interface\": \"ext0\" }, { \"name\": \"vlan-100\", \"namespace\": \"test-namespace\", \"i openshift.io/scc: privileged Status: Running IP: 10.131.0.26 IPs: IP: 10.131.0.26", "oc new-project test-namespace", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bridge-network spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"bridge-network\", \"type\": \"bridge\", \"bridge\": \"br-001\", \"isGateway\": true, \"ipMasq\": true, \"hairpinMode\": true, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.0.0.0/24\", \"routes\": [{\"dst\": \"0.0.0.0/0\"}] } }'", "oc apply -f bridge-nad.yaml", "oc get network-attachment-definitions", "NAME AGE bridge-network 15s", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ipvlan-net namespace: test-namespace spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-net\", \"type\": \"ipvlan\", \"master\": \"ext0\", 1 \"mode\": \"l3\", \"linkInContainer\": true, 2 \"ipam\": {\"type\": \"whereabouts\", \"ipRanges\": [{\"range\": \"10.0.0.0/24\"}]} }'", "oc apply -f ipvlan-additional-network-configuration.yaml", "oc get network-attachment-definitions", "NAME AGE bridge-network 87s ipvlan-net 9s", "apiVersion: v1 kind: Pod metadata: name: pod-a namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"bridge-network\", \"interface\": \"ext0\" 1 }, { \"name\": \"ipvlan-net\", \"interface\": \"ext1\" } ]' spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault 
containers: - name: test-pod image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc apply -f pod-a.yaml", "oc get pod -n test-namespace", "NAME READY STATUS RESTARTS AGE pod-a 1/1 Running 0 2m36s", "oc exec -n test-namespace pod-a -- ip a", "1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 3: eth0@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default link/ether 0a:58:0a:d9:00:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.217.0.93/23 brd 10.217.1.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::488b:91ff:fe84:a94b/64 scope link valid_lft forever preferred_lft forever 4: ext0@if107: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.0.0.2/24 brd 10.0.0.255 scope global ext0 valid_lft forever preferred_lft forever inet6 fe80::bcda:bdff:fe7e:f437/64 scope link valid_lft forever preferred_lft forever 5: ext1@ext0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff inet 10.0.0.1/24 brd 10.0.0.255 scope global ext1 valid_lft forever preferred_lft forever inet6 fe80::beda:bd00:17e:f437/64 scope link valid_lft forever preferred_lft forever", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name>", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true", "oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml", "network.operator.openshift.io/cluster patched", "kind: ConfigMap apiVersion: v1 metadata: name: multi-networkpolicy-custom-rules namespace: openshift-multus data: custom-v6-rules.txt: | # accept NDP -p icmpv6 --icmpv6-type neighbor-solicitation -j ACCEPT 1 -p icmpv6 --icmpv6-type neighbor-advertisement -j ACCEPT 2 # accept RA/RS -p icmpv6 --icmpv6-type router-solicitation -j ACCEPT 3 -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT 4", "touch <policy_name>.yaml", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for:<namespace_name>/<network_name> spec: podSelector: {} policyTypes: - Ingress ingress: []", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {}", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-traffic-pod annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: api-allow annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - 
podSelector: matchLabels: app: bookstore", "oc apply -f <policy_name>.yaml -n <namespace>", "multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created", "oc get multi-networkpolicy", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit multi-networkpolicy <policy_name> -n <namespace>", "oc describe multi-networkpolicy <policy_name> -n <namespace>", "oc get multi-networkpolicy", "oc describe multi-networkpolicy <policy_name> -n <namespace>", "oc delete multi-networkpolicy <policy_name> -n <namespace>", "multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default namespace: default 1 annotations: k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name> 2 spec: podSelector: {} 3 policyTypes: 4 - Ingress 5 ingress: [] 6", "oc apply -f deny-by-default.yaml", "multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-external namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}", "oc apply -f web-allow-external.yaml", "multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-all-namespaces namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2", "oc apply -f web-allow-all-namespaces.yaml", "multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. 
Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-prod namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2", "oc apply -f web-allow-prod.yaml", "multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc create namespace prod", "oc label namespace/prod purpose=production", "oc create namespace dev", "oc label namespace/dev purpose=testing", "oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "wget: download timed out", "oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 
1", "metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc create -f <name>.yaml", "oc get pod <name> -o yaml", "oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:", "oc edit pod <name>", "metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"net1\" }, { \"name\": \"net2\", 1 \"default-route\": [\"192.0.2.1\"] 2 }]' spec: containers: - name: example-pod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: centos/tools", "oc exec -it <pod_name> -- ip route", "oc edit networks.operator.openshift.io cluster", "name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 }' type: Raw", "{ \"cniVersion\": \"0.3.1\", \"name\": \"<name>\", 1 \"plugins\": [{ 2 \"type\": \"macvlan\", \"capabilities\": { \"ips\": true }, 3 \"master\": \"eth0\", 4 \"mode\": \"bridge\", \"ipam\": { \"type\": \"static\" } }, { \"capabilities\": { \"mac\": true }, 5 \"type\": \"tuning\" }] }", "oc edit pod <name>", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"<name>\", 1 \"ips\": [ \"192.0.2.205/24\" ], 2 \"mac\": \"CA:FE:C0:FF:EE:00\" 3 } ]'", "oc exec -it <pod_name> -- ip a", "oc delete pod <name> -n <namespace>", "oc edit networks.operator.openshift.io cluster", "oc get network-attachment-definitions <network-name> -o yaml", "oc get network-attachment-definitions net1 -o go-template='{{printf \"%s\\n\" .spec.config}}' { \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens5\", \"mode\": \"bridge\", \"ipam\": {\"type\":\"static\",\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.128.2.1\"}],\"addresses\":[{\"address\":\"10.128.2.100/23\",\"gateway\":\"10.128.2.1\"}],\"dns\":{\"nameservers\":[\"172.30.0.10\"],\"domain\":\"us-west-2.compute.internal\",\"search\":[\"us-west-2.compute.internal\"]}} }", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1", "oc get network-attachment-definition --all-namespaces", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-vrf\", \"plugins\": [ 1 { \"type\": \"macvlan\", \"master\": \"eth1\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.23/24\" } ] } }, { \"type\": \"vrf\", 2 \"vrfname\": \"vrf-1\", 3 \"table\": 1001 4 }] }'", "oc create -f additional-network-attachment.yaml", "oc get network-attachment-definitions -n <namespace>", "NAME AGE additional-network-1 14m", "apiVersion: v1 kind: Pod metadata: name: pod-additional-net annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"test-network-1\" 1 } ]' spec: containers: - name: example-pod-1 command: [\"/bin/bash\", \"-c\", \"sleep 9000000\"] image: centos:8", 
"oc create -f pod-additional-net.yaml", "pod/test-pod created", "ip vrf show", "Name Table ----------------------- vrf-1 1001", "ip link", "5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/networking/multiple-networks
Chapter 3. New features
Chapter 3. New features This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage. The main features added by this release are: Containerized Cluster Red Hat Ceph Storage 5 supports only containerized daemons. It does not support non-containerized storage clusters. If you are upgrading a non-containerized storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the upgrade process includes the conversion to a containerized deployment. For more information, see the Upgrading a Red Hat Ceph Storage cluster from RHCS 4 to RHCS 5 section in the Red Hat Ceph Storage Installation Guide for more details. Cephadm Cephadm is a new containerized deployment tool that deploys and manages a Red Hat Ceph Storage 5 cluster by connecting to hosts from the manager daemon. The cephadm utility replaces ceph-ansible for Red Hat Ceph Storage deployment. The goal of Cephadm is to provide a fully-featured, robust, and well installed management layer for running Red Hat Ceph Storage. The cephadm command manages the full lifecycle of a Red Hat Ceph Storage cluster. The cephadm command can perform the following operations: Bootstrap a new Ceph storage cluster. Launch a containerized shell that works with the Ceph command-line interface (CLI). Aid in debugging containerized daemons. The cephadm command uses ssh to communicate with the nodes in the storage cluster and add, remove, or update Ceph daemon containers. This allows you to add, remove, or update Red Hat Ceph Storage containers without using external tools. The cephadm command has two main components: The cephadm shell launches a bash shell within a container. This enables you to run storage cluster installation and setup tasks, as well as to run ceph commands in the container. The cephadm orchestrator commands enable you to provision Ceph daemons and services, and to expand the storage cluster. For more information, see the Red Hat Ceph Storage Installation Guide . Management API The management API creates management scripts that are applicable for Red Hat Ceph Storage 5 and continues to operate unchanged for the version lifecycle. The incompatible versioning of the API would only happen across major release lines. For more information, see the Red Hat Ceph Storage Developer Guide . Disconnected installation of Red Hat Ceph Storage Red Hat Ceph Storage 5 supports the disconnected installation and bootstrapping of storage clusters on private networks. A disconnected installation uses custom images and configuration files and local hosts, instead of downloading files from the network. You can install container images that you have downloaded from a proxy host that has access to the Red Hat registry, or by copying a container image to your local registry. The bootstrapping process requires a specification file that identifies the hosts to be added by name and IP address. Once the initial monitor host has been bootstrapped, you can use Ceph Orchestrator commands to expand and configure the storage cluster. See the Red Hat Ceph Storage Installation Guide for more details. Ceph File System geo-replication Starting with the Red Hat Ceph Storage 5 release, you can replicate Ceph File Systems (CephFS) across geographical locations or between different sites. The new cephfs-mirror daemon does asynchronous replication of snapshots to a remote CephFS. See the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide for more details. 
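As an illustrative sketch only, not taken from this release note: on the source cluster, enabling snapshot mirroring typically involves deploying the cephfs-mirror daemon and enabling mirroring for a file system and directory. The file system name cephfs and the path /volumes/data are hypothetical placeholders, and the peer configuration between the two sites is omitted here:

ceph mgr module enable mirroring
ceph orch apply cephfs-mirror
ceph fs snapshot mirror enable cephfs
ceph fs snapshot mirror add cephfs /volumes/data

See the Red Hat Ceph Storage File System Guide referenced above for the full, supported procedure.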
A new Ceph File System client performance tool Starting with the Red Hat Ceph Storage 5 release, the Ceph File System (CephFS) provides a top-like utility to display metrics on Ceph File Systems in real time. The cephfs-top utility is a curses-based Python script that uses the Ceph Manager stats module to fetch and display client performance metrics. See the Using the cephfs-top utility section in the Red Hat Ceph Storage File System Guide for more details. Monitoring the Ceph Object Gateway multisite using the Red Hat Ceph Storage Dashboard The Red Hat Ceph Storage Dashboard can now be used to monitor a Ceph Object Gateway multisite configuration. After the multiple zones are set up using the cephadm utility, the buckets of one zone are visible to the other zones and sites. You can also create, edit, and delete buckets on the dashboard. See the Management of buckets of a multisite object configuration on the Ceph dashboard chapter in the Red Hat Ceph Storage Dashboard Guide for more details. Improved BlueStore space utilization The Ceph Object Gateway and the Ceph File System (CephFS) store small objects and files as individual objects in RADOS. With this release, the default value of BlueStore's min_alloc_size for SSDs and HDDs is 4 KB. This enables better use of space with no impact on performance. See the OSD BlueStore chapter in the Red Hat Ceph Storage Administration Guide for more details. 3.1. The Cephadm utility cephadm supports colocating multiple daemons on the same host With this release, multiple daemons, such as the Ceph Object Gateway and Ceph Metadata Server (MDS), can be deployed on the same host, thereby providing an additional performance benefit. Example For single-node deployments, cephadm requires at least two running Ceph Manager daemons in upgrade scenarios. It is still highly recommended even outside of upgrade scenarios, but the storage cluster will function without it. Configuration of NFS-RGW using Cephadm is now supported In Red Hat Ceph Storage 5.0, configuring NFS-RGW required using the dashboard as a workaround, and it was recommended that such users delay upgrading until Red Hat Ceph Storage 5.1. With this release, NFS-RGW configuration is supported, and users with this configuration can upgrade their storage cluster; it works as expected. Users can now bootstrap their storage clusters with custom monitoring stack images Previously, users had to adjust the image used for their monitoring stack daemons manually after bootstrapping the cluster. With this release, you can specify custom images for monitoring stack daemons during bootstrap by passing a configuration file formatted as follows: Syntax You can run bootstrap with the --config CONFIGURATION_FILE_NAME option in the command. If you have other configuration options, you can simply add the lines above to your configuration file before bootstrapping the storage cluster. cephadm enables automated adjustment of osd_memory_target With this release, cephadm enables automated adjustment of the osd_memory_target configuration parameter by default. Users can now specify CPU limits for the daemons by service With this release, you can customize the CPU limits for all daemons within any given service by adding the CPU limit to the service specification file via the extra_container_args field. Example cephadm now supports IPv6 networks for Ceph Object Gateway deployment With this release, cephadm supports specifying an IPv6 network for Ceph Object Gateway specifications.
An example of a service configuration file for deploying the Ceph Object Gateway is: Example The ceph nfs export create rgw command now supports exporting Ceph Object Gateway users Previously, the ceph nfs export create rgw command would only create Ceph Object Gateway exports at the bucket level. With this release, the command creates Ceph Object Gateway exports at both the user and bucket levels. Syntax Example 3.2. Ceph Dashboard Users can now view the HAProxy metrics on the Red Hat Ceph Storage Dashboard With this release, Red Hat introduces a new Grafana dashboard for the ingress service used for Ceph Object Gateway endpoints. You can now view four HAProxy metrics under Ceph Object Gateway Daemons Overall Performance: Total responses by HTTP code, Total requests/responses, Total number of connections, and Current total of incoming/outgoing bytes. Users can view mfa_ids on the Red Hat Ceph Storage Dashboard With this release, you can view the mfa_ids for a Ceph Object Gateway user configured with multi-factor authentication (MFA) in the User Details section on the Red Hat Ceph Storage Dashboard. 3.3. Ceph Manager plugins The global recovery event in the progress module is now optimized With this release, computing the progress of global recovery events is optimized for a large number of placement groups in a large storage cluster by using C++ code instead of the Python module, thereby reducing CPU utilization. 3.4. The Ceph Volume utility The lvm commands do not cause metadata corruption when run within containers Previously, when the lvm commands were run directly within containers, they could cause LVM metadata corruption. With this release, ceph-volume uses the host namespace to run the lvm commands, which avoids metadata corruption. 3.5. Ceph Object Gateway Lock contention messages from the Ceph Object Gateway reshard queue are marked as informational Previously, when the Ceph Object Gateway failed to get a lock on a reshard queue, the output log entry would appear to be an error, causing concern to customers. With this release, these entries in the output log appear as informational and are tagged with "INFO:". Support for OIDC JWT token validation using modulus and exponent is available With this release, OIDC JSON web token (JWT) validation supports the use of a modulus and exponent for signature calculation, which extends the set of methods available for validating OIDC JWTs. The role name and role session fields are now available in the ops log for temporary credentials Previously, the role name and role session were not available, and it was difficult for the administrator to know which role was being assumed and which session was active for the temporary credentials in use. With this release, the role name and role session are recorded in the ops log for temporary credentials, returned by the AssumeRole* APIs, that are used to perform S3 operations. Users can now use the --bucket argument to process bucket lifecycles With this release, you can provide a --bucket=BUCKET_NAME argument to the radosgw-admin lc process command to process the lifecycle for the corresponding bucket. This is convenient for debugging lifecycle problems that affect specific buckets and for backfilling lifecycle processing for specific buckets that have fallen behind. 3.6. Multi-site Ceph Object Gateway Multi-site configuration supports dynamic bucket index resharding Previously, only manual resharding of the buckets for multi-site configurations was supported.
With this release, dynamic bucket resharding is supported in multi-site configurations. Once the storage clusters are upgraded, enable the resharding feature and reshard the buckets either manually with the radosgw-admin bucket reshard command or automatically with dynamic resharding, independently of other zones in the storage cluster. 3.7. RADOS Use the noautoscale flag to manage the PG autoscaler With this release, the pg_autoscaler can be turned on or off globally using the noautoscale flag. This flag is off by default. When this flag is set, all pools have pg_autoscale_mode set to off. For more information, see the Manually updating autoscale profile section in the Red Hat Ceph Storage Storage Strategies Guide . Users can now create pools with the --bulk flag With this release, you can create pools with the --bulk flag. It uses a profile of the pg_autoscaler that provides better performance from the start: the pool begins with a full complement of placement groups (PGs) and scales down only when the usage ratio across the pool is not even. If the pool does not have the --bulk flag, the pool starts out with minimal PGs. To create a pool with the bulk flag: Syntax To set or unset the bulk flag of an existing pool: Syntax To get the bulk flag of an existing pool: Syntax
[ "service_type: rgw placement: label: rgw count-per-host: 2", "[mgr] mgr/cephadm/container_image_grafana = GRAFANA_IMAGE_NAME mgr/cephadm/container_image_alertmanager = ALERTMANAGER_IMAGE_NAME mgr/cephadm/container_image_prometheus = PROMETHEUS_IMAGE_NAME mgr/cephadm/container_image_node_exporter = NODE_EXPORTER_IMAGE_NAME", "service_type: mon service_name: mon placement: hosts: - host01 - host02 - host03 extra_container_args: - \"--cpus=2\" service_type: osd service_id: osd_example placement: hosts: - host01 extra_container_args: - \"--cpus=2\" spec: data_devices: paths: - /dev/sdb", "service_type: rgw service_id: rgw placement: count: 3 networks: - fd00:fd00:3000::/64", "ceph nfs export create rgw --cluster-id CLUSTER_ID --pseudo-path PSEUDO_PATH --user-id USER_ID [--readonly] [--client_addr VALUE ...] [--squash VALUE ]", "ceph nfs export create rgw --cluster-id mynfs --pseudo-path /bucketdata --user-id myuser --client_addr 192.168.10.0/24", "ceph osd pool create POOL_NAME --bulk", "ceph osd pool set POOL_NAME bulk TRUE/FALSE/1/0 ceph osd pool unset POOL_NAME bulk TRUE/FALSE/1/0", "ceph osd pool get POOL_NAME --bulk" ]
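The following is a minimal sketch of two commands described above that are not included in the listing: per-bucket lifecycle processing and the global noautoscale flag. The bucket name is a hypothetical placeholder, and the noautoscale syntax should be verified against the Red Hat Ceph Storage Storage Strategies Guide for your release.

# Process the lifecycle for a single bucket (bucket name is an example placeholder)
radosgw-admin lc process --bucket=mybucket

# Turn the PG autoscaler off globally, then check the flag (sketch; verify syntax)
ceph osd pool set noautoscale
ceph osd pool get noautoscale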
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.1_release_notes/enhancements
Appendix F. Examples using the Secure Token Service APIs
Appendix F. Examples using the Secure Token Service APIs These examples use Python's boto3 module to interface with the Ceph Object Gateway's implementation of the Secure Token Service (STS). In these examples, TESTER2 assumes a role created by TESTER1 in order to access S3 resources owned by TESTER1 , based on the permission policy attached to the role. The AssumeRole example creates a role, assigns a policy to the role, then assumes the role to get temporary credentials and accesses S3 resources using those temporary credentials. The AssumeRoleWithWebIdentity example authenticates users using an external application with Keycloak, an OpenID Connect identity provider, assumes a role to get temporary credentials, and accesses S3 resources according to the permission policy of the role. AssumeRole Example import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER1\"]},\"Action\":[\"sts:AssumeRole\"]}]}" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = sts_client.assume_role( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600 ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets() AssumeRoleWithWebIdentity Example import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"Federated\":\[\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/demo\"\]\},\"Action\":\[\"sts:AssumeRoleWithWebIdentity\"\],\"Condition\":\{\"StringEquals\":\{\"localhost:8080/auth/realms/demo:app_id\":\"customer-portal\"\}\}\}\]\}" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = sts_client.assume_role_with_web_identity( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600, WebIdentityToken=<Web Token> ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key =
response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets() Additional Resources See the Test S3 Access section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details on using Python's boto module.
[ "import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) policy_document = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER1\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\\"s3:*\\\",\\\"Resource\\\":\\\"arn:aws:s3:::*\\\"}}\" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = sts_client.assume_role( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600 ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets()", "import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) policy_document = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"Federated\\\":\\[\\\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/demo\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRoleWithWebIdentity\\\"\\],\\\"Condition\\\":\\{\\\"StringEquals\\\":\\{\\\"localhost:8080/auth/realms/demo:app_id\\\":\\\"customer-portal\\\"\\}\\}\\}\\]\\}\" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\\"s3:*\\\",\\\"Resource\\\":\\\"arn:aws:s3:::*\\\"}}\" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = client.assume_role_with_web_identity( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600, WebIdentityToken=<Web Token> ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets()" ]
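The examples above assume that the TESTER1 and TESTER2 Ceph Object Gateway users already exist and that TESTER1 is permitted to create roles. One possible way to prepare them with radosgw-admin is sketched below; the display names are arbitrary, the keys are the placeholders used in the examples, and STS itself must also be enabled on the gateway (see the Secure Token Service section of the Object Gateway guide).

# Sketch only: create the example users with the placeholder credentials used above
radosgw-admin user create --uid=TESTER1 --display-name="Tester One" --access-key=ACCESS_KEY_OF_TESTER1 --secret=SECRET_KEY_OF_TESTER1
radosgw-admin user create --uid=TESTER2 --display-name="Tester Two" --access-key=ACCESS_KEY_OF_TESTER2 --secret=SECRET_KEY_OF_TESTER2
# Allow TESTER1 to create and manage roles
radosgw-admin caps add --uid=TESTER1 --caps="roles=*"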
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/developer_guide/examples-using-the-secure-token-service-apis_dev
Chapter 2. New Features
Chapter 2. New Features This section describes new features introduced in Red Hat OpenShift Data Foundation 4.9. User interface, product component, and documentation rebranding OpenShift Container Storage, based on the open source Ceph technology, has expanded its scope and foundational role in a containerized, hybrid cloud environment since its introduction. To better reflect these foundational and infrastructure distinctives, OpenShift Container Storage is now OpenShift Data Foundation . OpenShift Data Foundation 4.9 now includes: Improved dashboards for viewing all storage system status and metrics Easy to use wizard for creating storage system Rebranded user interface and documentation To view documentation for OpenShift Container Storage version 4.8 and earlier, see Product Documentation for Red Hat OpenShift Container Storage . To update from OpenShift Container Storage 4.8 to OpenShift Data Foundation 4.9, you must freshly install the OpenShift Data Foundation operator from the OpenShift Container Platform Operator Hub. This fresh operator installation upgrades OpenShift Container Storage version 4.8 and all its components to OpenShift Data Foundation version 4.9. For more information, see Upgrading to OpenShift Data Foundation . Multicloud Object Gateway bucket replication Data replication from one Multicloud Object Gateway (MCG) bucket to another MCG bucket provides higher resiliency and better collaboration options. These buckets can be either data buckets or namespace buckets backed by any supported storage solutions. For more information, see Multicloud Object Gateway bucket replication . Ability to view pool compression metrics In this release, you can view the pool compression metrics, which provide information about the amount of storage space saved, the effectiveness of pool compression when it is enabled, and its impact on capacity consumption. The per-pool metrics available with this release provide information that enables you to reduce cost and consume data more efficiently. Also, you can disable compression if it is not effective. For more information, see Pool metrics . Automated scaling of Multicloud Object Gateway endpoint pods You can use the automated scaling of Multicloud Object Gateway endpoint pods feature to automate the resource adjustments based on increases or decreases to the load. This provides better performance and serviceability to manage your production resources for your S3 load. For more information, see Automatic scaling of Multicloud Object Gateway endpoints . Deployment and monitoring layer for pluggable external storage (IBM FlashSystem(R)) In this release, you can connect to and monitor IBM FlashSystem(R) storage using OpenShift Data Foundation. OpenShift Data Foundation extends IBM FlashSystem to file and object storage while providing a single view for both the underlying storage and OpenShift Data Foundation data layer. For more information, see Deploy OpenShift Data Foundation using IBM FlashSystem .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/4.9_release_notes/new_features
Chapter 6. Revoking certificates and issuing CRLs
Chapter 6. Revoking certificates and issuing CRLs The Certificate System provides methods for revoking certificates and for producing lists of revoked certificates, called certificate revocation lists (CRLs). This chapter describes the methods for revoking a certificate, describes CMC revocation, and provides details about CRLs and setting up CRLs. 6.1. About revoking certificates Server and client applications that use public-key certificates as ID tokens need access to information about the validity of a certificate. Because one of the factors that determines the validity of a certificate is its revocation status, these applications need to know whether the certificate being validated has been revoked. The CA has a responsibility to do the following: Revoke the certificate if a revocation request is received by the CA and approved. Make the revoked certificate status available to parties or applications that need to verify its validity status. Certificates can be revoked by an end user (the original owner of the certificate) or by a Certificate Manager agent. An end user can revoke only certificates that contain the same subject name as the certificate presented for authentication. Whenever a revocation request is approved, the Certificate Manager automatically updates the status of the certificate in its internal database: it marks the copy of the certificate in its internal database as revoked and, if configured to do so, removes the revoked certificate from the publishing directory. These changes are reflected in the CRL issued by the CA. 6.1.1. Certificate Revocation List (CRL) One of the standard methods for conveying the revocation status of certificates is by publishing a list of revoked certificates, known as certificate revocation list (CRL). A CRL is a publicly available list of certificates that have been revoked. The Certificate Manager can be configured to generate CRLs. These CRLs can be created to conform to X.509 standards by enabling extension-specific modules in the CRL configuration. The server supports standard CRL extensions through its CRL issuing points framework; see Section 6.3.3, "Setting CRL extensions" for more information on setting up CRL extensions for issuing points. The Certificate Manager can generate a CRL every time a certificate is revoked and at periodic intervals. If publishing is set up, the CRLs can be published to a file, an LDAP directory, or an OCSP responder. A CRL is issued and digitally signed by the CA that issued the certificates listed in the CRL or by an entity that has been authorized by that CA to issue CRLs. The CA may use a single key pair to sign both the certificates and CRLs it issues or two separate key pairs, one for signing certificates and another one for signing CRLs. By default, the Certificate Manager uses a single key pair for signing the certificates it issues and CRLs it generates. To create another key pair for the Certificate Manager and use it exclusively for signing CRLs, see Section 6.3.4, "Setting a CA to use a different certificate to sign CRLs" . CRLs are generated when issuing points are defined and configured and when CRL generation is enabled. When CRLs are enabled, the server collects revocation information as certificates are revoked. The server attempts to match the revoked certificate against all issuing points that are set up. A given certificate can match none of the issuing points, one of the issuing points, several of the issuing points, or all of the issuing points. 
When a certificate that has been revoked matches an issuing point, the server stores the information about the certificate in the cache for that issuing point. The cache is copied to the internal directory at the intervals set for copying the cache. When the interval for creating a CRL is reached, a CRL is created from the cache. If a delta CRL has been set up for this issuing point, a delta CRL is also created at this time. The full CRL contains all revoked certificate information since the Certificate Manager began collecting this information. The delta CRL contains all revoked certificate information since the last update of the full CRL. The full CRLs are numbered sequentially, as are delta CRLs. A full CRL and a delta CRL can have the same number; in that case, the delta CRL has the same number as the full CRL. For example, if the full CRL is the first CRL, it is CRL 1. The delta CRL is Delta CRL 2. The data combined in CRL 1 and Delta CRL 2 is equivalent to the full CRL, which is CRL 2. NOTE When changes are made to the extensions for an issuing point, no delta CRL is created with the full CRL for that issuing point. A delta CRL is created with the second full CRL that is created, and then with all subsequent full CRLs. The internal database stores only the latest CRL and delta CRL. As each new CRL is created, the old one is overwritten. When CRLs are published, each update to the CRL and delta CRL is published to the locations specified in the publishing setup. The method of publishing determines how many CRLs are stored. For file publishing, each CRL is published to a file named using the CRL number, so no file is overwritten. For LDAP publishing, each published CRL replaces the old CRL in the attribute containing the CRL in the directory entry. By default, CRLs do not contain information about revoked expired certificates. The server can include revoked expired certificates by enabling that option for the issuing point. If expired certificates are included, information about revoked certificates is not removed from the CRL when the certificate expires. If expired certificates are not included, information about revoked certificates is removed from the CRL when the certificate expires. 6.1.2. User-initiated revocation When an end user submits a certificate revocation request, the first step in the revocation process is for the Certificate Manager to identify and authenticate the end user to verify that the user is attempting to revoke his own certificate, not a certificate belonging to someone else. In SSL/TLS client authentication, the server expects the end user to present a certificate that has the same subject name as the one to be revoked and uses that certificate for authentication purposes. The server verifies the authenticity of a revocation request by mapping the subject name in the certificate presented for client authentication to certificates in its internal database. The server revokes the certificate only if the certificate maps successfully to one or more valid or expired certificates in its internal database. After successful authentication, the server lists the valid or expired certificates that match the subject name of the certificate presented for client authentication. The user can then either select the certificates to be revoked or revoke all certificates in the list. 6.1.3. Reasons for revoking a certificate A Certificate Manager can revoke any certificate it has issued.
There are generally accepted reason codes for revoking a certificate that are often included in the CRL, such as the following: 0 . Unspecified; no particular reason is given. 1 . The private key associated with the certificate was compromised. 2 . The private key associated with the CA that issued the certificate was compromised. 3 . The owner of the certificate is no longer affiliated with the issuer of the certificate and either no longer has rights to the access gained with the certificate or no longer needs it. 4 . Another certificate replaces this one. 5 . The CA that issued the certificate has ceased to operate. 6 . The certificate is on hold pending further action. It is treated as revoked but may be taken off hold in the future so that the certificate is active and valid again. 8 . The certificate is going to be removed from the CRL because it was removed from hold. This only occurs in delta CRLs. 9 . The certificate is revoked because the privilege of the owner of the certificate has been withdrawn. 6.1.4. CRL issuing points Because CRLs can grow very large, there are several methods to minimize the overhead of retrieving and delivering large CRLs. One of these methods partitions the entire certificate space and associates a separate CRL with every partition. This partition is called a CRL issuing point , the location where a subset of all the revoked certificates is maintained. Partitioning can be based on whether the revoked certificate is a CA certificate, whether it was revoked for a specific reason, or whether it was issued using a specific profile. Each issuing point is identified by its name. By default, the Certificate Manager generates and publishes a single CRL, the master CRL . An issuing point can generate CRLs for all certificates, for only CA signing certificates, or for all certificates including expired certificates. Once the issuing points have been defined, they can be included in certificates so that an application that needs to check the revocation status of a certificate can access the CRL issuing points specified in the certificate instead of the master or main CRL. Since the CRL maintained at the issuing point is smaller than the master CRL, checking the revocation status is much faster. CRL distribution points can be associated with certificates by setting the CRLDistributionPoint extension. 6.1.5. Delta CRLs Delta CRLs can be issued for any defined issuing point. A delta CRL contains information about any certificates revoked since the last update to the full CRL. Delta CRLs for an issuing point are created by enabling the DeltaCRLIndicator extension. 6.1.6. Publishing CRLs The Certificate Manager can publish the CRL to a file, an LDAP-compliant directory, or to an OCSP responder. Where and how frequently CRLs are published are configured in the Certificate Manager, as described in Chapter 7, Publishing certificates and CRLs . Because CRLs can be very large, publishing CRLs can take a very long time, and it is possible for the process to be interrupted. Special publishers can be configured to publish CRLs to a file over HTTP1.1, and, if the process is interrupted, the CA subsystem's web server can resume publishing at the point it was interrupted, instead of having to begin again. This is described in Section 7.7, "Setting up resumable CRL downloads" . 6.1.7. Certificate revocation pages The end-entities page of the Certificate Manager includes default HTML forms for revocation authenticated by an SSL/TLS client. 
The forms are accessible from the Revocation tab. You can see the form for such a revocation by clicking the User Certificate link. To change the form appearance to suit organization's requirements, edit the UserRevocation.html , the form that allows the SSL/TLS client authenticated revocation of client or personal certificates. The file is in the /var/lib/instance_name/webapps/subsystem_type/ee/subsystem_type directory. 6.2. Revoking Certificates 6.2.1. Performing a CMC revocation Similar to Certificate Management over CMS (CMC) enrollment, CMC revocation enables users to set up a revocation client, and sign the revocation request with either an agent certificate or a user certificate with a matching subjectDN attribute. Then the user can send the signed request to the Certificate Manager. Alternatively, CMC revocation can also be authenticated using the Shared Secret Token mechanism. For details, see Adding a CMC Shared Secret to a certificate for certificate revocations . Regardless of whether a user or agent signs the request or if a Shared Secret Token is used, the Certificate Manager automatically revokes the certificate when it receives a valid revocation request. Certificate System provides the following utilities for CMC revocation requests: CMCRequest . For details, see Section 6.2.1.1, "Revoking a certificate using CMCRequest " . CMCRevoke . For details, see Section 6.2.1.2, "Revoking a certificate using CMCRevoke " . Important Red Hat recommends using the CMCRequest utility to generate CMC revocation requests, because it provides more options than CMCRevoke . 6.2.1.1. Revoking a certificate using CMCRequest To revoke a certificate using CMCRequest : Create a configuration file for the CMC revocation request, such as /home/user_name/cmc-request.cfg , with the following content: Create the CMC request: If the command succeeds, the CMCRequest utility stores the CMC request in the file specified in the output parameter in the request configuration file. Create a configuration file, such as /home/user_name/cmc-submit.cfg , to use in a later step to submit the CMC revocation request to the CA. Add the following content to the created file: Important If the CMC revocation request is signed, set the secure and clientmode parameters to true and, additionally, fill the nickname parameter. Depending on who signed the request, the servlet parameter in the configuration file for HttpClient must be set accordingly: If an agent signed the request, set: RSA: servlet=/ca/ee/ca/profileSubmitCMCFull?profileId=caFullCMCUserCert ECC: servlet=/ca/ee/ca/profileSubmitCMCFull?profileId=caECFullCMCUserCert If a user signed the request, set: RSA: servlet=/ca/ee/ca/profileSubmitCMCFull?profileId=caFullCMCSharedTokenCert ECC: servlet=/ca/ee/ca/profileSubmitCMCFull?profileId=caECFullCMCSharedTokenCert Submit the CMC request: For further details about revoking a certificate using CMCRequest , see the CMCRequest(1) man page. 6.2.1.2. Revoking a certificate using CMCRevoke The CMC revocation utility, CMCRevoke , is used to sign a revocation request with an agent's certificate. This utility simply passes the required information - certificate serial number, issuer name, and revocation reason - to identify the certificate to revoke, and then the require information to identify the CA agent performing the revocation (certificate nickname and the database with the certificate). 
Note For more information on enabling CMCRevoke , see 9.6.4 "Enabling CMCRevoke for the Web User Interface" section in the Planning, Installation and Deployment Guide (Common Criteria Edition) . The reason the certificate is being revoked can be any of the following (with the number being the value passed to the CMCRevoke utility): 0 - unspecified 1 - the key was compromised 2 - the CA key was compromised 3 - the employee's affiliation changed 4 - the certificate has been superseded 5 - cessation of operation 6 - the certificate is on hold Testing CMCRevoke Create a CMC revocation request for an existing certificate. IMPORTANT The correct syntax does not have a space between the argument and its value. For example, giving a serial number of 26 is -s26 , not -s 26 . If certain values include spaces, surround these values in quotation marks. For example, -c"test comment" . For example, if the directory containing the agent certificate is ~jsmith/.mozilla/firefox/ , the nickname of the certificate is AgentCert , and the serial number of the certificate is 22 , the command is as shown: Open the end-entities page. Select the Revocation tab. Select the CMC Revoke link on the menu. Paste the output from the CMCRevoke into the text area. Remove -----BEGIN NEW CERTIFICATE REQUEST----- and ----END NEW CERTIFICATE REQUEST----- from the pasted content. Click Submit . The returned page should confirm that the correct certificate has been revoked. 6.2.2. Performing revocation as an agent from the Web UI A Certificate Manager agent can use the agent services page to find a specific certificate issued by the Certificate System or to retrieve a list of certificates that match specified criteria. The certificates which are retrieved can be examined or revoked by the agent. The Certificate Manager agent can also manage the certificate revocation list (CRL). 6.2.2.1. Listing certificates It is possible to list certificates within a range of serial numbers. All certificates within the range may be displayed or, if the agent selects, only those that are currently valid. To find a specific certificate or to list certificates by serial number: Open the Certificate Manager agent services page. Click List Certificates . To find a certificate with a specific serial number, enter the serial number in both the upper limit and lower limit fields of the List Certificates form, in either decimal or hexadecimal form. Use 0x to indicate the beginning of a hexadecimal number; for example, 0x00000006 . Serial numbers are displayed in hexadecimal form in the Search Results and Details pages. To find all certificates within a range of serial numbers, enter the upper and lower limits of the serial number range in decimal or hexadecimal form. Leaving either the lower limit or upper limit field blank displays the certificate with the specified number, plus all certificates before or after it in sequence. To limit the returned list to valid certificates, select the check boxes labeled with filtering methods. It is possible to include revoked certificates, to include expired certificates or certificates that are not yet valid, or to display only valid certificates. Enter the maximum number of certificates matching the criteria that should be returned in the results page. When any number is entered, the first certificates up to that number matching the criteria are displayed. Click Find . The Certificate System displays a list of the certificates that match the search criteria. 
Select a certificate in the list to examine it in more detail or perform various operations on it. For more information, refer to Section 6.2.2.3, "Examining certificate details" . 6.2.2.2. Searching for certificates (advanced) Search for certificates by more complex criteria than serial number using the advanced search form. To perform an advanced search for certificates: Open the Certificate Manager agent services page. The agent must submit the proper client certificate to access this page. Click Search for Certificates to display the Search for Certificates form to specify search criteria. To search by particular criteria, use one or more of the sections of the Search for Certificates form. To use a section, select the check box, then fill in any necessary information. Serial Number Range. Finds a certificate with a specific serial number or lists all certificates within a range of serial numbers. To find a certificate with a specific serial number, enter the serial number in both the upper limit and lower limit fields in either decimal or hexadecimal. Use 0x to indicate the beginning of a hexadecimal number, such as 0x2A . Serial numbers are displayed in hexadecimal form in the Search Results and Details pages. To find all certificates within a range of serial numbers, enter the upper and lower limits of the serial number range in decimal or hexadecimal. Leaving either the lower limit or upper limit field blank returns all certificates before or after the number specified. Status. Selects certificates by their status. A certificate has one of the following status codes: Valid. A valid certificate has been issued, its validity period has begun but not ended, and it has not been revoked. Invalid. An invalid certificate has been issued, but its validity period has not yet begun. Revoked. The certificate has been revoked. Expired. An expired certificate has passed the end of its validity period. Revoked and Expired. The certificate has passed its validity period and been revoked. Subject Name. Lists certificates belonging to a particular owner; it is possible to use wildcards in this field. NOTE Certificate System certificate request forms support all UTF-8 characters for the common name, organizational unit, and requester name fields. The common name and organization unit fields are included in the subject name of the certificate. This means that the searches for subject names support UTF-8 characters. This support does not include supporting internationalized domain names. Revocation Information. Lists certificates that have been revoked during a particular period, by a particular agent, or for a particular reason. For example, an agent can list all certificates revoked between July 2005 and April 2006 or all certificates revoked by the agent with the username admin . To list certificates revoked within a time period, select the day, month, and year from the drop-down lists to identify the beginning and end of the period. To list certificates revoked by a particular agent, enter the name of the agent; it is possible to use wildcards in this field. To list certificates revoked for a specific reason, select the revocation reasons from the list. Issuing Information. Lists certificates that have been issued during a particular period or by a particular agent. For example, an agent can list all certificates issued between July 2005 and April 2006 or all certificates issued by the agent with the username jsmith . 
To list certificates issued within a time period, select the day, month, and year from the drop-down lists to identify the beginning and end of the period. To list certificates issued by a particular agent, enter the name of the agent; it is possible to use wildcards in this field. To list certificates enrolled through a specific profile, enter the name of the profile. Dates of Validity. List certificates that become effective or expire during a particular period. For example, an agent can list all certificates that became valid on June 1, 2003, or that expired between January 1, 2006, and June 1, 2006. It is also possible to list certificates that have a validity period of a certain length of time, such as all certificates that are valid for less than one month. To list certificates that become effective or expire within a time period, select the day, month, and year from the drop-down lists to identify the beginning and end of the period. To list certificates that have a validity period of a certain length in time, select Not greater than or Not less than from the drop-down list, enter a number, and select a time unit from the drop-down list: days, weeks, months, or years. Basic Constraints. Shows CA certificates that are based on the Basic Constraints extension. Type. Lists certain types of certificates, such as all certificates for subordinate CAs. This search works only for certificates containing the Netscape Certificate Type extension, which stores type information. For each type, choose from the drop-down list to find certificates where that type is On , Off or Do Not Care . To find a certificate with a specific subject name, use the Subject Name section. Select the check box, then enter the subject name criteria. Enter values for the included search criteria and leave the others blank. The standard tags or components are as follows: Email address. Narrows the search by email address. Common name. Finds certificates associated with a specific person or server. UserID. Searches certificates by the user ID for the person to whom the certificate belongs. Organization unit. Narrows the search to a specific division, department, or unit within an organization. Organization. Narrows the search by organization. Locality. Narrows the search by locality, such as the city. State. Narrows the search by state or province. Country. Narrows the search by country; use the two-letter country code, such as US . NOTE Certificate System certificate request forms support all UTF-8 characters for the common name and organizational unit fields. The common name and organization unit fields are included in the subject name of the certificate. This means that the searches for subject names or those elements in the subject name support UTF-8 characters. This support does not include supporting internationalized domain names, such as in email addresses. After entering the field values for the server to match, specify the type of search to perform: Exact searches for certificate subject names match the exact components specified and contain none of the components left blank. Wildcards cannot be used in this type of search. Partial searches for certificate subject names match the specified components, but the returned certificates may also contain values in components that were left blank. Wildcard patterns can be used in this type of search by using a question mark ( ? ) to match an arbitrary single character and an asterisk ( \ *) to match an arbitrary string of characters. 
NOTE Placing a single asterisk in a search field means that the component must be in the certificate's subject name but may have any value. Leave the field blank if it does not matter if the field is present. After entering the search criteria, scroll to the bottom of the form, and enter the number of certificates matching the specified criteria that should be returned. Setting the number of certificates to be returned returns the first certificates found that match the search criteria up to that number. It is also possible to put a time limit on the search in seconds. Click Find . The Search Results form appears, showing a list of the certificates that match the search criteria. Select a certificate in the list to examine it in more detail. For more information, refer to Section 6.2.2.3, "Examining certificate details" . 6.2.2.3. Examining certificate details On the agent services page, click List Certificates or Search for Certificates , specify search criteria, and click Find to display a list of certificates. On the Search Results form, select a certificate to examine. If the desired certificate is not shown, scroll to the bottom of the list, specify an additional number of certificates to be returned, and click Find . The system displays the certificates up to that number that match the original search criteria. After selecting a certificate, click the Details button at the left side of its entry. The Certificate page shows the detailed contents of the selected certificate and instructions for installing the certificate in a server or in a web browser. Figure 6.1. Certificate details The certificate is shown in base-64 encoded form at the bottom of the Certificate page, under the heading Installing this certificate in a server . 6.2.2.4. Revoking certificates Only Certificate Manager agents can revoke certificates other than their own. A certificate must be revoked if one of the following situations occurs: The owner of the certificate has changed status and no longer has the right to use the certificate. The private key of a certificate owner has been compromised. To revoke one or more certificates, search for the certificates to revoke using the Revoke Certificates button. While the search is similar to the one through the Search for Certificates form, the Search Results form returned by this search offers the option of revoking one or all of the returned certificates. 6.2.2.4.1. Revoking certificates Open the Certificate Manager agent services page. Click Revoke Certificates . NOTE The search form that appears has the same search criteria sections as the Search for Certificates form. Specify the search criteria by selecting the check boxes for the sections and filling in the required information. Scroll to the bottom of the form, and set the number of matching certificates to display. Click Find . The search returns a list of matching certificates. It is possible to revoke one or all certificates in the list. TIP If the search criteria are very specific and all of the certificates returned are to be revoked, then click the Revoke ALL # Certificates button at the bottom of the page. The number shown on the button is the total number of certificates returned by the search. This is usually a larger number than the number of certificates displayed on the current page. Verify that all of the certificates returned by the search should be revoked, not only those displayed on the current page. Click the Revoke button to the certificate to be revoked. 
CAUTION Whether revoking a single certificate or a list of certificates, be extremely careful that the correct certificate has been selected or that the list contains only certificates which should be revoked. Once a revocation operation has been confirmed, there is no way to undo it. Select an invalidity date. The invalidity date is the date which it is known or suspected that the user's private key was compromised or that the certificate became invalid. A set of drop down lists allows the agent to select the correct invalidity date. Select a reason for the revocation. Key compromised CA key compromised Affiliation changed Certificate superseded Cessation of operation Certificate is on hold Enter any additional comment. The comment is included in the revocation request. When the revocation request is submitted, it is automatically approved, and the certificate is revoked. Revocation requests are viewed by listing requests with a status of Completed . 6.2.2.4.2. Taking certificates off hold There can be instances when a certificate is inaccessible, and therefore should be treated as revoked, but that certificate can be recovered. For example, a user may have a personal email certificate stored on a flash drive which he accidentally leaves at home. The certificate is not compromised, but it should be temporarily suspended. That certificate can be temporarily revoked by putting it on hold (one of the options given when revoking a certificate, as in Section 6.2, "Revoking Certificates" ). At a later time - such as when the forgotten flash drive is picked up - that certificate can be taken off hold and is again active. Search for the on hold certificate, as in Section 6.2.2.2, "Searching for certificates (advanced)" . Scroll to the Revocation Information section, and set the Certificate is on hold revocation reason as the search criterion. In the results list, click the Off Hold button by the certificate to take off hold. 6.2.2.5. Managing the certificate revocation list Revoking a certificate notifies other users that the certificate is no longer valid. This notification is done by publishing a list of the revoked certificates, called the certificate revocation list (CRL), to an LDAP directory or to a flat file. This list is publicly available and ensures that revoked certificates are not misused. 6.2.2.5.1. Viewing or examining CRLs It may be necessary to view or examine a CRL, such as before manually updating a directory with the latest CRL. To view or display the CRL: Go to the Certificate Manager agent services page. Click Display Certificate Revocation List to display the form for viewing the CRL. Select the CRL to view. If the administrator has created multiple issuing points, these are listed in the Issuing point drop-down list. Otherwise, only the master CRL is shown. Choose how to display the CRL by selecting one of the options from the Display Type menu. The choices on this menu are as follows: Cached CRL. Views the CRL from the cache rather than from the CRL itself. This option displays results faster than viewing the entire CRL. Entire CRL. Retrieves and displays the entire CRL. CRL header. Retrieves and displays the CRL header only. Base 64 Encoded. Retrieves and displays the CRL in base-64 encoded format. Delta CRL. Retrieves and displays a delta CRL, which is a subset of the CRL showing only new revocations since the last CRL was published. This option is available only if delta CRL generation is enabled. To examine the selected CRL, click Display . 
The CRL appears in the browser window. This allows the agent to check whether a particular certificate (by its serial number) appears in the list and to note recent changes such as the total number of certificates revoked since the last update, the total number of certificates taken off hold since the last update, and the total number of certificates that expired since the last update. 6.2.2.5.2. Updating the CRL CRLs can be automatically updated if a schedule for automatic CRL generation is enabled, and the schedule can set the CRL to be generated at set time schedules or whenever there are certificate revocations. Likewise, CRLs can be also automatically published if CRL publishing is enabled. In some cases, the CRL may need to be updated manually, such as updating the list after the system has been down or removing expired certificates to reduce the file size. (Expired certificates do not need to be included in the CRL because they are already invalid because of the expiration date.) Only a Certificate Manager agent can manually update the CRL. To update the CRL manually: Open the Certificate Manager agent services page. Click Update Revocation List to display the form for updating the CRL. Figure 6.2. Update certificate revocation list Select the CRL issuing point which will update the CRL. There can be multiple issuing points configured for a single CA. Select the algorithm to use to sign the new CRL. Before choosing an algorithm, make sure that any system or network applications that need to read or view this CRL support the algorithm. SHA-256 with RSA. SHA-384 with RSA. SHA-512 with RSA. Note SHA1 is no longer supported. Before selecting an algorithm, make sure that the Certificate System has that algorithm enabled. The Certificate System administrator will have that information. Click Update to update the CRL with the latest certificate revocation information. 6.2.3. Performing revocation on an own certificate as a user using the Web UI Revoking a certificate invalidates it before its expiration date. This can be necessary if a certificate is lost, compromised, or no longer needed. 6.2.3.1. Revoking your user certificate Click the Revocation tab. Click the User Certificate link. Select the reason why the certificate is being revoked, and click Submit . Select the certificates to revoke from the list. 6.2.3.2. Checking whether a certificate is revoked Click the Retrieval tab. Click the Import Certificate Revocation List link. Select the radio button by Check whether the following certificate is included in CRL cache or Check whether the following certificate is listed by CRL , and enter the serial number of the certificate. Click the Submit button. A message is returned either saying that the certificate is not listed in any CRL or giving the information for the CRL which contains the certificate. 6.2.3.3. Downloading and importing CRLs Certificate revocation lists (CRLs) can be downloaded and installed in a web client, application, or machine. They can also be viewed to see what certificates have been revoked. Click the Retrieval tab. Click the Import Certificate Revocation List link. Select the radio button to view, download, or import the CRL. To import the CRL into the browser or download and save it, select the appropriate radio button. There are two options: to download/import the full CRL or the delta CRL. The delta CRL only imports/downloads the list of certificates which have been revoked since the last time the CRL was generated. 
To view the CRL, select Display the CRL information and select which CRL subset (called an issuing point ) to view. This shows the CRL information, including the number of certificates included in it. Click the Submit button. Save the file or approve the import operation. 6.3. Issuing CRLs Important The tabs for configuring CRL Issuing points on pkiconsole are broken and do not respond at "submit". Due to the deprecation of pkiconsole , please edit the CA's CS.cfg directly at installation time, or use the alternative pki cli method to configure the parameters. E.g.: For the relevant configuration parameter name/value pairs, see section 7.3.8. Configure support for CRL Distribution Point in the Planning, Installation, and Deployment Guide (Common Criteria Edition). The Certificate Manager uses its CA signing certificate key to sign CRLs. To use a separate signing key pair for CRLs, set up a CRL signing key and change the Certificate Manager configuration to use this key to sign CRLs. See Section 6.3.4, "Setting a CA to use a different certificate to sign CRLs" for more information. Set up CRL issuing points. An issuing point is already set up and enabled for a master CRL. Figure 6.3. Default CRL issuing point Additional issuing points for the CRLs can be created. See Section 6.3.1, "Configuring issuing points" for details. There are five types of CRLs the issuing points can create, depending on the options set when configuring the issuing point to define what the CRL will list: Master CRL contains the list of revoked certificates from the entire CA. ARL is an Authority Revocation List containing only revoked CA certificates. CRL with expired certificates includes revoked certificates that have expired in the CRL. CRL from certificate profiles determines the revoked certificates to include based on the profiles used to create the certificates originally. CRLs by reason code determines the revoked certificates to include based on the revocation reason code. Configure the CRLs for each issuing point. See Section 6.3.2, "Configuring CRLs for each issuing point" for details. Set up the CRL extensions which are configured for the issuing point. See Section 6.3.3, "Setting CRL extensions" for details. Set up the delta CRL for an issuing point by enabling extensions for that issuing point, DeltaCRLIndicator or CRLNumber . Set up the CRLDistributionPoint extension to include information about the issuing point. Set up publishing CRLs to files, an LDAP directory, or an OCSP responder. See Chapter 7, Publishing certificates and CRLs for details about setting up publishing. 6.3.1. Configuring issuing points Issuing points define which certificates are included in a new CRL. A master CRL issuing point is created by default for a master CRL containing a list of all revoked certificates for the Certificate Manager. To create a new issuing point, do the following: Open the Certificate System Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the Configuration tab, expand Certificate Manager from the left navigation menu. Then select CRL Issuing Points . To edit an issuing point, select the issuing point, and click Edit . 
The only parameters which can be edited are the name of the issuing point and whether the issuing point is enabled or disabled. To add an issuing point, click Add . The CRL Issuing Point Editor window opens. Figure 6.4. CRL issuing point editor NOTE If some fields do not appear large enough to read the content, expand the window by dragging one of the corners. Fill in the following fields: Enable . Enables the issuing point if selected; deselect to disable. CRL Issuing Point name . Gives the name for the issuing point; spaces are not allowed. Description . Describes the issuing point. Click OK . To view and configure a new issuing point, close the CA Console, then open the Console again. The new issuing point is listed below the CRL Issuing Points entry in the navigation tree. Configure CRLs for the new issuing point, and set up any CRL extensions that will be used with the CRL. See Section 6.3.2, "Configuring CRLs for each issuing point" for details on configuring an issuing point. See Section 6.3.3, "Setting CRL extensions" for details on setting up the CRL extensions. All the CRLs created appear on the Update Revocation List page of the agent services pages. 6.3.2. Configuring CRLs for each issuing point Information, such as the generation interval, the CRL version, CRL extensions, and the signing algorithm, can all be configured for the CRLs for the issuing point. The CRLs must be configured for each issuing point. Open the CA console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the navigation tree, select Certificate Manager , and then select CRL Issuing Points . Select the issuing point name below the Issuing Points entry. Configure how and how often the CRLs are updated by supplying information in the Update tab for the issuing point. This tab has two sections, Update Schema and Update Frequency . The Update Schema section has the following options: Enable CRL generation . This checkbox sets whether CRLs are generated for that issuing point. Generate full CRL every # delta(s) . This field sets how frequently CRLs are created in relation to the number of changes. Extend update time in full CRLs . This provides an option to set the nextUpdate field in the generated CRLs. The nextUpdate parameter shows the date when the CRL is issued, regardless of whether it is a full or delta CRL. When using a combination of full and delta CRLs, enabling Extend update time in full CRLs will make the nextUpdate parameter in a full CRL show when the full CRL will be issued. Otherwise, the nextUpdate parameter in the full CRL will show when the delta CRL will be issued, since the delta will be the CRL to be issued. The Update Frequency section sets the different intervals when the CRLs are generated and issued to the directory. Every time a certificate is revoked or released from hold . This sets the Certificate Manager to generate the CRL every time it revokes a certificate. The Certificate Manager attempts to issue the CRL to the configured directory whenever it is generated. Generating a CRL can be time consuming if the CRL is large. 
Configuring the Certificate Manager to generate CRLs every time a certificate is revoked may engage the server for a considerable amount of time; during this time, the server will not be able to update the directory with any changes it receives. This setting is not recommended for a standard installation. This option should be selected to test revocation immediately, such as testing whether the server issues the CRL to a flat file. Update the CRL at . This field sets a daily time when the CRL should be updated. To specify multiple times, enter a comma-separated list of times, such as 01:50,04:55,06:55 . To enter a schedule for multiple days, enter a comma-separated list to set the times within the same day, and then a semicolon-separated list to identify times for different days. For example, this sets revocation on Day 1 of the cycle at 1:50am, 4:55am, and 6:55am and then Day 2 at 2am, 5am, and 5pm: Update the CRL every . This checkbox enables generating CRLs at the interval set in the field. For example, to issue CRLs every day, select the checkbox, and enter 1440 in this field. update grace period . If the Certificate Manager updates the CRL at a specific frequency, the server can be configured with a grace period around the update time to allow time to create the CRL and issue it. For example, if the server is configured to update the CRL every 20 minutes with a grace period of 2 minutes, and if the CRL is updated at 16:00, the CRL is updated again at 16:18. update as this update extension . This field ( ca.crl.MasterCRL.nextAsThisUpdateExtension in CS.cfg ) configures the time interval for the full CRL update. It extends the update interval from the current update time. If the update as this update extension value is less than the update grace period, the update stays on course. However, if it is greater, the update time is set to the "extension" time plus the grace period. For example, if the CRL is set to update every 3 minutes with a 60-minute update as this update extension and a grace period of 1 minute, and the current update is at 5:30 AM, the next update will be scheduled for 6:31 AM (60 minutes after the current update, plus the grace period). IMPORTANT Due to a known issue, when setting full and delta Certificate Revocation List schedules, the Update CRL every time a certificate is revoked or released from hold option also requires you to fill out the two grace period settings. Thus, in order to select this option, you need to first select the Update CRL every option and enter a number in the update grace period # minutes box. The Cache tab sets whether caching is enabled and the cache frequency. Figure 6.5. CRL cache tab Enable CRL cache . This checkbox enables the cache, which is used to create delta CRLs. If the cache is disabled, delta CRLs will not be created. For more information about the cache, see Section 6.1, "About revoking certificates" . Update cache every . This field sets how frequently the cache is written to the internal database. Set to 0 to have the cache written to the database every time a certificate is revoked. Enable cache recovery . This checkbox allows the cache to be restored. Enable CRL cache testing . This checkbox enables CRL performance testing for specific CRL issuing points. CRLs generated with this option should not be used in deployed CAs, as CRLs issued for testing purposes contain data generated solely for the purpose of performance testing. The Format tab sets the formatting and contents of the CRLs that are created.
There are two sections, CRL Format and CRL Contents . Figure 6.6. CRL format tab The CRL Format section has two options: Revocation list signing algorithm is a drop-down list of the algorithms allowed for signing the CRL. Note SHA1 is no longer supported. Allow extensions for CRL v2 is a checkbox which enables CRL v2 extensions for the issuing point. If this is enabled, set the required CRL extensions described in Section 6.3.3, "Setting CRL extensions" . Note Extensions must be turned on to create delta CRLs. The CRL Contents section has four checkboxes which set what types of certificates to include in the CRL: Include expired certificates . This includes revoked certificates that have expired. If this is enabled, information about revoked certificates remains in the CRL after the certificate expires. If this is not enabled, information about revoked certificates is removed when the certificate expires. Include revoked certificates one extra time after their expiration. This includes the listing of revoked certificates one time after their expiration, ensuring revoked certificate information stays in the CRL until the next update, providing additional time for systems to recognize and manage their status, even after expiration. CA certificates only . This includes only CA certificates in the CRL. Selecting this option creates an Authority Revocation List (ARL), which lists only revoked CA certificates. Certificates issued according to profiles . This only includes certificates that were issued according to the listed profiles; to specify multiple profiles, enter a comma-separated list. Click Save . Extensions are allowed for this issuing point and can be configured. See Section 6.3.3, "Setting CRL extensions" for details. 6.3.3. Setting CRL extensions NOTE Extensions only need to be configured for an issuing point if the Allow extensions for CRL v2 checkbox is selected for that issuing point. When the issuing point is created, three extensions are automatically enabled: CRLReason , InvalidityDate , and CRLNumber . Other extensions are available but are disabled by default. These can be enabled and modified. For more information about the available CRL extensions, see Section B.4.2, "Standard X.509 v3 CRL extensions reference" . To configure CRL extensions, do the following: Open the CA console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the navigation tree, select Certificate Manager , and then select CRL Issuing Points . Select the issuing point name below the Issuing Points entry, and select the CRL Extension entry below the issuing point. The right pane shows the CRL Extensions Management tab, which lists configured extensions. Figure 6.7. CRL extensions To modify a rule, select it, and click Edit/View . Most extensions have two options, enabling them and setting whether they are critical. Some require more information. Supply all required values. See Section B.4.2, "Standard X.509 v3 CRL extensions reference" for complete information about each extension and the parameters for those extensions. Click OK . Click Refresh to see the updated status of all the rules. 6.3.4.
Setting a CA to use a different certificate to sign CRLs For instructions on how to configure this feature by editing the CS.cfg file, see 9.2.3.10 Setting a CA to Use a Different Certificate to Sign CRLs in the Planning, Installation and Deployment Guide (Common Criteria Edition) . 6.3.5. Generating CRLs from cache By default, CRLs are generated from the CA's internal database. However, revocation information can be collected as the certificates are revoked and kept in memory. This revocation information can then be used to update CRLs from memory. Bypassing the database searches that are required to generate the CRL from the internal database significantly improves performance. NOTE Because of the performance enhancement from generating CRLs from cache, enable the enableCRLCache parameter in most environments. However, the Enable CRL cache testing parameter should not be enabled in a production environment. Configuring CRL generation from cache in the console Open the console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the Configuration tab, expand the Certificate Manager folder and the CRL Issuing Points subfolder. Select the MasterCRL node. Select Enable CRL cache . Save the changes. Configuring CRL generation from cache in CS.cfg For instructions on how to configure this feature by editing the CS.cfg file, see 9.2.3.11 Configuring CRL Generation from Cache in CS.cfg in the Planning, Installation and Deployment Guide (Common Criteria Edition) . 6.4. Setting full and delta CRL schedules CRLs are generated periodically. Setting that period is touched on in the configuration in Section 6.3.2, "Configuring CRLs for each issuing point" . CRLs are issued according to a time-based schedule. CRLs can be issued every single time a certificate is revoked, at a specific time of day, or once every so-many minutes. Time-based CRL generation schedules apply to every CRL that is generated. There are two kinds of CRLs, full CRLs and delta CRLs. A full CRL has a record of every single revoked certificate, whereas delta CRLs contain only the certificates that have been revoked since the last CRL (delta or full) was generated. By default, full CRLs are generated at every specified interval in the schedule. It is possible to space out the time between generating full CRLs by generating interim delta CRLs. The generation interval is configured in the CRL schema , which sets the scheme for generating delta and full CRLs. If the interval is set to 3, for example, then the first CRL generated will be both a full and delta CRL, then the next two generation updates are delta CRLs only, and then the fourth interval is both a full and delta CRL again. In other words, every third generation interval has both a full CRL and a delta CRL. NOTE For delta CRLs to be generated in addition to full CRLs, the CRL cache must be enabled. 6.4.1. Configuring CRL update intervals in the console Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release.
Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. Open the console. In the Configuration tab, expand the Certificate Manager folder and the CRL Issuing Points subfolder. Select the MasterCRL node. Enter the required interval in the Generate full CRL every # delta(s) field. Set the update frequency, either by specifying the occasion of a certificate revocation, a cyclical interval, or specific times for the updates to occur: Select the Update CRL every time a certificate is revoked or released from hold checkbox. Select the Update CRL at checkbox and enter specific times separated by commas, such as 01:50,04:55,06:55 . Select the Update CRL every checkbox and enter the required interval, such as 240 . Save the changes. IMPORTANT The Update CRL every time a certificate is revoked or released from hold option also requires you to fill out the two grace period settings. This is a known issue, and the bug is being tracked in Red Hat Bugzilla. NOTE Schedule drift can occur when updating CRLs by interval. Typically, drift occurs as a result of manual updates and CA restarts. To prevent schedule drift, select the Update CRL at checkbox and enter a value. The interval updates will resynchronize with the Update CRL at value every 24 hours. Only one Update CRL at value will be accepted when updating CRLs by interval. 6.4.2. Configuring update intervals for CRLs in CS.cfg For instructions on how to configure this feature by editing the CS.cfg file, see 9.2.3.12 Configuring Update Intervals for CRLs in CS.cfg in the Planning, Installation and Deployment Guide (Common Criteria Edition) . 6.4.3. Configuring CRL generation schedules over multiple days By default, CRL generation schedules cover 24 hours. Also, by default, when full and delta CRLs are enabled, full CRLs occur at specific intervals in place of one or all delta CRLs, i.e., every third update. To set CRL generation schedules across multiple days, the list of times uses commas to separate times within the same day and a semicolon to delimit days, as shown in the first example entry below: This example updates CRLs on day one of the schedule at 01:00, 03:00, and 18:00, and on day two of the schedule at 02:00, 05:00, and 17:00. On day three the cycle starts again. NOTE The semicolon indicates a new day. Starting the list with a semicolon results in an initial day where no CRLs are generated. Likewise, ending the list with a semicolon adds a final day to the schedule where no CRLs are generated. Two semicolons together result in a day with no CRL generation. To set full CRL updates independent of delta updates, the list of times accepts time values prepended with an asterisk to indicate when full CRL updates should occur, as shown in the second example entry below: This example generates delta CRL updates on day one at 01:00, 03:00, and 18:00, with a full and delta CRL update at 23:00. On day two, delta CRLs are updated at 02:00, 05:00, and 21:00, with a full and delta CRL update at 23:30. On day three, the cycle starts again.
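For reference, the two schedules described above correspond to the following ca.crl.MasterCRL.dailyUpdates entries in CS.cfg ; these values are taken from the examples in this document, and the times should be adjusted to your own schedule. First example (commas within a day, semicolons between days): ca.crl.MasterCRL.dailyUpdates=01:00,03:00,18:00;02:00,05:00,17:00 Second example (asterisks marking the full CRL updates): ca.crl.MasterCRL.dailyUpdates=01:00,03:00,18:00,*23:00;02:00,05:00,21:00,*23:30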
NOTE The semicolon and asterisk syntax works both in the pkiconsole and when manually editing the CS.cfg file. 6.5. Enabling Revocation Checking Revocation checking means that a Certificate System subsystem verifies that a certificate is both valid and not revoked when an agent or administrator attempts to access the instance's secure interfaces. This leverages a local OCSP service (either a CA's internal OCSP service or a separate OCSP responder) to check the revocation status of the certificate. OCSP configuration is covered in Section 6.6, "Using the Online Certificate Status Protocol (OCSP) responder" . See 9.4.1.2 Enabling Automatic Revocation Checking on the CA in the Planning, Installation and Deployment Guide (Common Criteria Edition) . See 9.4.1.3 Enabling Certificate Revocation Checking for Subsystems in the Planning, Installation and Deployment Guide (Common Criteria Edition) . 6.6. Using the Online Certificate Status Protocol (OCSP) responder 6.6.1. Setting up the OCSP responder Red Hat Certificate System offers two methods of CRL publishing to be consumed by an OCSP instance external to the CA: Direct CA->OCSP CRL publishing Indirect publishing with CA->LDAP, then OCSP<-LDAP By default, once you set up an OCSP instance, the first CRL publishing method is automatically set up as well, which allows direct CA->OCSP CRL publishing. Note The second publishing method is the one evaluated for Common Criteria. For an example setup, see 7.4.7 "Configuration for CRL publishing" in the Planning, Installation and Deployment Guide (Common Criteria Edition) . If you select a CA within the security domain when configuring the Online Certificate Status Manager, there is no extra step required to configure the OCSP service. The CA's CRL publishing is set up automatically, and its signing certificate is automatically added and trusted in the Online Certificate Status Manager's certificate database. However, if you select a non-security domain CA, then you must manually configure the OCSP service after configuring the Online Certificate Status Manager. NOTE Not every CA within the security domain to which the OCSP Manager belongs is automatically trusted by the OCSP Manager when it is configured. Every CA in the certificate chain of the CA configured in the CA panel is trusted automatically by the OCSP Manager. Other CAs within the security domain but not in the certificate chain must be trusted manually. To set up the Online Certificate Status Manager for a Certificate Manager outside the security domain: Configure the CRLs for every CA that will publish to an OCSP responder. Enable publishing, set up a publisher, and set publishing rules in every CA that the OCSP service will handle ( Chapter 7, Publishing certificates and CRLs ). This is not necessary if the Certificate Managers publish to an LDAP directory and the Online Certificate Status Manager is set up to read from that directory. The certificate profiles must be configured to include the Authority Information Access extension, pointing to the location at which the Certificate Manager listens for OCSP service requests ( Section 6.6.4, "Enabling the Certificate Manager's internal OCSP service" ). Configure the OCSP Responder. Configure the Revocation Info store ( Section 6.6.2.2, "Configure the Revocation Info Stores: internal database" and Section 6.6.2.3, "Configure the Revocation Info Stores: LDAP directory" ).
Identify every publishing Certificate Manager to the OCSP responder ( Section 6.6.2, "Identifying the CA to the OCSP Responder" ). If necessary, configure the trust settings for the CA which signed the OCSP signing certificate ( Section 13.6, "Changing the trust settings of a CA certificate" ). Restart both subsystems after configuring them. Verify that the CA is properly connected to the OCSP responder ( Section 6.6.2.1, "Verify Certificate Manager and Online Certificate Status Manager connection" ). 6.6.2. Identifying the CA to the OCSP Responder Before a CA is configured to publish CRLs to the Online Certificate Status Manager, the CA must be identified to the Online Certificate Status Manager by storing the CA signing certificate in the internal database of the Online Certificate Status Manager. The Certificate Manager signs CRLs with the key pair associated with this certificate; the Online Certificate Status Manager verifies the signature against the stored certificate. NOTE If a CA within the security domain is selected when the Online Certificate Status Manager is configured, there is no extra step required to configure the Online Certificate Status Manager to recognize the CA; the CA signing certificate is automatically added and trusted in the Online Certificate Status Manager's certificate database. However, if a non-security domain CA is selected, then the CA signing certificate must be manually added to the certificate database after the Online Certificate Status Manager is configured. It is not necessary to import the certificate chain for a CA which will publish its CRL to the Online Certificate Status Manager. The only time a certificate chain is needed for the OCSP service is if the CA connects to the Online Certificate Status Manager through SSL/TLS authentication when it publishes its CRL. Otherwise, the Online Certificate Status Manager does not need to have the complete certificate chain. However, the Online Certificate Status Manager must have the certificate which signed the CRL, either a CA signing certificate or a separate CRL signing certificate, in its certificate database. The OCSP service verifies the CRL by comparing the certificate which signed the CRL against the certificates in its database, not against a certificate chain. If both a root CA and one of its subordinate CAs publish CRLs to the Online Certificate Status Manager, the Online Certificate Status Manager needs the CA signing certificate of both CAs. To import the CA or CRL signing certificate which is used to sign the CRLs the CA is publishing to the Online Certificate Status Manager, do the following: Get the Certificate Manager's base-64 CA signing certificate from the end-entities page of the CA. Open the Online Certificate Status Manager agent page. The URL has the format https:// hostname:SSLport /ocsp/agent/ocsp . In the left frame, click Add Certificate Authority . In the form, paste the encoded CA signing certificate inside the text area labeled Base 64 encoded certificate (including the header and footer) . To verify that the certificate is added successfully, in the left frame, click List Certificate Authorities . The resulting form should show information about the new CA. The This Update , Next Update , and Requests Served Since Startup fields should show a value of zero (0). 6.6.2.1. Verify Certificate Manager and Online Certificate Status Manager connection When the Certificate Manager is restarted, it tries to connect to the Online Certificate Status Manager's SSL/TLS port.
To verify that the Certificate Manager did indeed communicate with the Online Certificate Status Manager, check the This Update and Next Update fields, which should be updated with the appropriate timestamps of the CA's last communication with the Online Certificate Status Manager. The Requests Served Since Startup field should still show a value of zero (0) since no client has tried to query the OCSP service for certificate revocation status. 6.6.2.2. Configure the Revocation Info Stores: internal database The Online Certificate Status Manager stores each Certificate Manager's CRL in its internal database and uses it as the CRL store for verifying the revocation status of certificates. To change the configuration that the Online Certificate Status Manager uses for storing the CRLs in its internal database: Open the Online Certificate Status Manager Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the Configuration tab, select Online Certificate Status Manager , and then select Revocation Info Stores . The right pane shows the two repositories the Online Certificate Status Manager can use; by default, it uses the CRL in its internal database. Select the defStore , and click Edit/View . Edit the defStore values. notFoundAsGood. Sets the OCSP service to return an OCSP response of GOOD if the certificate in question cannot be found in any of the CRLs. If this is not selected, the response is UNKNOWN, which, when encountered by a client, results in an error message. byName. The OCSP Responder only supports the basic response type, which includes the ID of the OCSP Responder making the response. The ResponderID field within the basic response type is determined by the value of the ocsp.store.defStore.byName parameter. If byName parameter is true or is missing, the OCSP authority signing certificate subject name is used as the ResponderID field of the OCSP response. If byName parameter is false, the OCSP authority signing certificate key hash will be the ResponderID field of the OCSP response. includeNextUpdate. Includes the timestamp of the CRL update time. 6.6.2.3. Configure the Revocation Info Stores: LDAP directory Although the OCSP Manager stores the CA CRLs in its internal database by default, you can configure it to use a CRL published to an LDAP directory instead. When configuring the OCSP manager to use an LDAP directory, you need to disable the default direct CA->OCSP CRL publishing method. To do so, please refer to 9.2.3.17 "Disabling the direct CA-OCSP CRL publishing" in the Planning, Installation and Deployment Guide (Common Criteria Edition) . Important By default, if the ldapStore method is enabled, the OCSP user interface does not check the certificate status. However, the OCSP subsystem can take advantage of having the frequently updated CRL to verify its peer's certificate without reaching out to another OCSP system. To do so, please refer to 9.2.3.18 "Enabling Client Certificate Verification using latest CRL within OCSP" in the Planning, Installation and Deployment Guide (Common Criteria Edition) .
To configure the Online Certificate Status Manager to use an LDAP directory: Open the Online Certificate Status Manager Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the Configuration tab, select Online Certificate Status Manager , and then select Revocation Info Stores . The right pane shows the two repositories the Online Certificate Status Manager can use; by default, it uses the CRL in its internal database. To use the CRLs in LDAP directories, click Set Default to enable the ldapStore option. Select ldapStore , and click Edit/View . Set the ldapStore parameters. numConns. The total number of LDAP directories the OCSP service should check. By default, this is set to 0. Setting this value shows the corresponding number of host , port , baseDN , and refreshInSec fields. host. The fully-qualified DNS hostname of the LDAP directory. port. The non-SSL/TLS port of the LDAP directory. baseDN. The DN to start searching for the CRL. For example, O=example.com . refreshInSec. How often the connection is refreshed. The default is 86400 seconds (daily). caCertAttr. Leave the default value, cACertificate;binary , as it is. It is the attribute to which the Certificate Manager publishes its CA signing certificate. crlAttr. Leave the default value, certificateRevocationList;binary , as it is. It is the attribute to which the Certificate Manager publishes CRLs. notFoundAsGood. Sets the OCSP service to return an OCSP response of GOOD if the certificate in question cannot be found in any of the CRLs. If this is not selected, the response is UNKNOWN, which, when encountered by a client, results in an error message. byName. The OCSP Responder only supports the basic response type, which includes the ID of the OCSP Responder making the response. The ResponderID field within the basic response type is determined by the value of the ocsp.store.defStore.byName parameter. If byName parameter is true or is missing, the OCSP authority signing certificate subject name is used as the ResponderID field of the OCSP response. If byName parameter is false, the OCSP authority signing certificate key hash will be the ResponderID field of the OCSP response. includeNextUpdate. The Online Certificate Status Manager can include the timestamp of the CRL update time. 6.6.2.4. Configure the default OCSP response signing algorithm To configure the Online Certificate Status Manager to use a different OCSP response signing algorithm: Open the Online Certificate Status Manager Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the Configuration tab, select Online Certificate Status Manager . In General Settings , select the desired signing algorithm from the pulldown list. For example SHA384withRSA . Note SHA1 algorithms are no longer supported. 6.6.2.5. 
Testing the OCSP service setup Test whether the Certificate Manager can service OCSP requests properly by doing the following: Turn on revocation checking in the browser or client. Request a certificate from the CA that has been enabled for OCSP services. Approve the request. Download the certificate to the browser or client. Make sure the CA is trusted by the browser or client. Check the status of Certificate Manager's internal OCSP service. Open the CA agent services page, and select the OCSP Services link. Test the independent Online Certificate Status Manager subsystem. Open the Online Certificate Status Manager agent services page, and click the List Certificate Authorities link. The page should show information about the Certificate Manager configured to publish CRLs to the Online Certificate Status Manager. The page also summarizes the Online Certificate Status Manager's activity since it was last started. Revoke the certificate. Verify the certificate in the browser or client. The server should return that the certificate has been revoked. Check the Certificate Manager's OCSP-service status again to verify that these things happened: The browser sent an OCSP query to the Certificate Manager. The Certificate Manager sent an OCSP response to the browser. The browser used that response to validate the certificate and returned its status, that the certificate could not be verified. Check the independent OCSP service subsystem again to verify that these things happened: The Certificate Manager published the CRL to the Online Certificate Status Manager. The browser sent an OCSP query to the Online Certificate Status Manager. The Online Certificate Status Manager sent an OCSP response to the browser. The browser used that response to validate the certificate and returned its status, that the certificate could not be verified. 6.6.3. Setting the response for bad serial numbers OCSP responders check the revocation status and expiration date of a certificate before determining whether the certificate is valid; by default, the OCSP does not validate other information on the certificate. The notFoundAsGood parameter sets how the OCSP handles a certificate with an invalid serial number. This parameter is enabled by default, which means that if a certificate is present with a bad serial number but the certificate is otherwise valid, the OCSP returns a status of GOOD for the certificate. To have the OCSP check and reject certificates based on bad serial numbers as well as revocation status, change the notFoundAsGood setting. In that case, the OCSP returns a status of UNKNOWN for a certificate with a bad serial number. The client interprets that as an error and can respond accordingly. Open the Online Certificate Status Manager Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the Configuration tab, select Online Certificate Status Manager , and then select Revocation Info Stores . Select the defStore , and click Edit/View . Edit the notFoundAsGood value. Selecting the checkbox means that the OCSP returns a value of GOOD even if the serial number on the certificate is bad.
Unselecting the checkbox means that the OCSP sends a value of UNKNOWN , which the client can interpret as an error. Restart the OCSP Manager. 6.6.4. Enabling the Certificate Manager's internal OCSP service The Certificate Manager has a built-in OCSP service, which can be used by OCSP-compliant clients to query the Certificate Manager directly about the revocation status of the certificate. When the Certificate Manager is installed, an OCSP signing certificate is issued and the OCSP service is turned on by default. This OCSP signing certificate is used to sign all responses to OCSP service requests. Since the internal OCSP service checks the status of certificates stored in the Certificate Manager's internal database, publishing does not have to be configured to use this service. Clients can query the OCSP service through the non-SSL/TLS end-entity port of the Certificate Manager. When queried for the revocation status of a certificate, the Certificate Manager searches its internal database for the certificate, checks its status, and responds to the client. Since the Certificate Manager has real-time status of all certificates it has issued, this method of revocation checking is the most accurate. Every CA's built-in OCSP service is turned on at installation. However, to use this service, the CA needs to issue certificates with the Authority Information Access extension. Go to the CA's end-entities page, for example, https://server.example.com:8443/ca/ee/ca . Find the CA signing certificate. Look for the Authority Info Access extension in the certificate, and note the Location URIName value, such as https://server.example.com:8443/ca/ocsp . Update the enrollment profiles to enable the Authority Information Access extension, and set the Location parameter to the Certificate Manager's URI. For information on editing the certificate profiles, see Section 3.2, "Setting up certificate profiles" . Restart the CA instance. Note To disable the Certificate Manager's internal OCSP service, edit the CA's CS.cfg file and change the value of the ca.ocsp parameter to false . 6.6.5. Submitting OCSP requests using the OCSPClient program The OCSPClient program can be used for performing OCSP requests. For example: OCSPClient -h server.example.com -p 8080 -d /etc/pki/pki-tomcat/alias -c "caSigningCert cert-pki-ca" --serial 2 This returns output such as CertID.serialNumber=2 CertStatus=Good . The OCSPClient command can be used with the following command-line options: Table 6.1. Available OCSPClient options Option Description -d database Security database location (default: current directory) -h hostname OCSP server hostname (default: example.com) -p port OCSP server port number (default: 8080) -t path OCSP service path (default: /ocsp/ee/ocsp) -c nickname CA certificate nickname (default: CA Signing Certificate) -n times Number of submissions (default: 1) --serial serial_number Serial number of certificate to be checked --input input_file Input file containing DER-encoded OCSP request --output output_file Output file to store DER-encoded OCSP response -v, --verbose Run in verbose mode --help Show help message 6.6.6. Submitting OCSP requests using the GET method OCSP requests which are smaller than 255 bytes can be submitted to the Online Certificate Status Manager using a GET method, as described in RFC 6960. To submit OCSP requests over GET: Generate an OCSP request for the certificate whose status is being queried. For example, generate a base64-encoded request with openssl ocsp and append it to the OCSP end-entity URL. Paste the URL in the address bar of a web browser to return the status information. The browser must be able to handle OCSP requests and downloads the response file to the system. The possible statuses are GOOD , REVOKED , and UNKNOWN .
Parse the response using the openssl tool: Alternatively, run the OCSP request from the command line by using a tool such as curl to send the request and openssl to parse the response. For example: Generate an OCSP request for the certificate whose status is being queried. Connect to the OCSP Manager using curl to send the OCSP request. Parse the response using the openssl tool:
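The following commands, taken from the examples in this document, show that sequence end to end; the hostname rhcs10.example.com , the certificate files, and the serial number are illustrative and should be replaced with values from your own deployment. Generate and base64-encode the request: openssl ocsp -CAfile ca.pem -issuer issuer.pem -url https://rhcs10.example.com:22443/ocsp/ee/ocsp -serial 16836380 -reqout - | base64 | tr -d '\n' Send the request with curl, appending the encoded request to the OCSP end-entity URL and saving the DER-encoded response: curl --cacert cert.pem https://rhcs10.example.com:22443/ocsp/ee/ocsp/<base64_encoded_request> > ocspresp.der Parse the saved response: openssl ocsp -respin ocspresp.der -resp_text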
[ "#numRequests: Total number of PKCS10 requests or CRMF requests. numRequests=1 #output: full path for the CMC request in binary format output=/home/user_name/cmc.revoke.userSigned.req #tokenname: name of token where user signing cert can be found #(default is internal) tokenname=internal #nickname: nickname for user signing certificate which will be used #to sign the CMC full request. nickname=signer_user_certificate #dbdir: directory for cert9.db, key4.db and pkcs11.txt dbdir=/home/user_name/.dogtag/nssdb/ #password: password for cert9.db which stores the user signing #certificate and keys password=myPass #format: request format, either pkcs10 or crmf. format=pkcs10 ## revocation parameters revRequest.enable=true revRequest.serial=45 revRequest.reason=unspecified revRequest.comment=user test revocation revRequest.issuer=issuer revRequest.sharedSecret=shared_secret", "CMCRequest /home/user_name/cmc-request.cfg", "#host: host name for the http server host=>server.example.com #port: CA port number port=8443 #secure: true for secure connection, false for nonsecure connection secure=true #input: full path for the enrollment request, the content must be #in binary format input=/home/user_name/cmc.revoke.userSigned.req #output: full path for the response in binary format output=/home/user_name/cmc.revoke.userSigned.resp #tokenname: name of token where SSL client authentication certificate #can be found (default is internal) #This parameter will be ignored if secure=false tokenname=internal #dbdir: directory for cert9.db, key4.db and pkcs11.txt #This parameter will be ignored if secure=false dbdir=/home/user_name/.dogtag/nssdb/ #clientmode: true for client authentication, false for no client #authentication. This parameter will be ignored if secure=false clientmode=true #password: password for cert9.db #This parameter will be ignored if secure=false and clientauth=false password=password #nickname: nickname for client certificate #This parameter will be ignored if clientmode=false nickname=signer_user_certificate", "HttpClient /home/user_name/cmc-submit.cfg", "CMCRevoke -d/path/to/AgentCertDb -pPasswordToDb -nNickname -iIssuerName -sDecimalSerialNumber -mReasonToRevoke -cComment*", "CMCRevoke -d\"~jsmith/.mozilla/firefox/\" -pPass -n\"AgentCert\" -i\"cn=agentAuthMgr\" -s22 -m0 -c\"test comment\"", "https://server.example.com:8443/ca/ee/ca", "These two reasons are not the only ones why a certificate would need revoked; there are six reasons available for revoking a certificate.", "pki-server ca-config-set -i rhcs10-RSA-RootCA <param name> <param value>", "pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca", "pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca", "01:50,04:55,06:55;02:00,05:00,17:00", "pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca", "pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca", "Interval 1, 2, 3, 4, 5, 6, 7 Full CRL 1 4 7 Delta CRL 1, 2, 3, 4, 5, 6, 7", "pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca", "ca.crl.MasterCRL.dailyUpdates=01:00,03:00,18:00;02:00,05:00,17:00", "ca.crl.MasterCRL.dailyUpdates=01:00,03:00,18:00,*23:00;02:00,05:00,21:00,*23:30", "pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ocsp", "pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ocsp", "pkiconsole -d nssdb -n 'optional 
client cert nickname' https://server.example.com:8443/ocsp", "pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ocsp", "pki-server restart instance-name", "https://server.example.com:8443/ca/ee/ca", "pki-server restart instance-name", "ca.ocsp=false", "OCSPClient -h server.example.com -p 8080 -d /etc/pki/pki-tomcat/alias -c \"caSigningCert cert-pki-ca\" --serial 2 CertID.serialNumber=2 CertStatus=Good", "openssl ocsp -CAfile ca.pem -issuer issuer.pem -url https://rhcs10.example.com:22443/ocsp/ee/ocsp -serial 16836380 -reqout - | base64 | tr -d '\\n' MEIwQDA+MDwwOjAJBgUrDgMCGgUABBT4cyABkyiCIhU4JpmIBewdDnn8ZgQUbyBZ44kgy35o7xW5BMzM8FTvyTwCAQE=", "https://rhcs10.example.com:22443/ocsp/ee/ocsp/MEIwQDA+MDwwOjAJBgUrDgMCGgUABBT4cyABkyiCIhU4JpmIBewdDnn8ZgQUbyBZ44kgy35o7xW5BMzM8FTvyTwCAQE=", "openssl ocsp -respin <ocsp_response_file> -resp_text", "openssl ocsp -CAfile ca.pem -issuer issuer.pem -url https://rhcs10.example.com:22443/ocsp/ee/ocsp -serial 16836380 -reqout - | base64 | tr -d '\\n' MEIwQDA+MDwwOjAJBgUrDgMCGgUABBT4cyABkyiCIhU4JpmIBewdDnn8ZgQUbyBZ44kgy35o7xW5BMzM8FTvyTwCAQE=", "curl --cacert cert.pem https://rhcs10.example.com:22443/ocsp/ee/ocsp/MEIwQDA+MDwwOjAJBgUrDgMCGgUABBT4cyABkyiCIhU4JpmIBewdDnn8ZgQUbyBZ44kgy35o7xW5BMzM8FTvyTwCAQE= > ocspresp.der", "openssl ocsp -respin ocspresp.der -resp_text" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/revocation_and_crls
Chapter 8. Installing a private cluster on IBM Cloud
Chapter 8. Installing a private cluster on IBM Cloud In OpenShift Container Platform version 4.17, you can install a private cluster into an existing VPC. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 8.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Create a DNS zone using IBM Cloud(R) DNS Services and specify it as the base domain of the cluster. For more information, see "Using IBM Cloud(R) DNS Services to configure DNS resolution". Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 8.3. Private clusters in IBM Cloud To create a private cluster on IBM Cloud(R), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the IBM Cloud(R) APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public ingress A public DNS zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private DNS zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 8.3.1. Limitations Private clusters on IBM Cloud(R) are subject only to the limitations associated with the existing VPC that was used for cluster deployment. 8.4. 
About using a custom VPC In OpenShift Container Platform 4.17, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 8.4.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 8.4.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to the existing VPC. As part of the installation, specify the following in the install-config.yaml file: The name of the existing resource group that contains the VPC and subnets ( networkResourceGroupName ) The name of the existing VPC ( vpcName ) The subnets that were created for control plane machines and compute machines ( controlPlaneSubnets and computeSubnets ) Note Additional installer-provisioned cluster resources are deployed to a separate resource group ( resourceGroupName ). You can specify this resource group before installing the cluster. If undefined, a new resource group is created for the cluster. To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 8.4.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 8.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a bastion host on your cloud network or a machine that has access to the network through a VPN. For more information about private cluster installation requirements, see "Private clusters". Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 8.8. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 8.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Cloud(R) 8.9.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 8.9.2. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 8.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 gx3d-160x1792x8h100 mx2-8x64 mx2d-4x32 mx3d-4x40 ox2-8x64 ux2d-2x56 vx2d-4x56 8.9.3. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it.
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 12 resourceGroupName: eu-gb-example-cluster-rg 13 networkResourceGroupName: eu-gb-example-existing-network-rg 14 vpcName: eu-gb-example-network-1 15 controlPlaneSubnets: 16 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 17 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: Internal 18 pullSecret: '{"auths": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 1 8 12 19 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 13 The name of an existing resource group. All installer-provisioned cluster resources are deployed to this resource group. If undefined, a new resource group is created for the cluster. 14 Specify the name of the resource group that contains the existing virtual private cloud (VPC). The existing VPC and subnets should be in this resource group. The cluster will be installed to this VPC. 15 Specify the name of an existing VPC. 16 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 17 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 18 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster. The default value is External . 20 Enables or disables FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 8.9.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.10. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object.
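As a quick sanity check, not part of the documented procedure, you can list the extracted files and confirm that each one defines a CredentialsRequest; the directory name below is the same placeholder used in the extract command:

ls <path_to_directory_for_credentials_requests>
grep -l "kind: CredentialsRequest" <path_to_directory_for_credentials_requests>/*.yaml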
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 8.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
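If you want to follow progress from a second terminal while the installer runs, you can tail the log file mentioned above; a simple sketch, using the same installation directory placeholder:

tail -f <installation_directory>/.openshift_install.log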
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.12. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 8.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 8.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 8.15. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
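Beyond oc whoami , a few additional read-only checks can confirm that the exported kubeconfig points at a healthy cluster before you move on to customization; these are standard oc subcommands, shown here only as an optional sketch:

oc get nodes
oc get clusterversion
oc get clusteroperators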
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IC_API_KEY=<api_key>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 12 resourceGroupName: eu-gb-example-cluster-rg 13 networkResourceGroupName: eu-gb-example-existing-network-rg 14 vpcName: eu-gb-example-network-1 15 controlPlaneSubnets: 16 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 17 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: Internal 18 pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install 
complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_ibm_cloud/installing-ibm-cloud-private
Chapter 15. Provisioning Cloud Instances on Google Compute Engine
Chapter 15. Provisioning Cloud Instances on Google Compute Engine Red Hat Satellite can interact with Google Compute Engine (GCE), including creating new virtual machines and controlling their power management states. You can only use golden images supported by Red Hat with Satellite for creating GCE hosts. Prerequisites You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in the Content Management Guide . Provide an activation key for host registration. For more information, see Creating An Activation Key in the Content Management guide. In your GCE project, configure a service account with the necessary IAM Compute role. For more information, see Compute Engine IAM roles in the GCE documentation. In your GCE project-wide metadata, set the enable-oslogin to FALSE . For more information, see Enabling or disabling OS Login in the GCE documentation. Optional: If you want to use Puppet with GCE hosts, navigate to Administer > Settings > Puppet and enable the Use UUID for certificates setting to configure Puppet to use consistent Puppet certificate IDs. Based on your needs, associate a finish or user_data provisioning template with the operating system you want to use. For more information about provisioning templates, see Provisioning Templates in Provisioning Hosts . 15.1. Installing Google GCE Plugin Install the Google GCE plugin to attach a GCE compute resource provider to Satellite. This allows you to manage and deploy hosts to GCE. Procedure Install the Google GCE compute resource provider on your Satellite Server: Optional: In the Satellite web UI, navigate to Administer > About and select the Compute Resources tab to verify the installation of the Google GCE plugin. 15.2. Adding a Google GCE Connection to Satellite Server Use this procedure to add Google Compute Engine (GCE) as a compute resource in Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In Google GCE, generate a service account key in JSON format. Copy the file from your local machine to Satellite Server: On Satellite Server, change the owner for your service account key to the foreman user: On Satellite Server, configure permissions for your service account key to ensure that the file is readable: On Satellite Server, restore SELinux context for your service account key: In the Satellite web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource . In the Name field, enter a name for the compute resource. From the Provider list, select Google . Optional: In the Description field, enter a description for the resource. In the Google Project ID field, enter the project ID. In the Client Email field, enter the client email. In the Certificate Path field, enter the path to the service account key. For example, /usr/share/foreman/ gce_key .json . Click Load Zones to populate the list of zones from your GCE environment. From the Zone list, select the GCE zone to use. Click Submit . CLI procedure In Google GCE, generate a service account key in JSON format. Copy the file from your local machine to Satellite Server: On Satellite Server, change the owner for your service account key to the foreman user: On Satellite Server, configure permissions for your service account key to ensure that the file is readable: On Satellite Server, restore SELinux context for your service account key: Use the hammer compute-resource create command to add a GCE compute resource to Satellite: 15.3.
Adding Google Compute Engine Images to Satellite Server To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Satellite Server. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and click the name of the Google Compute Engine connection. Click Create Image . In the Name field, enter a name for the image. From the Operating System list, select the base operating system of the image. From the Architecture list, select the operating system architecture. In the Username field, enter the SSH user name for image access. Specify a user other than root , because the root user cannot connect to a GCE instance using SSH keys. The username must begin with a letter and consist of lowercase letters and numbers. From the Image list, select an image from the Google Compute Engine compute resource. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. With the --username option, specify a user other than root , because the root user cannot connect to a GCE instance using SSH keys. The username must begin with a letter and consist of lowercase letters and numbers. 15.4. Adding Google GCE Details to a Compute Profile Use this procedure to add Google GCE hardware settings to a compute profile. When you create a host on Google GCE using this compute profile, these settings are automatically populated. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles . In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile , enter a Name , and click Submit . Click the name of the GCE compute resource. From the Machine Type list, select the machine type to use for provisioning. From the Image list, select the image to use for provisioning. From the Network list, select the Google GCE network to use for provisioning. Optional: Select the Associate Ephemeral External IP checkbox to assign a dynamic ephemeral IP address that Satellite uses to communicate with the host. This public IP address changes when you reboot the host. If you need a permanent IP address, reserve a static public IP address on Google GCE and attach it to the host. In the Size (GB) field, enter the size of the storage to create on the host. Click Submit to save the compute profile. CLI procedure Create a compute profile to use with the Google GCE compute resource: Add GCE details to the compute profile: 15.5. Creating Image-based Hosts on Google Compute Engine In Satellite, you can use Google Compute Engine provisioning to create hosts from an existing image. The new host entry triggers the Google Compute Engine server to create the instance using the pre-existing image as a basis for the new volume. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Click the Organization and Location tabs to ensure that the provisioning context is automatically set to the current context. From the Host Group list, select the host group that you want to use to populate the form. 
From the Deploy on list, select the Google Compute Engine connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings. From the Lifecycle Environment list, select the environment. Click the Interfaces tab and click Edit on the host's interface. Verify that the fields are automatically populated, particularly the following items: The Name from the Host tab becomes the DNS name . The MAC address field is blank. Google Compute Engine assigns a MAC address to the host during provisioning. Satellite Server automatically assigns an IP address for the new host. The Domain field is populated with the required domain. The Managed , Primary , and Provision options are automatically selected for the first interface on the host. If not, select them. Click the Operating System tab, and confirm that all fields automatically contain values. Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host entry. CLI procedure Create the host with the hammer host create command and include --provision-method image . Replace the values in the following example with the appropriate values for your environment. For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command. 15.6. Deleting a VM on Google GCE You can delete VMs running on Google GCE on your Satellite Server. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your Google GCE provider. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the Google GCE compute resource while retaining any associated hosts within Satellite. If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually. 15.7. Uninstalling Google GCE Plugin If you have previously installed the Google GCE plugin but don't use it anymore to manage and deploy hosts to GCE, you can uninstall it from your Satellite Server. Procedure Uninstall the GCE compute resource provider from your Satellite Server: Optional: In the Satellite web UI, navigate to Administer > About and select the Available Providers tab to verify the removal of the Google GCE plugin.
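If you prefer the CLI for the optional verification steps above, you can also confirm which compute resource providers and connections Satellite still knows about; this is a small sketch using standard hammer subcommands, and the compute resource name matches the placeholder used in the earlier examples:

hammer compute-resource list
hammer compute-resource info --name "My_GCE_Compute_Resource"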
[ "satellite-installer --enable-foreman-compute-gce", "scp gce_key.json [email protected]:/usr/share/foreman/gce_key.json", "chown foreman /usr/share/foreman/ gce_key .json", "chmod 0600 /usr/share/foreman/ gce_key .json", "restorecon -vv /usr/share/foreman/ gce_key .json", "scp gce_key.json [email protected]:/usr/share/foreman/gce_key.json", "chown foreman /usr/share/foreman/ gce_key .json", "chmod 0600 /usr/share/foreman/ gce_key .json", "restorecon -vv /usr/share/foreman/ gce_key .json", "hammer compute-resource create --email \" My_GCE_Email \" --key-path \" Path_To_My_GCE_Key.json \" --name \" My_GCE_Compute_Resource \" --project \" My_GCE_Project_ID \" --provider \"gce\" --zone \" My_Zone \"", "hammer compute-resource image create --name ' gce_image_name ' --compute-resource ' gce_cr ' --operatingsystem-id 1 --architecture-id 1 --uuid ' 3780108136525169178 ' --username ' admin '", "hammer compute-profile create --name My_GCE_Compute_Profile", "hammer compute-profile values create --compute-attributes \"machine_type=f1-micro,associate_external_ip=true,network=default\" --compute-profile \" My_GCE_Compute_Profile \" --compute-resource \" My_GCE_Compute_Resource \" --volume \" size_gb=20 \"", "hammer host create --architecture x86_64 --compute-profile \" gce_profile_name \" --compute-resource \" My_GCE_Compute_Resource \" --image \" My_GCE_Image \" --interface \"type=interface,domain_id=1,managed=true,primary=true,provision=true\" --location \" My_Location \" --name \" GCE_VM \" --operatingsystem \" My_Operating_System \" --organization \" My_Organization \" --provision-method 'image' --puppet-ca-proxy-id 1 --puppet-environment-id 1 --puppet-proxy-id 1 --root-password \" My_Root_Password \"", "yum remove -y foreman-gce satellite-installer --no-enable-foreman-compute-gce" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/provisioning_cloud_instances_on_google_compute_engine_provisioning
Chapter 11. Secondary networks
Chapter 11. Secondary networks You can configure the Network Observability Operator to collect and enrich network flow data from secondary networks, such as SR-IOV and OVN-Kubernetes. Prerequisites Access to an OpenShift Container Platform cluster with an additional network interface, such as a secondary interface or an L2 network. 11.1. Configuring monitoring for SR-IOV interface traffic In order to collect traffic from a cluster with a Single Root I/O Virtualization (SR-IOV) device, you must set the FlowCollector spec.agent.ebpf.privileged field to true . Then, the eBPF agent monitors other network namespaces in addition to the host network namespaces, which are monitored by default. When a pod with a virtual function (VF) interface is created, a new network namespace is created. With SRIOVNetwork policy IPAM configurations specified, the VF interface is migrated from the host network namespace to the pod network namespace. Prerequisites Access to an OpenShift Container Platform cluster with an SR-IOV device. The SRIOVNetwork custom resource (CR) spec.ipam configuration must be set with an IP address from the range that the interface lists or from other plugins. Procedure In the web console, navigate to Operators > Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster and then select the YAML tab. Configure the FlowCollector custom resource. A sample configuration is as follows: Configure FlowCollector for SR-IOV monitoring apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: privileged: true 1 1 The spec.agent.ebpf.privileged field value must be set to true to enable SR-IOV monitoring. Additional resources Creating an additional SR-IOV network attachment with the CNI VRF plugin . 11.2. Configuring virtual machine (VM) secondary network interfaces for Network Observability You can observe network traffic on an OpenShift Virtualization setup by identifying eBPF-enriched network flows coming from VMs that are connected to secondary networks, such as through OVN-Kubernetes. Network flows coming from VMs that are connected to the default internal pod network are automatically captured by Network Observability. Procedure Get information about the virtual machine launcher pod by running the following command. This information is used in Step 5: USD oc get pod virt-launcher-<vm_name>-<suffix> -n <namespace> -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.129.2.39" ], "mac": "0a:58:0a:81:02:27", "default": true, "dns": {} }, { "name": "my-vms/l2-network", 1 "interface": "podc0f69e19ba2", 2 "ips": [ 3 "10.10.10.15" ], "mac": "02:fb:f8:00:00:12", 4 "dns": {} }] name: virt-launcher-fedora-aqua-fowl-13-zr2x9 namespace: my-vms spec: # ... status: # ... 1 The name of the secondary network. 2 The network interface name of the secondary network. 3 The list of IPs used by the secondary network. 4 The MAC address used for the secondary network. In the web console, navigate to Operators > Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster and then select the YAML tab.
Configure FlowCollector based on the information you found from the additional network investigation: apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: ebpf: privileged: true 1 processor: advanced: secondaryNetworks: - index: 2 - MAC 3 name: my-vms/l2-network 4 # ... 1 Ensure that the eBPF agent is in privileged mode so that flows are collected for secondary interfaces. 2 Define the fields to use for indexing the virtual machine launcher pods. It is recommended to use the MAC address as the indexing field to get network flow enrichment for secondary interfaces. If you have overlapping MAC addresses between pods, then additional indexing fields, such as IP and Interface , can be added for accurate enrichment. 3 If your additional network information has a MAC address, add MAC to the field list. 4 Specify the name of the network found in the k8s.v1.cni.cncf.io/network-status annotation. Usually <namespace>/<network_attachment_definition_name>. Observe VM traffic: Navigate to the Network Traffic page. Filter by Source IP using your virtual machine IP found in the k8s.v1.cni.cncf.io/network-status annotation. View both Source and Destination fields, which should be enriched, and identify the VM launcher pods and the VM instance as owners.
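After saving either of the FlowCollector examples above, you can confirm from the CLI that privileged mode is active and check the eBPF agent pods. This is an optional sketch; the agent namespace shown ( netobserv-privileged ) is an assumption based on the default netobserv namespace used in these examples, so adjust it if your deployment differs:

oc get flowcollector cluster -o jsonpath='{.spec.agent.ebpf.privileged}{"\n"}'
oc get pods -n netobserv-privileged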
[ "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: privileged: true 1", "oc get pod virt-launcher-<vm_name>-<suffix> -n <namespace> -o yaml", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.39\" ], \"mac\": \"0a:58:0a:81:02:27\", \"default\": true, \"dns\": {} }, { \"name\": \"my-vms/l2-network\", 1 \"interface\": \"podc0f69e19ba2\", 2 \"ips\": [ 3 \"10.10.10.15\" ], \"mac\": \"02:fb:f8:00:00:12\", 4 \"dns\": {} }] name: virt-launcher-fedora-aqua-fowl-13-zr2x9 namespace: my-vms spec: status:", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: ebpf: privileged: true 1 processor: advanced: secondaryNetworks: - index: 2 - MAC 3 name: my-vms/l2-network 4" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_observability/network-observability-secondary-networks
14.4.5. Server Security Mode (User-Level Security)
14.4.5. Server Security Mode (User-Level Security) Server security mode was previously used when Samba was not capable of acting as a domain member server. Note It is highly recommended not to use this mode because it has numerous security drawbacks. In smb.conf , the following directives enable Samba to operate in server security mode:
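For reference, the directives appear below in the configuration excerpt. If you must work with such a configuration, you can at least validate the syntax before reloading Samba; this is a minimal sketch that writes the directives to a scratch file and checks them with the standard testparm utility (the domain controller name is a placeholder):

cat > /tmp/server-mode-test.conf <<'EOF'
[GLOBAL]
encrypt passwords = Yes
security = server
password server = "NetBIOS_of_Domain_Controller"
EOF
# -s suppresses the interactive prompt and dumps the parsed configuration
testparm -s /tmp/server-mode-test.conf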
[ "[GLOBAL] encrypt passwords = Yes security = server password server = \"NetBIOS_of_Domain_Controller\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-server-security-mode
Chapter 3. Differences between OpenShift Container Platform 3 and 4
Chapter 3. Differences between OpenShift Container Platform 3 and 4 OpenShift Container Platform 4.10 introduces architectural changes and enhancements. The procedures that you used to manage your OpenShift Container Platform 3 cluster might not apply to OpenShift Container Platform 4. For information on configuring your OpenShift Container Platform 4 cluster, review the appropriate sections of the OpenShift Container Platform documentation. For information on new features and other notable technical changes, review the OpenShift Container Platform 4.10 release notes . It is not possible to upgrade your existing OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. You must start with a new OpenShift Container Platform 4 installation. Tools are available to assist in migrating your control plane settings and application workloads. 3.1. Architecture With OpenShift Container Platform 3, administrators individually deployed Red Hat Enterprise Linux (RHEL) hosts, and then installed OpenShift Container Platform on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates. OpenShift Container Platform 4 represents a significant change in the way that OpenShift Container Platform clusters are deployed and managed. OpenShift Container Platform 4 includes new technologies and functionality, such as Operators, machine sets, and Red Hat Enterprise Linux CoreOS (RHCOS), which are core to the operation of the cluster. This technology shift enables clusters to self-manage some functions previously performed by administrators. This also ensures platform stability and consistency, and simplifies installation and scaling. For more information, see OpenShift Container Platform architecture . Immutable infrastructure OpenShift Container Platform 4 uses Red Hat Enterprise Linux CoreOS (RHCOS), which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. RHCOS is an immutable container host, rather than a customizable operating system like RHEL. RHCOS enables OpenShift Container Platform 4 to manage and automate the deployment of the underlying container host. RHCOS is a part of OpenShift Container Platform, which means that everything runs inside a container and is deployed using OpenShift Container Platform. In OpenShift Container Platform 4, control plane nodes must run RHCOS, ensuring that full-stack automation is maintained for the control plane. This makes rolling out updates and upgrades a much easier process than in OpenShift Container Platform 3. For more information, see Red Hat Enterprise Linux CoreOS (RHCOS) . Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. Operators ease the operational complexity of running another piece of software. They watch over your environment and use the current state to make decisions in real time. Advanced Operators are designed to upgrade and react to failures automatically. For more information, see Understanding Operators . 3.2. Installation and upgrade Installation process To install OpenShift Container Platform 3.11, you prepared your Red Hat Enterprise Linux (RHEL) hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster. In OpenShift Container Platform 4.10, you use the OpenShift installation program to create a minimum set of resources required for a cluster.
After the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, Red Hat Enterprise Linux CoreOS (RHCOS) systems are managed by the Machine Config Operator (MCO) that runs in the OpenShift Container Platform cluster. For more information, see Installation process . If you want to add Red Hat Enterprise Linux (RHEL) worker machines to your OpenShift Container Platform 4.10 cluster, you use an Ansible playbook to join the RHEL worker machines after the cluster is running. For more information, see Adding RHEL compute machines to an OpenShift Container Platform cluster . Infrastructure options In OpenShift Container Platform 3.11, you installed your cluster on infrastructure that you prepared and maintained. In addition to providing your own infrastructure, OpenShift Container Platform 4 offers an option to deploy a cluster on infrastructure that the OpenShift Container Platform installation program provisions and the cluster maintains. For more information, see OpenShift Container Platform installation overview . Upgrading your cluster In OpenShift Container Platform 3.11, you upgraded your cluster by running Ansible playbooks. In OpenShift Container Platform 4.10, the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes. You can easily upgrade your cluster by using the web console or by using the oc adm upgrade command from the OpenShift CLI and the Operators will automatically upgrade themselves. If your OpenShift Container Platform 4.10 cluster has RHEL worker machines, then you will still need to run an Ansible playbook to upgrade those worker machines. For more information, see Updating clusters . 3.3. Migration considerations Review the changes and other considerations that might affect your transition from OpenShift Container Platform 3.11 to OpenShift Container Platform 4. 3.3.1. Storage considerations Review the following storage changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.10. Local volume persistent storage Local storage is only supported by using the Local Storage Operator in OpenShift Container Platform 4.10. It is not supported to use the local provisioner method from OpenShift Container Platform 3.11. For more information, see Persistent storage using local volumes . FlexVolume persistent storage The FlexVolume plugin location changed from OpenShift Container Platform 3.11. The new location in OpenShift Container Platform 4.10 is /etc/kubernetes/kubelet-plugins/volume/exec . Attachable FlexVolume plugins are no longer supported. For more information, see Persistent storage using FlexVolume . Container Storage Interface (CSI) persistent storage Persistent storage using the Container Storage Interface (CSI) was Technology Preview in OpenShift Container Platform 3.11. OpenShift Container Platform 4.10 ships with several CSI drivers . You can also install your own driver. For more information, see Persistent storage using the Container Storage Interface (CSI) . Red Hat OpenShift Data Foundation OpenShift Container Storage 3, which is available for use with OpenShift Container Platform 3.11, uses Red Hat Gluster Storage as the backing storage. Red Hat OpenShift Data Foundation 4, which is available for use with OpenShift Container Platform 4, uses Red Hat Ceph Storage as the backing storage. 
For more information, see Persistent storage using Red Hat OpenShift Data Foundation and the interoperability matrix article. Unsupported persistent storage options Support for the following persistent storage options from OpenShift Container Platform 3.11 has changed in OpenShift Container Platform 4.10: GlusterFS is no longer supported. CephFS as a standalone product is no longer supported. Ceph RBD as a standalone product is no longer supported. If you used one of these in OpenShift Container Platform 3.11, you must choose a different persistent storage option for full support in OpenShift Container Platform 4.10. For more information, see Understanding persistent storage . 3.3.2. Networking considerations Review the following networking changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.10. Network isolation mode The default network isolation mode for OpenShift Container Platform 3.11 was ovs-subnet , though users frequently switched to use ovs-multitenant . The default network isolation mode for OpenShift Container Platform 4.10 is controlled by a network policy. If your OpenShift Container Platform 3.11 cluster used the ovs-subnet or ovs-multitenant mode, it is recommended to switch to a network policy for your OpenShift Container Platform 4.10 cluster. Network policies are supported upstream, are more flexible, and they provide the functionality that ovs-multitenant does. If you want to maintain the ovs-multitenant behavior while using a network policy in OpenShift Container Platform 4.10, follow the steps to configure multitenant isolation using network policy . For more information, see About network policy . 3.3.3. Logging considerations Review the following logging changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.10. Deploying OpenShift Logging OpenShift Container Platform 4 provides a simple deployment mechanism for OpenShift Logging, by using a Cluster Logging custom resource. For more information, see Installing OpenShift Logging . Aggregated logging data You cannot transition your aggregate logging data from OpenShift Container Platform 3.11 into your new OpenShift Container Platform 4 cluster. For more information, see About OpenShift Logging . Unsupported logging configurations Some logging configurations that were available in OpenShift Container Platform 3.11 are no longer supported in OpenShift Container Platform 4.10. For more information on the explicitly unsupported logging cases, see Maintenance and support . 3.3.4. Security considerations Review the following security changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.10. Unauthenticated access to discovery endpoints In OpenShift Container Platform 3.11, an unauthenticated user could access the discovery endpoints (for example, /api/* and /apis/* ). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in OpenShift Container Platform 4.10. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications as this can expose internal cluster components to the external network.
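If you decide to accept that risk, one hedged illustration of such an RBAC change is to bind the built-in system:discovery cluster role to the system:unauthenticated group; the binding name is arbitrary, and you should apply something like this only after reviewing your own exposure:

oc apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: unauthenticated-discovery   # example name, choose your own
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
EOF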
Identity providers Configuration for identity providers has changed for OpenShift Container Platform 4, including the following notable changes: The request header identity provider in OpenShift Container Platform 4.10 requires mutual TLS, whereas in OpenShift Container Platform 3.11 it did not. The configuration of the OpenID Connect identity provider was simplified in OpenShift Container Platform 4.10. It now obtains data, which previously had to be specified in OpenShift Container Platform 3.11, from the provider's /.well-known/openid-configuration endpoint. For more information, see Understanding identity provider configuration . OAuth token storage format Newly created OAuth HTTP bearer tokens no longer match the names of their OAuth access token objects. The object names are now a hash of the bearer token and are no longer sensitive. This reduces the risk of leaking sensitive information. 3.3.5. Monitoring considerations Review the following monitoring changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.10. Alert for monitoring infrastructure availability The default alert that triggers to ensure the availability of the monitoring infrastructure was called DeadMansSwitch in OpenShift Container Platform 3.11. This was renamed to Watchdog in OpenShift Container Platform 4. If you had PagerDuty integration set up with this alert in OpenShift Container Platform 3.11, you must set up the PagerDuty integration for the Watchdog alert in OpenShift Container Platform 4. For more information, see Applying custom Alertmanager configuration .
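As a hedged illustration only (the authoritative steps are in the linked Applying custom Alertmanager configuration document), the Watchdog routing fragment that you would add to the Alertmanager configuration, which in OpenShift Container Platform 4 lives in the alertmanager-main secret in the openshift-monitoring namespace, might look like the following; the receiver name and integration key are placeholders:

cat > alertmanager-watchdog-route.yaml <<'EOF'
route:
  receiver: Default
  routes:
  - match:
      alertname: Watchdog
    receiver: watchdog-pagerduty
    repeat_interval: 5m
receivers:
- name: Default
- name: watchdog-pagerduty
  pagerduty_configs:
  - service_key: <pagerduty_integration_key>
EOF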
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/migrating_from_version_3_to_4/planning-migration-3-4
Chapter 2. NVIDIA GPU architecture
Chapter 2. NVIDIA GPU architecture NVIDIA supports the use of graphics processing unit (GPU) resources on OpenShift Container Platform. OpenShift Container Platform is a security-focused and hardened Kubernetes platform developed and supported by Red Hat for deploying and managing Kubernetes clusters at scale. OpenShift Container Platform includes enhancements to Kubernetes so that users can easily configure and use NVIDIA GPU resources to accelerate workloads. The NVIDIA GPU Operator uses the Operator framework within OpenShift Container Platform to manage the full lifecycle of NVIDIA software components required to run GPU-accelerated workloads. These components include the NVIDIA drivers (to enable CUDA), the Kubernetes device plugin for GPUs, the NVIDIA Container Toolkit, automatic node tagging using GPU feature discovery (GFD), DCGM-based monitoring, and others. Note The NVIDIA GPU Operator is only supported by NVIDIA. For more information about obtaining support from NVIDIA, see Obtaining Support from NVIDIA . 2.1. NVIDIA GPU prerequisites A working OpenShift cluster with at least one GPU worker node. Access to the OpenShift cluster as a cluster-admin to perform the required steps. OpenShift CLI ( oc ) is installed. The node feature discovery (NFD) Operator is installed and a nodefeaturediscovery instance is created. 2.2. NVIDIA GPU enablement The following diagram shows how the GPU architecture is enabled for OpenShift: Figure 2.1. NVIDIA GPU enablement Note MIG is only supported with A30, A100, A100X, A800, AX800, H100, and H800. 2.2.1. GPUs and bare metal You can deploy OpenShift Container Platform on an NVIDIA-certified bare metal server but with some limitations: Control plane nodes can be CPU nodes. Worker nodes must be GPU nodes, provided that AI/ML workloads are executed on these worker nodes. In addition, the worker nodes can host one or more GPUs, but they must be of the same type. For example, a node can have two NVIDIA A100 GPUs, but a node with one A100 GPU and one T4 GPU is not supported. The NVIDIA Device Plugin for Kubernetes does not support mixing different GPU models on the same node. When using OpenShift, note that one or three or more servers are required. Clusters with two servers are not supported. The single server deployment is called single node OpenShift (SNO), and using this configuration results in a non-high availability OpenShift environment. You can choose one of the following methods to access the containerized GPUs: GPU passthrough Multi-Instance GPU (MIG) Additional resources Red Hat OpenShift on Bare Metal Stack 2.2.2. GPUs and virtualization Many developers and enterprises are moving to containerized applications and serverless infrastructures, but there is still a lot of interest in developing and maintaining applications that run on virtual machines (VMs). Red Hat OpenShift Virtualization provides this capability, enabling enterprises to incorporate VMs into containerized workflows within clusters. You can choose one of the following methods to connect the worker nodes to the GPUs: GPU passthrough to access and use GPU hardware within a virtual machine (VM). GPU (vGPU) time-slicing, when GPU compute capacity is not saturated by workloads. Additional resources NVIDIA GPU Operator with OpenShift Virtualization 2.2.3. GPUs and vSphere You can deploy OpenShift Container Platform on an NVIDIA-certified VMware vSphere server that can host different GPU types.
An NVIDIA GPU driver must be installed in the hypervisor if vGPU instances are used by the VMs. For VMware vSphere, this host driver is provided in the form of a VIB file. The maximum number of vGPUs that can be allocated to worker node VMs depends on the version of vSphere: vSphere 7.0: maximum 4 vGPU per VM vSphere 8.0: maximum 8 vGPU per VM Note vSphere 8.0 introduced support for multiple full or fractional heterogeneous profiles associated with a VM. You can choose one of the following methods to attach the worker nodes to the GPUs: GPU passthrough for accessing and using GPU hardware within a virtual machine (VM) GPU (vGPU) time-slicing, when not all of the GPU is needed Similar to bare metal deployments, one or three or more servers are required. Clusters with two servers are not supported. Additional resources OpenShift Container Platform on VMware vSphere with NVIDIA vGPUs 2.2.4. GPUs and Red Hat KVM You can use OpenShift Container Platform on an NVIDIA-certified kernel-based virtual machine (KVM) server. Similar to bare-metal deployments, one or three or more servers are required. Clusters with two servers are not supported. However, unlike bare-metal deployments, you can use different types of GPUs in the server. This is because you can assign these GPUs to different VMs that act as Kubernetes nodes. The only limitation is that a Kubernetes node must have the same set of GPU types at its own level. You can choose one of the following methods to access the containerized GPUs: GPU passthrough for accessing and using GPU hardware within a virtual machine (VM) GPU (vGPU) time-slicing when not all of the GPU is needed To enable the vGPU capability, a special driver must be installed at the host level. This driver is delivered as an RPM package. This host driver is not required at all for GPU passthrough allocation. Additional resources How To Deploy OpenShift Container Platform 4.13 on KVM 2.2.5. GPUs and CSPs You can deploy OpenShift Container Platform to one of the major cloud service providers (CSPs): Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Two modes of operation are available: a fully managed deployment and a self-managed deployment. In a fully managed deployment, everything is automated by Red Hat in collaboration with the CSP. You can request an OpenShift instance through the CSP web console, and the cluster is automatically created and fully managed by Red Hat. You do not have to worry about node failures or errors in the environment. Red Hat is fully responsible for maintaining the uptime of the cluster. The fully managed services are available on AWS and Azure. For AWS, the OpenShift service is called ROSA (Red Hat OpenShift Service on AWS). For Azure, the service is called Azure Red Hat OpenShift. In a self-managed deployment, you are responsible for instantiating and maintaining the OpenShift cluster. Red Hat provides the openshift-install utility to support the deployment of the OpenShift cluster in this case. The self-managed services are available globally to all CSPs. It is important that this compute instance is a GPU-accelerated compute instance and that the GPU type matches the list of supported GPUs from NVIDIA AI Enterprise. For example, T4, V100, and A100 are part of this list. You can choose one of the following methods to access the containerized GPUs: GPU passthrough to access and use GPU hardware within a virtual machine (VM). GPU (vGPU) time slicing when the entire GPU is not required.
Additional resources Red Hat Openshift in the Cloud 2.2.6. GPUs and Red Hat Device Edge Red Hat Device Edge provides access to MicroShift. MicroShift provides the simplicity of a single-node deployment with the functionality and services you need for resource-constrained (edge) computing. Red Hat Device Edge meets the needs of bare-metal, virtual, containerized, or Kubernetes workloads deployed in resource-constrained environments. You can enable NVIDIA GPUs on containers in a Red Hat Device Edge environment. You use GPU passthrough to access the containerized GPUs. Additional resources How to accelerate workloads with NVIDIA GPUs on Red Hat Device Edge 2.3. GPU sharing methods Red Hat and NVIDIA have developed GPU concurrency and sharing mechanisms to simplify GPU-accelerated computing on an enterprise-level OpenShift Container Platform cluster. Applications typically have different compute requirements that can leave GPUs underutilized. Providing the right amount of compute resources for each workload is critical to reduce deployment cost and maximize GPU utilization. Concurrency mechanisms for improving GPU utilization exist that range from programming model APIs to system software and hardware partitioning, including virtualization. The following list shows the GPU concurrency mechanisms: Compute Unified Device Architecture (CUDA) streams Time-slicing CUDA Multi-Process Service (MPS) Multi-instance GPU (MIG) Virtualization with vGPU Consider the following GPU sharing suggestions when using the GPU concurrency mechanisms for different OpenShift Container Platform scenarios: Bare metal vGPU is not available. Consider using MIG-enabled cards. VMs vGPU is the best choice. Older NVIDIA cards with no MIG on bare metal Consider using time-slicing. VMs with multiple GPUs and you want passthrough and vGPU Consider using separate VMs. Bare metal with OpenShift Virtualization and multiple GPUs Consider using pass-through for hosted VMs and time-slicing for containers. Additional resources Improving GPU Utilization 2.3.1. CUDA streams Compute Unified Device Architecture (CUDA) is a parallel computing platform and programming model developed by NVIDIA for general computing on GPUs. A stream is a sequence of operations that executes in issue-order on the GPU. CUDA commands are typically executed sequentially in a default stream and a task does not start until a preceding task has completed. Asynchronous processing of operations across different streams allows for parallel execution of tasks. A task issued in one stream runs before, during, or after another task is issued into another stream. This allows the GPU to run multiple tasks simultaneously in no prescribed order, leading to improved performance. Additional resources Asynchronous Concurrent Execution 2.3.2. Time-slicing GPU time-slicing interleaves workloads scheduled on overloaded GPUs when you are running multiple CUDA applications. You can enable time-slicing of GPUs on Kubernetes by defining a set of replicas for a GPU, each of which can be independently distributed to a pod to run workloads on. Unlike multi-instance GPU (MIG), there is no memory or fault isolation between replicas, but for some workloads this is better than not sharing at all. Internally, GPU time-slicing is used to multiplex workloads from replicas of the same underlying GPU. You can apply a cluster-wide default configuration for time-slicing. You can also apply node-specific configurations. 
For example, you can apply a time-slicing configuration only to nodes with Tesla T4 GPUs and not modify nodes with other GPU models. You can combine these two approaches by applying a cluster-wide default configuration and then labeling nodes to give those nodes a node-specific configuration. 2.3.3. CUDA Multi-Process Service CUDA Multi-Process Service (MPS) allows a single GPU to use multiple CUDA processes. The processes run in parallel on the GPU, eliminating saturation of the GPU compute resources. MPS also enables concurrent execution, or overlapping, of kernel operations and memory copying from different processes to enhance utilization. Additional resources CUDA MPS 2.3.4. Multi-instance GPU Using Multi-instance GPU (MIG), you can split GPU compute units and memory into multiple MIG instances. Each of these instances represents a standalone GPU device from a system perspective and can be connected to any application, container, or virtual machine running on the node. The software that uses the GPU treats each of these MIG instances as an individual GPU. MIG is useful when you have an application that does not require the full power of an entire GPU. The MIG feature of the new NVIDIA Ampere architecture enables you to split your hardware resources into multiple GPU instances, each of which is available to the operating system as an independent CUDA-enabled GPU. NVIDIA GPU Operator version 1.7.0 and higher provides MIG support for the A100 and A30 Ampere cards. These GPU instances are designed to support up to seven multiple independent CUDA applications so that they operate completely isolated with dedicated hardware resources. Additional resources NVIDIA Multi-Instance GPU User Guide 2.3.5. Virtualization with vGPU Virtual machines (VMs) can directly access a single physical GPU using NVIDIA vGPU. You can create virtual GPUs that can be shared by VMs across the enterprise and accessed by other devices. This capability combines the power of GPU performance with the management and security benefits provided by vGPU. Additional benefits provided by vGPU includes proactive management and monitoring for your VM environment, workload balancing for mixed VDI and compute workloads, and resource sharing across multiple VMs. Additional resources Virtual GPUs 2.4. NVIDIA GPU features for OpenShift Container Platform NVIDIA Container Toolkit NVIDIA Container Toolkit enables you to create and run GPU-accelerated containers. The toolkit includes a container runtime library and utilities to automatically configure containers to use NVIDIA GPUs. NVIDIA AI Enterprise NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software optimized, certified, and supported with NVIDIA-Certified systems. NVIDIA AI Enterprise includes support for Red Hat OpenShift Container Platform. The following installation methods are supported: OpenShift Container Platform on bare metal or VMware vSphere with GPU Passthrough. OpenShift Container Platform on VMware vSphere with NVIDIA vGPU. GPU Feature Discovery NVIDIA GPU Feature Discovery for Kubernetes is a software component that enables you to automatically generate labels for the GPUs available on a node. GPU Feature Discovery uses node feature discovery (NFD) to perform this labeling. The Node Feature Discovery Operator (NFD) manages the discovery of hardware features and configurations in an OpenShift Container Platform cluster by labeling nodes with hardware-specific information. 
NFD labels the host with node-specific attributes, such as PCI cards, kernel, OS version, and so on. You can find the NFD Operator in the Operator Hub by searching for "Node Feature Discovery". NVIDIA GPU Operator with OpenShift Virtualization Up until this point, the GPU Operator only provisioned worker nodes to run GPU-accelerated containers. Now, the GPU Operator can also be used to provision worker nodes for running GPU-accelerated virtual machines (VMs). You can configure the GPU Operator to deploy different software components to worker nodes depending on which GPU workload is configured to run on those nodes. GPU Monitoring dashboard You can install a monitoring dashboard to display GPU usage information on the cluster Observe page in the OpenShift Container Platform web console. GPU utilization information includes the number of available GPUs, power consumption (in watts), temperature (in degrees Celsius), utilization (in percent), and other metrics for each GPU. Additional resources NVIDIA-Certified Systems NVIDIA AI Enterprise NVIDIA Container Toolkit Enabling the GPU Monitoring Dashboard MIG Support in OpenShift Container Platform Time-slicing NVIDIA GPUs in OpenShift Deploy GPU Operators in a disconnected or airgapped environment Node Feature Discovery Operator
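As a concrete illustration of the time-slicing configuration described in section 2.3.2, the sketch below shows the general shape of a device plugin ConfigMap that the NVIDIA GPU Operator can consume. The ConfigMap name, the namespace, the tesla-t4 key, and the replica count are assumptions chosen for this example; the sharing schema follows the pattern documented in the "Time-slicing NVIDIA GPUs in OpenShift" reference listed in the additional resources above, so verify the exact fields against that guide for your Operator version.

apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config        # hypothetical name used for this sketch
  namespace: nvidia-gpu-operator   # assumes the GPU Operator's usual namespace
data:
  tesla-t4: |-
    version: v1
    sharing:
      timeSlicing:
        resources:
        - name: nvidia.com/gpu
          replicas: 4              # each physical GPU is advertised as 4 schedulable replicas

After the ConfigMap exists, the GPU Operator's ClusterPolicy is pointed at it and, for node-specific behavior, nodes are labeled to select the matching entry, as described in the time-slicing reference above.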
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/hardware_accelerators/nvidia-gpu-architecture
Chapter 2. Host Security
Chapter 2. Host Security 2.1. Why Host Security Matters When deploying virtualization technologies, you must ensure that the host physical machine and its operating system cannot be compromised. In this case the host is a Red Hat Enterprise Linux system that manages the system, devices, memory and networks as well as all guest virtual machines. If the host physical machine is insecure, all guest virtual machines in the system are vulnerable. There are several ways to enhance security on systems using virtualization. You or your organization should create a Deployment Plan . This plan needs to contain the following: Operating specifications Specifies which services are needed on your guest virtual machines Specifies the host physical servers as well as what support is required for these services Here are a few security issues to consider while developing a deployment plan: Run only necessary services on host physical machines. The fewer processes and services running on the host physical machine, the higher the level of security and performance. Enable SELinux on the hypervisor. Read Section 2.1.2, "SELinux and Virtualization" for more information on using SELinux and virtualization. Use a firewall to restrict traffic to the host physical machine. You can setup a firewall with default-reject rules that will help secure the host physical machine from attacks. It is also important to limit network-facing services. Do not allow normal users to access the host operating system. If the host operating system is privileged, granting access to unprivileged accounts may compromise the level of security. 2.1.1. Security Concerns when Adding Block Devices to a Guest When using host block devices, partitions, and logical volumes (LVMs) it is important to follow these guidelines: The host physical machine should not use filesystem labels to identify file systems in the fstab file, the initrd file or on the kernel command line. Doing so presents a security risk if guest virtual machines have write access to whole partitions or LVM volumes, because a guest virtual machine could potentially write a filesystem label belonging to the host physical machine, to its own block device storage. Upon reboot of the host physical machine, the host physical machine could then mistakenly use the guest virtual machine's disk as a system disk, which would compromise the host physical machine system. It is preferable to use the UUID of a device to identify it in the fstab file, the initrd file or on the kernel command line. While using UUIDs is still not completely secure on certain file systems, a similar compromise with UUID is significantly less feasible. Guest virtual machines should not be given write access to whole disks or block devices (for example, /dev/sdb ). Guest virtual machines with access to whole block devices may be able to modify volume labels, which can be used to compromise the host physical machine system. Use partitions (for example, /dev/sdb1 ) or LVM volumes to prevent this problem. If you are using raw access to partitions, for example /dev/sdb1 or raw disks such as /dev/sdb, you should configure LVM to only scan disks that are safe, using the global_filter setting. Note When the guest virtual machine only has access to image files, these issues are not relevant. 2.1.2. SELinux and Virtualization Security Enhanced Linux was developed by the NSA with assistance from the Linux community to provide stronger security for Linux. 
SELinux limits an attacker's abilities and works to prevent many common security exploits such as buffer overflow attacks and privilege escalation. It is because of these benefits that all Red Hat Enterprise Linux systems should run with SELinux enabled and in enforcing mode. Procedure 2.1. Creating and mounting a logical volume on a guest virtual machine with SELinux enabled Create a logical volume. This example creates a 5 gigabyte logical volume named NewVolumeName on the volume group named volumegroup . This example also assumes that there is enough disk space. You may have to create additional storage on a network device and give the guest access to it. This information is discussed in more detail in the Red Hat Enterprise Linux Virtualization Administration Guide . Format the NewVolumeName logical volume with a file system that supports extended attributes, such as ext3. Create a new directory for mounting the new logical volume. This directory can be anywhere on your file system. It is advised not to put it in important system directories ( /etc , /var , /sys ) or in home directories ( /home or /root ). This example uses a directory called /virtstorage Mount the logical volume. Set the SELinux type for the folder you just created. If the targeted policy is used (targeted is the default policy) the command appends a line to the /etc/selinux/targeted/contexts/files/file_contexts.local file which makes the change persistent. The appended line may resemble this: Run the command to change the type of the mount point ( /virtstorage ) and all files under it to virt_image_t (the restorecon and setfiles commands read the files in /etc/selinux/targeted/contexts/files/ ). Note Create a new file (using the touch command) on the file system. Verify the file has been relabeled using the following command: The output shows that the new file has the correct attribute, virt_image_t . 2.1.3. SELinux This section contains topics to consider when using SELinux with your virtualization deployment. When you deploy system changes or add devices, you must update your SELinux policy accordingly. To configure an LVM volume for a guest virtual machine, you must modify the SELinux context for the respective underlying block device and volume group. Make sure that you have installed the policycoreutils-python package ( yum install policycoreutils-python ) before running the command. KVM and SELinux The following table shows the SELinux Booleans which affect KVM when launched by libvirt. KVM SELinux Booleans SELinux Boolean Description virt_use_comm Allow virt to use serial/parallel communication ports. virt_use_fusefs Allow virt to read fuse files. virt_use_nfs Allow virt to manage NFS files. virt_use_samba Allow virt to manage CIFS files. virt_use_sanlock Allow sanlock to manage virt lib files. virt_use_sysfs Allow virt to manage device configuration (PCI). virt_use_xserver Allow virtual machine to interact with the xserver. virt_use_usb Allow virt to use USB devices. 2.1.4. Virtualization Firewall Information Various ports are used for communication between guest virtual machines and corresponding management utilities. Note Any network service on a guest virtual machine must have the applicable ports open on the guest virtual machine to allow external access. If a network service on a guest virtual machine is firewalled it will be inaccessible. Always verify the guest virtual machine's network configuration first. ICMP requests must be accepted. ICMP packets are used for network testing. 
You cannot ping guest virtual machines if the ICMP packets are blocked. Port 22 should be open for SSH access and the initial installation. Ports 80 or 443 (depending on the security settings on the RHEV Manager) are used by the vdsm-reg service to communicate information about the host physical machine. Ports 5634 to 6166 are used for guest virtual machine console access with the SPICE protocol. Ports 49152 to 49216 are used for migrations with KVM. Migration may use any port in this range depending on the number of concurrent migrations occurring. Enabling IP forwarding ( net.ipv4.ip_forward = 1 ) is also required for shared bridges and the default bridge. Note that installing libvirt enables this variable so it will be enabled when the virtualization packages are installed unless it was manually disabled. Note Note that enabling IP forwarding is not required for physical bridge devices. When a guest virtual machine is connected through a physical bridge, traffic only operates at a level that does not require IP configuration such as IP forwarding.
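The port list above can be translated into host firewall rules. The following is a minimal sketch using iptables on the host physical machine; it assumes the default iptables service, an INPUT chain with no conflicting rules, and the default-reject behavior recommended earlier in this chapter. Adapt interfaces, source restrictions, and the choice between ports 80 and 443 to your environment.

# Accept ICMP echo requests, which are used for network testing
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
# SSH access and the initial installation
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# vdsm-reg communication (80 or 443, depending on the RHEV Manager security settings)
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Guest virtual machine console access with the SPICE protocol
iptables -A INPUT -p tcp --dport 5634:6166 -j ACCEPT
# Migrations with KVM
iptables -A INPUT -p tcp --dport 49152:49216 -j ACCEPT
# Default-reject rule for all other inbound traffic
iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited
# Persist the rules across reboots
service iptables save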
[ "lvcreate -n NewVolumeName -L 5G volumegroup", "mke2fs -j /dev/volumegroup/NewVolumeName", "mkdir /virtstorage", "mount /dev/volumegroup/NewVolumeName /virtstorage", "semanage fcontext -a -t virt_image_t \"/virtstorage(/.*)?\"", "/virtstorage(/.*)? system_u:object_r:virt_image_t:s0", "restorecon -R -v /virtstorage", "touch /virtstorage/newfile", "sudo ls -Z /virtstorage -rw-------. root root system_u:object_r:virt_image_t:s0 newfile", "semanage fcontext -a -t virt_image_t -f -b /dev/sda2 restorecon /dev/sda2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_security_guide/chap-Virtualization_Security_Guide-Host_Security
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one. Procedure Click the following link to create a ticket . Include the document URL, the section number, and a description of the issue. Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/installation_guide/proc_providing-feedback-on-red-hat-documentation_default
12.7. Updating an attribute
12.7. Updating an attribute This section describes how to update an attribute using the command line and the web console. 12.7.1. Updating an Attribute Using the Command Line Use the dsconf utility to update an attribute entry. For example: For further details about object class definitions, see Section 12.1.2, "Object Classes" . 12.7.2. Updating an Attribute Using the Web Console To update an attribute using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select Schema Attributes . Click the Choose Action button next to the attribute you want to edit. Select Edit Attribute . Update the parameters. Click Save .
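The dsconf invocation referenced by "For example:" in Section 12.7.1 is included in the command listing at the end of this section. For readability, the same call is shown here with each option on its own line; the attribute name and option values are illustrative and should be replaced with your own.

dsconf -D "cn=Directory Manager" ldap://server.example.com schema attributetypes replace dateofbirth \
    --desc="Employee birthday" \
    --syntax="1.3.6.1.4.1.1466.115.121.1.15" \
    --single-value \
    --x-origin="Example defined"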
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com schema attributetypes replace dateofbirth --desc=\"Employee birthday\" --syntax=\"1.3.6.1.4.1.1466.115.121.1.15\" --single-value --x-origin=\"Example defined\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/updating_an_attribute
Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Chapter 2. Deploy OpenShift Data Foundation using local storage devices Use this section to deploy OpenShift Data Foundation on IBM Power infrastructure where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Find available storage devices . Create an OpenShift Data Foundation cluster on IBM Power . 2.1. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . 
Click Install . Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. 2.3. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.4. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4.1. Enabling and disabling key rotation when using KMS Security common practices require periodic encryption of key rotation. You can enable or disable key rotation when using KMS. 2.4.1.1. Enabling key rotation To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims , Namespace , or StorageClass (in the decreasing order of precedence). <value> can be @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. 
Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.4.1.2. Disabling key rotation You can disable key rotation for the following: All the persistent volume claims (PVCs) of storage class A specific PVC Disabling key rotation for all PVCs of a storage class To disable key rotation for all PVCs, update the annotation of the storage class: Disabling key rotation for a specific persistent volume claim Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on: Where <PVC_NAME> is the name of the PVC that you want to disable. Apply the following to the EncryptionKeyRotationCronJob CR from the step to disable the key rotation: Update the csiaddons.openshift.io/state annotation from managed to unmanaged : Where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR. Add suspend: true under the spec field: Save and exit. The key rotation will be disabled for the PVC. 2.5. Finding available storage devices Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating PVs for IBM Power. Procedure List and verify the name of the worker nodes with the OpenShift Data Foundation label. Example output: Log in to each worker node that is used for OpenShift Data Foundation resources and find the name of the additional disk that you have attached while deploying Openshift Container Platform. Example output: In this example, for worker-0, the available local devices of 500G are sda , sdc , sde , sdg , sdi , sdk , sdm , sdo . Repeat the above step for all the other worker nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details. 2.6. Creating OpenShift Data Foundation cluster on IBM Power Use this procedure to create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node (for example, 200 GB SSD) to use local storage devices on IBM Power. Verify your OpenShift Container Platform worker nodes are labeled for OpenShift Data Foundation: To identify storage devices on each node, refer to Finding available storage devices . Procedure Log into the OpenShift Web Console. In openshift-local-storage namespace Click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on YAML view for configuring Local Volume. Define a LocalVolume custom resource for block PVs using the following YAML. The above definition selects sda local device from the worker-0 , worker-1 and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be same on all the worker nodes. You can also specify more than one devicePaths. Click Create . Confirm whether diskmaker-manager pods and Persistent Volumes are created. For Pods Click Workloads Pods from the left pane of the OpenShift Web Console. 
Select openshift-local-storage from the Project drop-down list. Check if there are diskmaker-manager pods for each of the worker node that you used while creating LocalVolume CR. For Persistent Volumes Click Storage PersistentVolumes from the left pane of the OpenShift Web Console. Check the Persistent Volumes with the name local-pv-* . Number of Persistent Volumes will be equivalent to the product of number of worker nodes and number of storage devices provisioned while creating localVolume CR. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the required Storage Class that you used while installing LocalVolume. By default, it is set to none . Optional: Select Use Ceph RBD as the default StorageClass . This avoids having to manually annotate a StorageClass. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . 
Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Select Default (OVN) network as Multus is not yet supported on OpenShift Data Foundation on IBM Power. Click . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery(Regional-DR only) checkbox, else click . In the Review and create page:: Review the configurations details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . 
Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled. To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide.
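If you prefer the command line to the YAML tab for the flexible scaling check, a sketch such as the following reads the same two fields. It assumes the default ocs-storagecluster resource name shown above and the openshift-storage namespace; flexible scaling is enabled when the first value is true and the second is host.

oc get storagecluster ocs-storagecluster -n openshift-storage \
    -o jsonpath='{.spec.flexibleScaling}{" "}{.status.failureDomain}{"\n"}'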
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "oc get namespace default NAME STATUS AGE default Active 5d2h", "oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated", "oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h", "oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated", "oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" 
--overwrite=true persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s", "oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h", "oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/enable: false\" storageclass.storage.k8s.io/rbd-sc annotated", "oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'", "oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true", "oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{\"spec\": {\"suspend\": true}}' --type=merge.", "oc get nodes -l cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION worker-0 Ready worker 2d11h v1.23.3+e419edf worker-1 Ready worker 2d11h v1.23.3+e419edf worker-2 Ready worker 2d11h v1.23.3+e419edf", "oc debug node/<node name>", "oc debug node/worker-0 Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.0.63 If you don't see a command prompt, try pressing enter. sh-4.4# sh-4.4# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop1 7:1 0 500G 0 loop sda 8:0 0 500G 0 disk sdb 8:16 0 120G 0 disk |-sdb1 8:17 0 4M 0 part |-sdb3 8:19 0 384M 0 part `-sdb4 8:20 0 119.6G 0 part sdc 8:32 0 500G 0 disk sdd 8:48 0 120G 0 disk |-sdd1 8:49 0 4M 0 part |-sdd3 8:51 0 384M 0 part `-sdd4 8:52 0 119.6G 0 part sde 8:64 0 500G 0 disk sdf 8:80 0 120G 0 disk |-sdf1 8:81 0 4M 0 part |-sdf3 8:83 0 384M 0 part `-sdf4 8:84 0 119.6G 0 part sdg 8:96 0 500G 0 disk sdh 8:112 0 120G 0 disk |-sdh1 8:113 0 4M 0 part |-sdh3 8:115 0 384M 0 part `-sdh4 8:116 0 119.6G 0 part sdi 8:128 0 500G 0 disk sdj 8:144 0 120G 0 disk |-sdj1 8:145 0 4M 0 part |-sdj3 8:147 0 384M 0 part `-sdj4 8:148 0 119.6G 0 part sdk 8:160 0 500G 0 disk sdl 8:176 0 120G 0 disk |-sdl1 8:177 0 4M 0 part |-sdl3 8:179 0 384M 0 part `-sdl4 8:180 0 119.6G 0 part /sysroot sdm 8:192 0 500G 0 disk sdn 8:208 0 120G 0 disk |-sdn1 8:209 0 4M 0 part |-sdn3 8:211 0 384M 0 part /boot `-sdn4 8:212 0 119.6G 0 part sdo 8:224 0 500G 0 disk sdp 8:240 0 120G 0 disk |-sdp1 8:241 0 4M 0 part |-sdp3 8:243 0 384M 0 part `-sdp4 8:244 0 119.6G 0 part", "get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}'", "apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Block", "spec: flexibleScaling: true [...] status: failureDomain: host" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_ibm_power/deploy-using-local-storage-devices-ibm-power
Chapter 6. Installing a three-node cluster on vSphere
Chapter 6. Installing a three-node cluster on vSphere In OpenShift Container Platform version 4.14, you can install a three-node cluster on VMware vSphere. A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource efficient cluster, for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. 6.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: Configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. In a three-node cluster, the Ingress Controller pods run on the control plane nodes. For more information, see the "Load balancing requirements for user-provisioned infrastructure". After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on vSphere with user-provisioned infrastructure". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 6.2. steps Installing a cluster on vSphere with customizations Installing a cluster on vSphere with user-provisioned infrastructure
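After the cluster is deployed, you can confirm from the command line that the control plane was left schedulable, which is the effect of the mastersSchedulable setting shown above. This is a quick verification sketch; it assumes the cluster-scoped Scheduler resource named cluster that the cluster-scheduler-02-config.yml manifest configures.

# Returns true when control plane nodes accept application workloads
oc get scheduler cluster -o jsonpath='{.spec.mastersSchedulable}{"\n"}'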
[ "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_vsphere/installing-vsphere-three-node
Chapter 3. Installing and configuring the Datadog agent for Ceph
Chapter 3. Installing and configuring the Datadog agent for Ceph Install the Datadog agent for Ceph and configure it to report Ceph data back to the Datadog App. Prerequisites Root-level access to the Ceph monitor node. Appropriate Ceph key providing access to the Red Hat Ceph Storage cluster. Internet access. Procedure Log in to the Datadog App . The user interface presents the navigation on the left side of the screen. Click Integrations . To install the agent from the command line, click the Agent tab at the top of the screen. Open a command line and enter the one-step agent installation command. Example Note Copy the example from the Datadog user interface, because the API key differs from the example above and is unique to each user account.
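After the agent is installed, it still needs a Ceph check configuration before it reports Ceph data. The snippet below is only a sketch of what such a configuration commonly looks like; the file path varies by agent version, and both the path and the option names are assumptions here, so confirm them against the Datadog Ceph integration documentation before use.

# Example path for an older agent: /etc/dd-agent/conf.d/ceph.yaml
# Example path for a newer agent: /etc/datadog-agent/conf.d/ceph.d/conf.yaml
init_config:

instances:
  - ceph_cmd: /usr/bin/ceph    # command the check runs to collect cluster status (assumed option name)
    use_sudo: true             # allow the agent user to run the ceph command (assumed option name)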
[ "DD_API_KEY= KEY-STRING bash -c \"USD(curl -L https://raw.githubusercontent.com/DataDog/dd-agent/master/packaging/datadog-agent/source/install_agent.sh)\"" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/monitoring_ceph_with_datadog_guide/installing-and-configuring-the-datadog-agent-for-ceph_datadog
6.4. Command Logging
6.4. Command Logging Command logging captures user commands that have been submitted to JBoss Data Virtualization, query plan commands when query planning is performed, and data source commands that are being executed by connectors. The user command, "START USER COMMAND", is logged when JBoss Data Virtualization starts working on the query for the first time. This does not include the time the query was waiting in the queue. A corresponding user command, "END USER COMMAND", is logged when the request is complete (that is, when the statement is closed or all the batches are retrieved). There is only one pair of these for every user query. The query plan command, "PLAN USER COMMAND", is logged when JBoss Data Virtualization finishes the query planning process. There is no corresponding ending log entry. Non-plan user events are logged at the INFO level. The data source command, "START DATA SRC COMMAND", is logged when a query is sent to the data source. A corresponding data source command, "END SRC COMMAND", is logged when the execution is closed (that is, all the rows have been read). There can be one pair for each data source query that has been executed by JBoss Data Virtualization, and there can be a number of pairs depending upon your query. With this information being captured, the overall query execution time can be calculated. Additionally, each source query execution time can be calculated. If the overall query execution time shows a performance issue, look at each data source execution time to see where the issue may be. To enable command logging to the default log location, enable the DEBUG level of logging for the org.teiid.COMMAND_LOG context. You can enable or disable it using the Admin Console or the web console. To enable command logging to an alternative file location, configure a separate file appender for the DETAIL logging of the org.teiid.COMMAND_LOG context. An example of this is shown below and can also be found in the standalone.xml file. See Red Hat JBoss Data Virtualization Development Guide: Server Development for information on developing a custom logging solution if file-based (or any other built-in log4j) logging is not sufficient. The following is an example of a data source command and what one would look like when printed to the command log: Note the following pieces of information: modelName: the physical model for the data source to which the query is being issued. translatorName: the type of translator used to communicate with the data source. principal: the user account that submitted the query. startTime/endTime: the time of the action, which is based on the type of command being executed. sql: the command submitted to the translator for execution. This is not necessarily the final SQL command submitted to the actual data source, but it does show what the query engine decided to push down.
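If you prefer the management CLI to editing standalone.xml directly, the commands below mirror the XML example found in the command listing at the end of this section. They follow the standard JBoss EAP logging subsystem operations; this is a sketch, so verify the attribute names against your server version before running it.

# Create the dedicated command log file handler
/subsystem=logging/periodic-rotating-file-handler=COMMAND_FILE:add(level=DEBUG, formatter="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n", file={"relative-to" => "jboss.server.log.dir", "path" => "command.log"}, suffix=".yyyy-MM-dd")
# Route org.teiid.COMMAND_LOG at DEBUG level to that handler
/subsystem=logging/logger=org.teiid.COMMAND_LOG:add(level=DEBUG, handlers=["COMMAND_FILE"])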
[ "<periodic-rotating-file-handler name=\"COMMAND_FILE\"> <level name=\"DEBUG\" /> <formatter> <pattern-formatter pattern=\"%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n\" /> </formatter> <file relative-to=\"jboss.server.log.dir\" path=\"command.log\" /> <suffix value=\".yyyy-MM-dd\" /> </periodic-rotating-file-handler> <logger category=\"org.teiid.COMMAND_LOG\"> <level name=\"DEBUG\" /> <handlers> <handler name=\"COMMAND_FILE\" /> </handlers> </logger>", "2012-02-22 16:01:53,712 DEBUG [org.teiid.COMMAND_LOG] (Worker1_QueryProcessorQueue11 START DATA SRC COMMAND: startTime=2012-02-22 16:01:53.712 requestID=Ku4/dgtZPYk0.5 sourceCommandID=4 txID=null modelName=DTHCP translatorName=jdbc-simple sessionID=Ku4/dgtZPYk0 principal=user@teiid-security sql=HCP_ADDR_XREF.HUB_ADDR_ID, CPN_PROMO_HIST.PROMO_STAT_DT FROM CPN_PROMO_HIST, HCP_ADDRESS, HCP_ADDR_XREF WHERE (HCP_ADDRESS.ADDR_ID = CPN_PROMO_HIST.SENT_ADDR_ID) AND (HCP_ADDRESS.ADDR_ID = HCP_ADDR_XREF.ADDR_ID) AND (CPN_PROMO_HIST.PROMO_STAT_CD NOT LIKE 'EMAIL%') AND (CPN_PROMO_HIST.PROMO_STAT_CD <> 'SENT_EM') AND (CPN_PROMO_HIST.PROMO_STAT_DT > {ts'2010-02-22 16:01:52.928'})" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/command_logging1
Chapter 1. Preparing to install on IBM Power Virtual Server
Chapter 1. Preparing to install on IBM Power Virtual Server The installation workflows documented in this section are for IBM Power Virtual Server infrastructure environments. 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Important IBM Power Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.2. Requirements for installing OpenShift Container Platform on IBM Power Virtual Server Before installing OpenShift Container Platform on IBM Power Virtual Server, you must create a service account and configure an IBM Cloud account. See Configuring an IBM Cloud account for details about creating an account, configuring DNS and supported IBM Power Virtual Server regions. You must manually manage your cloud credentials when installing a cluster to IBM Power Virtual Server. Do this by configuring the Cloud Credential Operator (CCO) for manual mode before you install the cluster. 1.3. Choosing a method to install OpenShift Container Platform on IBM Power Virtual Server You can install OpenShift Container Platform on IBM Power Virtual Server using installer-provisioned infrastructure. This process involves using an installation program to provision the underlying infrastructure for your cluster. Installing OpenShift Container Platform on IBM Power Virtual Server using user-provisioned infrastructure is not supported at this time. See Installation process for more information about installer-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on IBM Power Virtual Server infrastructure that is provisioned by the OpenShift Container Platform installation program by using one of the following methods: Installing a customized cluster on IBM Power Virtual Server : You can install a customized cluster on IBM Power Virtual Server infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on IBM Power Virtual Server into an existing VPC : You can install OpenShift Container Platform on IBM Power Virtual Server into an existing Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on IBM Power Virtual Server : You can install a private cluster on IBM Power Virtual Server. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 
Installing a cluster on IBM Power Virtual Server in a restricted network : You can install OpenShift Container Platform on IBM Power Virtual Server on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. 1.4. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on IBM Power Virtual Server, you must set the CCO to manual mode as part of the installation process. To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys 1.5. steps Configuring an IBM Cloud account
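For convenience, the documented extraction steps can be run as one short shell session. The following is a minimal sketch that only consolidates the commands shown above; the pull-secret path and the extracted binary name follow the examples in this section and may differ in your environment (the verification example above uses ./ccoctl.rhel9).
# Resolve the release image and the CCO image it references
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
# Extract the ccoctl binary from the CCO image and make it executable
oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
chmod 775 ccoctl
# Display the help text to confirm the tool is ready to use
./ccoctl --help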
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_power_virtual_server/preparing-to-install-on-ibm-power-vs
2. Kernel
2. Kernel The kernel shipped in Red Hat Enterprise Linux 6.1 includes several hundred bug fixes and enhancements to the Linux kernel. For details concerning every bug fixed in and every enhancement added to the kernel for this release, refer to the kernel chapter in the Red Hat Enterprise Linux 6.1 Technical Notes. Control Groups Control groups are a feature of the Linux kernel introduced in Red Hat Enterprise Linux 6. Each control group is a set of tasks on a system that have been grouped together to better manage their interaction with system hardware. Control groups can be tracked to monitor the system resources that they use. Additionally, system administrators can use control group infrastructure to allow or deny specific control groups access to system resources such as memory, CPUs (or groups of CPUs), networking, I/O, or the scheduler. Red Hat Enterprise Linux 6.1 introduces many improvements and updates to control groups, including the ability to throttle block device Input/Output (I/O) to a particular device, either by bytes per second or by I/O operations per second (IOPS). Additionally, integration with libvirt and other userspace tools is provided by the new ability to create hierarchical block device control groups. The new block device control group tunable, group_idle , provides better throughput with control groups while maintaining fairness. Red Hat Enterprise Linux 6.1 also introduces the new autogroup feature, reducing latencies and allowing for more interactive tasks during CPU-intensive workloads. This release also adds the cgsnapshot tool, which provides the ability to take a snapshot of the current control group configuration. Note Control Groups and other resource management features are discussed in detail in the Red Hat Enterprise Linux 6 Resource Management Guide. Networking updates Red Hat Enterprise Linux 6.1 introduces support for Receive Packet Steering (RPS) and Receive Flow Steering (RFS). Receive Packet Steering allows incoming network packets to be processed in parallel over multiple CPU cores. Receive Flow Steering chooses the optimal CPU to process network data intended for a specific application. kdump kdump is an advanced crash dumping mechanism. When enabled, the system is booted from the context of another kernel. This second kernel reserves a small amount of memory, and its only purpose is to capture the core dump image if the system crashes. Red Hat Enterprise Linux 6.1 introduces the kernel message dumper, which is called when a kernel panic occurs. The kernel message dumper provides easier crash analysis and allows third-party kernel message logging to alternative targets. Performance updates and improvements The kernel in Red Hat Enterprise Linux 6.1 provides the following notable performance improvements: Updates and improvements to Transparent Huge Pages (THP) support. Updates to perf_event , adding the new perf lock feature to better analyze lock events. kprobes jump optimization, reducing overhead and enhancing SystemTap performance. Updates to i7300_edac and i7core_edac , providing support for monitoring of memory errors on motherboards that use the Intel 7300 chipset.
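To illustrate the block device throttling mentioned above, the blkio controller exposes per-device limits through blkio.throttle.* files. The following is a minimal sketch, not taken from this document: it assumes the blkio controller is mounted at /cgroup/blkio and that 8:0 is the major:minor number of the disk to throttle.
# Create a control group for the workload to be throttled
mkdir /cgroup/blkio/throttled
# Limit reads from device 8:0 to 1 MB per second
echo "8:0 1048576" > /cgroup/blkio/throttled/blkio.throttle.read_bps_device
# Alternatively, limit the same device to 100 read operations per second (IOPS)
echo "8:0 100" > /cgroup/blkio/throttled/blkio.throttle.read_iops_device
# Apply the limits to a running process (replace 1234 with the PID to throttle)
echo 1234 > /cgroup/blkio/throttled/tasks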
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_release_notes/kernel
Chapter 2. Che-Theia IDE basics
Chapter 2. Che-Theia IDE basics This section describes basics workflows and commands for Che-Theia: the native integrated development environment for Red Hat CodeReady Workspaces. Defining custom commands for Che-Theia Version Control Troubleshooting 2.1. Defining custom commands for Che-Theia The Che-Theia IDE allows users to define custom commands in a devfile that are then available when working in a workspace. The following is an example of the commands section of a devfile. commands: - name: theia:build actions: - type: exec component: che-dev command: > yarn workdir: /projects/theia - name: run actions: - type: vscode-task referenceContent: | { "version": "2.0.0", "tasks": [ { "label": "theia:watch", "type": "shell", "options": {"cwd": "/projects/theia"}, "command": "yarn", "args": ["watch"] } ] } - name: debug actions: - type: vscode-launch referenceContent: | { "version": "0.2.0", "configurations": [ { "type": "node", "request": "attach", "name": "Attach by Process ID", "processId": "USD{command:PickProcess}" } ] } CodeReady Workspaces commands theia:build The exec type implies that the CodeReady Workspaces runner is used for command execution. The user can specify the component in whose container the command is executed. The command field contains the command line for execution. The workdir is the working directory in which the command is executed. Visual Studio Code (VS Code) tasks run The type is vscode-task . For this type of command, the referenceContent field must contain content with task configurations in the VS Code format. For more information about VS Code tasks, see the Task section on the Visual Studio User Guide page . VS Code launch configurations debug The type is vscode-launch . It contains the launch configurations in the VS Code format. For more information about VS Code launch configurations, see the Debugging section on the Visual Studio documentation page . For a list of available tasks and launch configurations, see the tasks.json and the launch.json configuration files in the /workspace/.theia directory where the configuration from the devfile is exported to. 2.1.1. Che-Theia task types Two types of tasks exist in a devfile: tasks in the VS Code format and CodeReady Workspaces commands. Tasks from the devfile are copied to the configuration file when the workspace is started. Depending on the type of the task, the task is then available for running: CodeReady Workspaces commands: From the Terminal Run Task menu in the configured tasks section, or from the My Workspace panel Tasks in the VS Code format: From the Run Tasks menu To run the task definitions provided by plug-ins, select the Terminal Run Task menu option. The tasks are placed in the detected tasks section. 2.1.2. Running and debugging Che-Theia supports the Debug Adapter Protocol . This protocol defines a generic way for how a development tool can communicate with a debugger. It means Che-Theia works with all implementations . Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . Procedure To debug an application: Click Debug Add Configuration to add debugging or launch configuration to the project. From the pop-up menu, select the appropriate configuration for the application that you want to debug. Update the configuration by modifying or adding attributes. Breakpoints can be toggled by clicking the editor margin. 
Open the context menu of the breakpoint to add conditions. To start debugging, click View Debug . In the Debug view, select the configuration and press F5 to debug the application. Or, start the application without debugging by pressing Ctrl+F5 . 2.1.3. Editing a task and launch configuration Procedure To customize the configuration file: Edit the tasks.json or launch.json configuration files. Add new definitions to the configuration file or modify the existing ones. Note The changes are stored in the configuration file. To customize the task configuration provided by plug-ins, select the Terminal Configure TasksS menu option, and choose the task to configure. The configuration is then copied to the tasks.json file and is available for editing. 2.2. Version Control Red Hat CodeReady Workspaces natively supports the VS Code SCM model . By default, Red Hat CodeReady Workspaces includes the native VS Code Git extension as a Source Code Management (SCM) provider. 2.2.1. Managing Git configuration: identity The first thing to do before starting to use Git is to set a user name and email address. This is important because every Git commit uses this information. Prerequisites The Visual Studio Code Git extension installed. Procedure To configure Git identity using the CodeReady Workspaces user interface, go to in Preferences . Open File > Settings > Open Preferences : In the opened window, navigate to the Git section, and find: And configure the identity. To configure Git identity using the command line, open the terminal of the Che-Theia container. Navigate to the My Workspace view, and open Plugins > theia-ide... > New terminal : Execute the following commands: Che-Theia permanently stores this information and restores it on future workspace starts. 2.2.2. Accessing a Git repository using HTTPS Prerequisites Git is installed. Install Git if needed by following Getting Started - Installing Git . Procedure To clone a repository using HTTPS: Use the clone command provided by the Visual Studio Code Git extension. Alternatively, use the native Git commands in the terminal to clone a project. Navigate to destination folder using the cd command. Use git clone to clone a repository: Red Hat CodeReady Workspaces supports git self-signed SSL certificates. See Deploying Red Hat CodeReady Workspaces with support for git repositories with self-signed certificates to learn more. 2.2.3. Accessing a Git repository using a generated SSH key pair 2.2.3.1. Generating an SSH key using the CodeReady Workspaces command palette The following section describes a generation of an SSH key using the CodeReady Workspaces command palette and its further use in Git provider communication. This SSH key restricts permissions for the specific Git provider; therefore, the user has to create a unique SSH key for each Git provider in use. Prerequisites A running instance of CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of CodeReady Workspaces Creating a workspace from user dashboard . Personal GitHub account or other Git provider account created. Procedure A common SSH key pair that works with all the Git providers is present by default. To start using it, add the public key to the Git provider. 
Generate an SSH key pair that only works with a particular Git provider: In the CodeReady Workspaces IDE, press F1 to open the Command Palette, or navigate to View Find Command in the top menu. The command palette can be also activated by pressing Ctrl + Shift + p (or Cmd + Shift + p on macOS). Search for SSH: generate key pair for particular host by entering generate into the search box and pressing Enter once filled. Provide the hostname for the SSH key pair such as, for example, github.com . The SSH key pair is generated. Click the View button and copy the public key from the editor and add it to the Git provider. Because of this action, the user can now use another command from the command palette: Clone git repository by providing an SSH secured URL. 2.2.3.2. Adding the associated public key to a repository or account on GitHub To add the associated public key to a repository or account on GitHub: Navigate to github.com . Click the drop-down arrow to the user icon in the upper right corner of the window. Click Settings SSH and GPG keys and then click the New SSH key button. In the Title field, type a title for the key, and in the Key field, paste the public key copied from CodeReady Workspaces. Click the Add SSH key button. 2.2.3.3. Adding the associated public key to a Git repository or account on GitLab To add the associated public key to a Git repository or account on GitLab: Navigate to gitlab.com . Click the user icon in the upper right corner of the window. Click Settings SSH Keys . In the Title field, type a title for the key and in the Key field, paste the public key copied from CodeReady Workspaces. Click the Add key button. 2.2.4. Managing pull requests using the GitHub PR plug-in To manage GitHub pull requests, the VS Code GitHub Pull Request plug-in is available in the list of plug-ins of the workspace. 2.2.4.1. Using the GitHub Pull Requests plug-in Prerequisites GitHub OAuth is configured. See Configuring GitHub OAuth . Procedure Authenticate by running the GitHub authenticate command. You will be redirected to GitHub to authorize CodeReady Workspaces. When CodeReady Workspaces is authorized, refresh the browser page where CodeReady Workspaces is running to update the plug-in with the GitHub token. Alternatively, manually fetch the GitHub token and paste it to the plug-in by running the GitHub Pull Requests: Manually Provide Authentication Response command. 2.2.4.2. Creating a new pull request Open the GitHub repository. To be able to execute remote operations, the repository must have a remote with an SSH URL. Checkout a new branch and make changes that you want to publish. Run the GitHub Pull Requests: Create Pull Request command. 2.3. Che-Theia Troubleshooting This section describes some of the most frequent issues with the Che-Theia IDE. Che-Theia shows a notification with the following message: Plugin runtime crashed unexpectedly, all plugins are not working, please reload the page. Probably there is not enough memory for the plugins. This means that one of the Che-Theia plug-ins that are running in the Che-Theia IDE container requires more memory than the container has. To fix this problem, increase the amount of memory for the Che-Theia IDE container: Navigate to the CodeReady Workspaces Dashboard. Select the workspace in which the problem happened. Switch to the Devfile tab. In the components section of the devfile, find a component of the cheEditor type. Add a new property, memoryLimit: 1024M (or increase the value if it already exists). 
Save changes and restart the workspace. Additional resources Asking the community for help: Mattermost channel dedicated to Red Hat CodeReady Workspaces. Reporting a bug: Red Hat CodeReady Workspaces repository issues .
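As a quick terminal-based recap of the Git identity and clone steps above, the following sketch can be run from the Che-Theia terminal; the name, email address, and repository URLs are illustrative examples only.
# Set the identity recorded in every commit (example values)
git config --global user.name "John Doe"
git config --global user.email jdoe@example.com
# Clone over HTTPS, or over SSH once the generated public key is added to the provider
git clone https://github.com/example-org/example-repo.git
git clone git@github.com:example-org/example-repo.git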
[ "commands: - name: theia:build actions: - type: exec component: che-dev command: > yarn workdir: /projects/theia - name: run actions: - type: vscode-task referenceContent: | { \"version\": \"2.0.0\", \"tasks\": [ { \"label\": \"theia:watch\", \"type\": \"shell\", \"options\": {\"cwd\": \"/projects/theia\"}, \"command\": \"yarn\", \"args\": [\"watch\"] } ] } - name: debug actions: - type: vscode-launch referenceContent: | { \"version\": \"0.2.0\", \"configurations\": [ { \"type\": \"node\", \"request\": \"attach\", \"name\": \"Attach by Process ID\", \"processId\": \"USD{command:PickProcess}\" } ] }", "user.name user.email", "git config --global user.name \"John Doe\" git config --global user.email [email protected]", "git clone <link>" ]
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/end-user_guide/che-theia-ide-basics_crw
Chapter 3. Installing the undercloud with containers
Chapter 3. Installing the undercloud with containers This chapter provides info on how to create a container-based undercloud and keep it updated. 3.1. Configuring director The director installation process requires certain settings in the undercloud.conf configuration file, which director reads from the home directory of the stack user. Complete the following steps to copy default template as a foundation for your configuration. Procedure Copy the default template to the home directory of the stack user's: Edit the undercloud.conf file. This file contains settings to configure your undercloud. If you omit or comment out a parameter, the undercloud installation uses the default value. 3.2. Director configuration parameters The following list contains information about parameters for configuring the undercloud.conf file. Keep all parameters within their relevant sections to avoid errors. Important At minimum, you must set the container_images_file parameter to the environment file that contains your container image configuration. Without this parameter properly set to the appropriate file, director cannot obtain your container image rule set from the ContainerImagePrepare parameter nor your container registry authentication details from the ContainerImageRegistryCredentials parameter. Defaults The following parameters are defined in the [DEFAULT] section of the undercloud.conf file: additional_architectures A list of additional (kernel) architectures that an overcloud supports. Currently the overcloud supports ppc64le architecture in addition to the default x86_64 architecture. Note When you enable support for ppc64le, you must also set ipxe_enabled to False . For more information on configuring your undercloud with multiple CPU architectures, see Configuring a multiple CPU architecture overcloud . certificate_generation_ca The certmonger nickname of the CA that signs the requested certificate. Use this option only if you have set the generate_service_certificate parameter. If you select the local CA, certmonger extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds the certificate to the trust chain. clean_nodes Defines whether to wipe the hard drive between deployments and after introspection. cleanup Delete temporary files. Set this to False to retain the temporary files used during deployment. The temporary files can help you debug the deployment if errors occur. container_cli The CLI tool for container management. Leave this parameter set to podman . Red Hat Enterprise Linux 8.4 only supports podman . container_healthcheck_disabled Disables containerized service health checks. Red Hat recommends that you enable health checks and leave this option set to false . container_images_file Heat environment file with container image information. This file can contain the following entries: Parameters for all required container images The ContainerImagePrepare parameter to drive the required image preparation. Usually the file that contains this parameter is named containers-prepare-parameter.yaml . container_insecure_registries A list of insecure registries for podman to use. Use this parameter if you want to pull images from another source, such as a private container registry. In most cases, podman has the certificates to pull container images from either the Red Hat Container Catalog or from your Satellite Server if the undercloud is registered to Satellite. container_registry_mirror An optional registry-mirror configured that podman uses. 
custom_env_files Additional environment files that you want to add to the undercloud installation. deployment_user The user who installs the undercloud. Leave this parameter unset to use the current default user stack . discovery_default_driver Sets the default driver for automatically enrolled nodes. Requires the enable_node_discovery parameter to be enabled and you must include the driver in the enabled_hardware_types list. enable_ironic; enable_ironic_inspector; enable_mistral; enable_nova; enable_tempest; enable_validations; enable_zaqar Defines the core services that you want to enable for director. Leave these parameters set to true . enable_node_discovery Automatically enroll any unknown node that PXE-boots the introspection ramdisk. New nodes use the fake driver as a default but you can set discovery_default_driver to override. You can also use introspection rules to specify driver information for newly enrolled nodes. enable_novajoin Defines whether to install the novajoin metadata service in the undercloud. enable_routed_networks Defines whether to enable support for routed control plane networks. enable_swift_encryption Defines whether to enable Swift encryption at-rest. enable_telemetry Defines whether to install OpenStack Telemetry services (gnocchi, aodh, panko) in the undercloud. Set the enable_telemetry parameter to true if you want to install and configure telemetry services automatically. The default value is false , which disables telemetry on the undercloud. This parameter is required if you use other products that consume metrics data, such as Red Hat CloudForms. Warning RBAC is not supported by every component. The Alarming service (aodh) and Gnocchi do not take secure RBAC rules into account. enabled_hardware_types A list of hardware types that you want to enable for the undercloud. generate_service_certificate Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used for the undercloud_service_certificate parameter. The undercloud installation saves the resulting certificate /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem . The CA defined in the certificate_generation_ca parameter signs this certificate. heat_container_image URL for the heat container image to use. Leave unset. heat_native Run host-based undercloud configuration using heat-all . Leave as true . hieradata_override Path to hieradata override file that configures Puppet hieradata on the director, providing custom configuration to services beyond the undercloud.conf parameters. If set, the undercloud installation copies this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy. For more information about using this feature, see Configuring hieradata on the undercloud . inspection_extras Defines whether to enable extra hardware collection during the inspection process. This parameter requires the python-hardware or python-hardware-detect packages on the introspection image. inspection_interface The bridge that director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane . inspection_runbench Runs a set of benchmarks during node introspection. Set this parameter to true to enable the benchmarks. This option is necessary if you intend to perform benchmark analysis when inspecting the hardware of registered nodes. 
ipa_otp Defines the one-time password to register the undercloud node to an IPA server. This is required when enable_novajoin is enabled. ipv6_address_mode IPv6 address configuration mode for the undercloud provisioning network. The following list contains the possible values for this parameter: dhcpv6-stateless - Address configuration using router advertisement (RA) and optional information using DHCPv6. dhcpv6-stateful - Address configuration and optional information using DHCPv6. ipxe_enabled Defines whether to use iPXE or standard PXE. The default is true , which enables iPXE. Set this parameter to false to use standard PXE. For PowerPC deployments, or for hybrid PowerPC and x86 deployments, set this value to false . local_interface The chosen interface for the director Provisioning NIC. This is also the device that director uses for DHCP and PXE boot services. Change this value to your chosen device. To see which device is connected, use the ip addr command. For example, this is the result of an ip addr command: In this example, the External NIC uses em0 and the Provisioning NIC uses em1 , which is currently not configured. In this case, set the local_interface to em1 . The configuration script attaches this interface to a custom bridge defined with the inspection_interface parameter. local_ip The IP address defined for the director Provisioning NIC. This is also the IP address that director uses for DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you use a different subnet for the Provisioning network, for example, if this IP address conflicts with an existing IP address or subnet in your environment. For IPv6, the local IP address prefix length must be /64 to support both stateful and stateless connections. local_mtu The maximum transmission unit (MTU) that you want to use for the local_interface . Do not exceed 1500 for the undercloud. local_subnet The local subnet that you want to use for PXE boot and DHCP interfaces. The local_ip address should reside in this subnet. The default is ctlplane-subnet . net_config_override Path to network configuration override template. If you set this parameter, the undercloud uses a JSON or YAML format template to configure the networking with os-net-config and ignores the network parameters set in undercloud.conf . Use this parameter when you want to configure bonding or add an option to the interface. For more information about customizing undercloud network interfaces, see Configuring undercloud network interfaces . networks_file Networks file to override for heat . output_dir Directory to output state, processed heat templates, and Ansible deployment files. overcloud_domain_name The DNS domain name that you want to use when you deploy the overcloud. Note When you configure the overcloud, you must set the CloudDomain parameter to a matching value. Set this parameter in an environment file when you configure your overcloud. roles_file The roles file that you want to use to override the default roles file for undercloud installation. It is highly recommended to leave this parameter unset so that the director installation uses the default roles file. scheduler_max_attempts The maximum number of times that the scheduler attempts to deploy an instance. This value must be greater or equal to the number of bare metal nodes that you expect to deploy at once to avoid potential race conditions when scheduling. service_principal The Kerberos principal for the service using the certificate. 
Use this parameter only if your CA requires a Kerberos principal, such as in FreeIPA. subnets List of routed network subnets for provisioning and introspection. The default value includes only the ctlplane-subnet subnet. For more information, see Subnets . templates Heat templates file to override. undercloud_admin_host The IP address or hostname defined for director Admin API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. If the undercloud_admin_host is not in the same IP network as the local_ip , you must set the ControlVirtualInterface parameter to the interface on which you want the admin APIs on the undercloud to listen. By default, the admin APIs listen on the br-ctlplane interface. Set the ControlVirtualInterface parameter in a custom environment file, and include the custom environment file in the undercloud.conf file by configuring the custom_env_files parameter. For information about customizing undercloud network interfaces, see Configuring undercloud network interfaces . undercloud_debug Sets the log level of undercloud services to DEBUG . Set this value to true to enable DEBUG log level. undercloud_enable_selinux Enable or disable SELinux during the deployment. It is highly recommended to leave this value set to true unless you are debugging an issue. undercloud_hostname Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures all system host name settings. If left unset, the undercloud uses the current host name, but you must configure all system host name settings appropriately. undercloud_log_file The path to a log file to store the undercloud install and upgrade logs. By default, the log file is install-undercloud.log in the home directory. For example, /home/stack/install-undercloud.log . undercloud_nameservers A list of DNS nameservers to use for the undercloud hostname resolution. undercloud_ntp_servers A list of network time protocol servers to help synchronize the undercloud date and time. undercloud_public_host The IP address or hostname defined for director Public API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. If the undercloud_public_host is not in the same IP network as the local_ip , you must set the PublicVirtualInterface parameter to the public-facing interface on which you want the public APIs on the undercloud to listen. By default, the public APIs listen on the br-ctlplane interface. Set the PublicVirtualInterface parameter in a custom environment file, and include the custom environment file in the undercloud.conf file by configuring the custom_env_files parameter. For information about customizing undercloud network interfaces, see Configuring undercloud network interfaces . undercloud_service_certificate The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed certificate. undercloud_timezone Host timezone for the undercloud. If you do not specify a timezone, director uses the existing timezone configuration. undercloud_update_packages Defines whether to update packages during the undercloud installation. Subnets Each provisioning subnet is a named section in the undercloud.conf file. 
For example, to create a subnet called ctlplane-subnet , use the following sample in your undercloud.conf file: You can specify as many provisioning networks as necessary to suit your environment. Important Director cannot change the IP addresses for a subnet after director creates the subnet. cidr The network that director uses to manage overcloud instances. This is the Provisioning network, which the undercloud neutron service manages. Leave this as the default 192.168.24.0/24 unless you use a different subnet for the Provisioning network. masquerade Defines whether to masquerade the network defined in the cidr for external access. This provides the Provisioning network with network address translation (NAT) so that the Provisioning network has external access through director. Note The director configuration also enables IP forwarding automatically using the relevant sysctl kernel parameter. dhcp_start; dhcp_end The start and end of the DHCP allocation range for overcloud nodes. Ensure that this range contains enough IP addresses to allocate to your nodes. If not specified for the subnet, director determines the allocation pools by removing the values set for the local_ip , gateway , undercloud_admin_host , undercloud_public_host , and inspection_iprange parameters from the subnets full IP range. You can configure non-contiguous allocation pools for undercloud control plane subnets by specifying a list of start and end address pairs. Alternatively, you can use the dhcp_exclude option to exclude IP addresses within an IP address range. For example, the following configurations both create allocation pools 172.20.0.100-172.20.0.150 and 172.20.0.200-172.20.0.250 : Option 1 Option 2 dhcp_exclude IP addresses to exclude in the DHCP allocation range. For example, the following configuration excludes the IP address 172.20.0.105 and the IP address range 172.20.0.210-172.20.0.219 : dns_nameservers DNS nameservers specific to the subnet. If no nameservers are defined for the subnet, the subnet uses nameservers defined in the undercloud_nameservers parameter. gateway The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the External network. Leave this as the default 192.168.24.1 unless you use a different IP address for director or want to use an external gateway directly. host_routes Host routes for the Neutron-managed subnet for the overcloud instances on this network. This also configures the host routes for the local_subnet on the undercloud. inspection_iprange Temporary IP range for nodes on this network to use during the inspection process. This range must not overlap with the range defined by dhcp_start and dhcp_end but must be in the same IP subnet. Modify the values for these parameters to suit your configuration. When complete, save the file. 3.3. Installing director Complete the following steps to install director and perform some basic post-installation tasks. Procedure Run the following command to install director on the undercloud: This command launches the director configuration script. Director installs additional packages, configures its services according to the configuration in the undercloud.conf , and starts all the RHOSP service containers. This script takes several minutes to complete. The script generates two files: undercloud-passwords.conf - A list of all passwords for the director services. stackrc - A set of initialization variables to help you access the director command line tools. 
Confirm that the RHOSP service containers are running: The following command output indicates that the RHOSP service containers are running ( Up ): To initialize the stack user to use the command line tools, run the following command: The prompt now indicates that OpenStack commands authenticate and execute against the undercloud: The director installation is complete. You can now use the director command line tools. 3.4. Performing a minor update of a containerized undercloud Director provides commands to update the main packages on the undercloud node. Use director to perform a minor update within the current version of your RHOSP environment. Procedure On the undercloud node, log in as the stack user. Source the stackrc file: Update the director main packages with the dnf update command: $ sudo dnf update -y python3-tripleoclient* tripleo-ansible ansible Update the undercloud environment with the openstack undercloud upgrade command: Wait until the undercloud update process completes. Reboot the undercloud to update the operating system's kernel and other system packages: Wait until the node boots.
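When verifying the installation, it can help to filter the documented podman output for containers that are not reporting a healthy status. This is a small sketch built on the commands shown above; the grep pattern is an assumption about the Status text, and ironic_pxe_http is only an example container name taken from the sample output.
# List any service container whose status does not report (healthy)
sudo podman ps -a --format "{{.Names}} {{.Status}}" | grep -v "(healthy)"
# Review the logs of a container flagged as unhealthy, for example ironic_pxe_http
sudo podman logs ironic_pxe_http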
[ "[stack@director ~]USD cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf", "2: em0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:75:24:09 brd ff:ff:ff:ff:ff:ff inet 192.168.122.178/24 brd 192.168.122.255 scope global dynamic em0 valid_lft 3462sec preferred_lft 3462sec inet6 fe80::5054:ff:fe75:2409/64 scope link valid_lft forever preferred_lft forever 3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noop state DOWN link/ether 42:0b:c2:a5:c1:26 brd ff:ff:ff:ff:ff:ff", "[ctlplane-subnet] cidr = 192.168.24.0/24 dhcp_start = 192.168.24.5 dhcp_end = 192.168.24.24 inspection_iprange = 192.168.24.100,192.168.24.120 gateway = 192.168.24.1 masquerade = true", "dhcp_start = 172.20.0.100,172.20.0.200 dhcp_end = 172.20.0.150,172.20.0.250", "dhcp_start = 172.20.0.100 dhcp_end = 172.20.0.250 dhcp_exclude = 172.20.0.151-172.20.0.199", "dhcp_exclude = 172.20.0.105,172.20.0.210-172.20.0.219", "[stack@director ~]USD openstack undercloud install", "[stack@director ~]USD sudo podman ps -a --format \"{{.Names}} {{.Status}}\"", "memcached Up 3 hours (healthy) haproxy Up 3 hours rabbitmq Up 3 hours (healthy) mysql Up 3 hours (healthy) iscsid Up 3 hours (healthy) keystone Up 3 hours (healthy) keystone_cron Up 3 hours (healthy) neutron_api Up 3 hours (healthy) logrotate_crond Up 3 hours (healthy) neutron_dhcp Up 3 hours (healthy) neutron_l3_agent Up 3 hours (healthy) neutron_ovs_agent Up 3 hours (healthy) ironic_api Up 3 hours (healthy) ironic_conductor Up 3 hours (healthy) ironic_neutron_agent Up 3 hours (healthy) ironic_pxe_tftp Up 3 hours (healthy) ironic_pxe_http Up 3 hours (unhealthy) ironic_inspector Up 3 hours (healthy) ironic_inspector_dnsmasq Up 3 hours (healthy) neutron-dnsmasq-qdhcp-30d628e6-45e6-499d-8003-28c0bc066487 Up 3 hours", "[stack@director ~]USD source ~/stackrc", "(undercloud) [stack@director ~]USD", "source ~/stackrc", "sudo dnf update -y python3-tripleoclient* tripleo-ansible ansible", "openstack undercloud upgrade", "sudo reboot" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/transitioning_to_containerized_services/assembly_installing-the-undercloud-with-containers
Chapter 25. Clustering
Chapter 25. Clustering Pacemaker correctly implements fencing and unfencing for Pacemaker remote nodes Previously, Pacemaker did not implement unfencing for Pacemaker remote nodes. As a consequence, Pacemaker remote nodes remained fenced even if a fence device required unfencing. With this update, Pacemaker correctly implements both fencing and unfencing for Pacemaker remote nodes, and the described problem no longer occurs. (BZ# 1394418 ) Pacemaker now probes guest nodes Important update for users of guest nodes. Pacemaker now probes guest nodes, which are Pacemaker remote nodes created using the remote-node parameter of a resource such as VirtualDomain . If users were previously relying on the fact that probes were not done, the probes may fail, potentially causing fencing of the guest node. If a guest node cannot run a probe of a resource (for example, if the software is not even installed on the guest), then the location constraint banning the resource from the guest node should have the resource-discovery option set to never , the same as would be required with a cluster node or remote node in the same situation. (BZ# 1489728 ) The pcs resource cleanup command no longer generates unnecessary cluster load The pcs resource cleanup command cleans the records of failed resource operations that have been resolved. Previously, the command probed all resources on all nodes, generating an unnecessary load on cluster operation. With this fix, the command probes only the resources for which a resource operation failed. The functionality of the pcs resource cleanup command has been replaced by the new pcs resource refresh command, which probes all resources on all nodes. For information on cluster resource cleanup, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-resource_cleanup-haar . (BZ# 1508351 ) Warning generated when user specifies action attribute for stonith device Previously, it was possible for a user to set an action attribute for a stonith device, even though this option is deprecated and is not recommended as it can cause unexpected fencing. The following fixes have been implemented: When a user tries to set an action option of a stonith device with the CLI, this generates a warning message along with the instructions to use the --force flag to set this attribute. The pcsd Web UI now displays a warning message to action option field. The output of the pcs status command displays a warning when a stonith device has the action option set. (BZ#1421702) It is now possible to enable stonith agent debugging without specifying the --force flag Previously, attempting to enable debugging of a stonith agent by setting the debug or verbose parameters required that the user specify the --force flag. With this fix, using the --force flag is no longer necessary. (BZ# 1432283 ) The fence_ilo3 resource agent no longer has a default value of cycle for the action parameter Previously, the fence_ilo3 resource agent had a default value of cycle for the action parameter. This value is unsupported, as it may cause data corruption. The default value for this parameter is now onoff . Additionally, a warning is now displayed in the output of the pcs status command and the web UI if a stonith device has its method option set to cycle . (BZ# 1519370 , BZ#1523378) Pacemaker no longer starts up when sbd is enabled but not started successfully by systemd Previously, if sbd did not start properly, systemd would still start Pacemaker. 
This would lead to sbd poison-pill-triggered reboots not being performed without this being detected by fence_sbd and, in the case of quorum-based watchdog fencing, the nodes losing quorum would not self-fence either. With this fix, if sbd does not come up properly, Pacemaker is not started. This should prevent all sources of data corruption due to sbd not coming up. (BZ# 1525981 ) A fenced node in an 'sbd' setup now shuts down reliably Previously, when a node received an 'off' via the poison pill mechanism used by 'sbd' on a shared disk, the node was likely to reboot instead of powering off. With this fix, receiving an 'off' powers off the node, and receiving a 'reset' reboots the node. If the node is not able to perform the software-driven reboot or power off properly, the watchdog triggers and performs whatever action the watchdog device is configured for. A fenced node in an 'sbd' setup now shuts down reliably if the watchdog device is configured to power off the node and fencing requests 'off' via the poison pill mechanism on a shared disk. (BZ# 1468580 ) IPaddr2 resource agent now finds the NIC for IPv6 addresses with a 128 netmask Previously, the IPaddr2 resource agent failed to find the NIC for IPv6 addresses with a 128 netmask. This fix corrects that issue. (BZ# 1445628 ) portblock agent no longer yields excessive unnecessary messages Previously, the portblock agent would flood the /var/log/messages file with monitoring messages that provided no useful information. With this fix, the /var/log/messages file contains more limited logging output from the portblock agent. (BZ# 1457382 ) /var/run/resource-agents directory now persists across reboots Previously, the /var/run/resource-agents directory, created at installation of the resource-agents package, was not persistent across reboots. With this fix, the directory is now present after a reboot. (BZ# 1462802 )
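The change in cleanup behaviour described for BZ# 1508351 can be seen from the command line. The following is a brief sketch; the resource name my_webserver is a hypothetical example.
# Clean up only the failed-operation history of a single resource
pcs resource cleanup my_webserver
# Re-probe all resources on all nodes (the behaviour now provided by refresh)
pcs resource refresh
# Confirm that no failures or warnings remain
pcs status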
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/bug_fixes_clustering
5.5. Planning for Network and Physical Security
5.5. Planning for Network and Physical Security When deploying any Certificate System subsystem, the physical and network security of the subsystem instance has to be considered because of the sensitivity of the data generated and stored by the subsystems. 5.5.1. Considering Firewalls There are two considerations about using firewalls with Certificate System subsystems: Protecting sensitive subsystems from unauthorized access Allowing appropriate access to other subsystems and clients outside of the firewall The CA, KRA, and TKS are always placed inside a firewall because they contain critical information that can cause devastating security consequences if they are compromised. The TPS and OCSP can be placed outside the firewall. Likewise, other services and clients used by the Certificate System can be on a different machine outside the firewall. In that case, the local networks have to be configured to allow access between the subsystems behind the firewall and the services outside it. The LDAP database can be on a different server, even on a different network, than the subsystem which uses it. In this case, all LDAP ports ( 389 for LDAP and 636 for LDAPS, by default) need to be open in the firewall to allow traffic to the directory service. Without access to the LDAP database, all subsystem operations can fail. As part of configuring the firewalls, if iptables is enabled, then it must have configured policies to allow communication over the appropriate Certificate System ports. Configuring iptables is described in the Using and configuring firewalld guide . 5.5.2. Considering Physical Security and Location Because of the sensitive data they hold, consider keeping the CA, KRA, and TKS in a secure facility with adequate physical access restrictions. Just as network access to the systems needs to be restricted, the physical access also needs to be restricted. Along with finding a secure location, consider the proximity to the subsystem agents and administrators. Key recovery, for example, can require multiple agents to give approval; if these agents are spread out over a large geographical area, then the time differences may negatively impact the ability to retrieve keys. Plan the physical locations of the subsystems, then according to the locations of the agents and administrators who will manage the subsystem. 5.5.3. Planning Ports Each Certificate System server uses up to four ports: A non-secure HTTP port for end entity services that do not require authentication A secure HTTP port for end entity services, agent services, administrative console, and admin services that require authentication A Tomcat Server Management port A Tomcat AJP Connector Port All of the service pages and interfaces described in the Red Hat Certificate System User Interfaces chapter in the Red Hat Certificate System Administration Guide are connected to using the instance's URL and the corresponding port number. For example: To access the admin console, the URL specifies the admin port: All agent and admin functions require SSL/TLS client authentication. For requests from end entities, the Certificate System listens on both the SSL/TLS (encrypted) port and non-SSL/TLS ports. The ports are defined in the server.xml file. If a port is not used, it can be disabled in that file. For example: Whenever a new instance in installed, make sure that the new ports are unique on the host system. To verify that a port is available for use, check the appropriate file for the operating system. 
Port numbers for network-accessible services are usually maintained in the /etc/services file. On Red Hat Enterprise Linux, it is also helpful to confirm that a port is not already assigned by SELinux by running the semanage port -l command, which lists all ports that currently have an SELinux context. When a new subsystem instance is created, any number between 1 and 65535 can be specified as the secure port number.
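The port checks described above can be scripted. The following is a minimal sketch that assumes firewalld is in use and that 8443 is the secure port chosen for the instance; adjust the port numbers to match your deployment.
# Check whether the port is already registered for another service
grep -w 8443 /etc/services
# List SELinux port assignments that already use the port
semanage port -l | grep 8443
# Open the LDAPS port and the instance's secure port in the firewall
firewall-cmd --permanent --add-port=636/tcp --add-port=8443/tcp
firewall-cmd --reload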
[ "https://server.example.com:8443/ca/ee/ca", "pkiconsole https://server.example.com:8443/ca", "<Service name=\"Catalina\"> <!--Connector port=\"8080\" ... /--> unused standard port <Connector port=\"8443\" ... />" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/network-physical
Chapter 12. Configuring and setting up remote jobs
Chapter 12. Configuring and setting up remote jobs Red Hat Satellite supports remote execution of commands on hosts. Using remote execution, you can perform various tasks on multiple hosts simultaneously. 12.1. Remote execution in Red Hat Satellite With remote execution, you can run jobs on hosts remotely from Capsules using shell scripts or Ansible tasks and playbooks. Use remote execution for the following benefits in Satellite: Run jobs on multiple hosts at once. Use variables in your commands for more granular control over the jobs you run. Use host facts and parameters to populate the variable values. Specify custom values for templates when you run the command. Communication for remote execution occurs through Capsule Server, which means that Satellite Server does not require direct access to the target host, and can scale to manage many hosts. To use remote execution, you must define a job template. A job template is a command that you want to apply to remote hosts. You can execute a job template multiple times. Satellite uses ERB syntax job templates. For more information, see Appendix B, Template writing reference . By default, Satellite includes several job templates for shell scripts and Ansible. For more information, see Setting up Job Templates in Managing hosts . Additional resources See Executing a Remote Job in Managing hosts . 12.2. Remote execution workflow For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on your Capsule Server. Before you can use Ansible roles, you must import the roles into Satellite from the Capsule where they are installed. When you run a remote job on hosts, for every host, Satellite performs the following actions to find a remote execution Capsule to use. Satellite searches only for Capsules that have the remote execution feature enabled. Satellite finds the host's interfaces that have the Remote execution checkbox selected. Satellite finds the subnets of these interfaces. Satellite finds remote execution Capsules assigned to these subnets. From this set of Capsules, Satellite selects the Capsule that has the least number of running jobs. By doing this, Satellite ensures that the jobs load is balanced between remote execution Capsules. If you have enabled Prefer registered through Capsule for remote execution , Satellite runs the REX job using the Capsule the host is registered to. By default, Prefer registered through Capsule for remote execution is set to No . To enable it, in the Satellite web UI, navigate to Administer > Settings , and on the Content tab, set Prefer registered through Capsule for remote execution to Yes . This ensures that Satellite performs REX jobs on hosts by the Capsule to which they are registered to. If Satellite does not find a remote execution Capsule at this stage, and if the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. 
Satellite selects the most lightly loaded Capsule from the following types of Capsules that are assigned to the host: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule If Satellite does not find a remote execution Capsule at this stage, and if the Enable Global Capsule setting is enabled, Satellite selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. 12.3. Permissions for remote execution You can control which roles can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles: Remote Execution Manager : Can access all remote execution features and functionality. Remote Execution User : Can only run jobs. You can clone the Remote Execution User role and customize its filter for increased granularity. If you adjust the filter with the view_job_templates permission on a customized role, you can only see and trigger jobs based on matching job templates. You can use the view_hosts and view_smart_proxies permissions to limit which hosts or Capsules are visible to the role. The execute_template_invocation permission is a special permission that is checked immediately before execution of a job begins. This permission defines which job template you can run on a particular host. This allows for even more granularity when specifying permissions. You can run remote execution jobs against Red Hat Satellite and Capsule registered as hosts to Red Hat Satellite with the execute_jobs_on_infrastructure_hosts permission. Standard Manager and Site Manager roles have this permission by default. If you use either the Manager or Site Manager role, or if you use a custom role with the execute_jobs_on_infrastructure_hosts permission, you can execute remote jobs against registered Red Hat Satellite and Capsule hosts. For more information on working with roles and permissions, see Creating and Managing Roles in Administering Red Hat Satellite . The following example shows filters for the execute_template_invocation permission: Use the first line in this example to apply the Reboot template to one selected host. Use the second line to define a pool of hosts with names ending with .staging.example.com . Use the third line to bind the template with a host group. Note Permissions assigned to users with these roles can change over time. If you have already scheduled some jobs to run in the future, and the permissions change, this can result in execution failure because permissions are checked immediately before job execution. 12.4. Transport modes for remote execution You can configure your Satellite to use two different modes of transport for remote job execution. You can configure single Capsule to use either one mode or the other but not both. Push-based transport On Capsules in ssh mode, remote execution uses the SSH service to transport job details. This is the default transport mode. The SSH service must be enabled and active on the target hosts. The remote execution Capsule must have access to the SSH port on the target hosts. Unless you have a different setting, the standard SSH port is 22. This transport mode supports both Script and Ansible providers. 
Pull-based transport On Capsules in pull-mqtt mode, remote execution uses Message Queueing Telemetry Transport (MQTT) to initiate the job execution it receives from Satellite Server. The host subscribes to the MQTT broker on Capsule for job notifications using the yggdrasil pull client. After the host receives a notification from the MQTT broker, it pulls job details from Capsule over HTTPS, runs the job, and reports results back to Capsule. This transport mode supports the Script provider only. To use the pull-mqtt mode, you must enable it on Capsule Server and configure the pull client on hosts. Note If your Capsule already uses the pull-mqtt mode and you want to switch back to the ssh mode, run this satellite-installer command: Additional resources To enable pull mode on Capsule Server, see Configuring pull-based transport for remote execution in Installing Capsule Server . To enable pull mode on a registered host, continue with Section 12.5, "Configuring a host to use the pull client" . To enable pull mode on a new host, continue with the following: Section 2.1, "Creating a host in Red Hat Satellite" Section 4.3, "Registering hosts by using global registration" 12.5. Configuring a host to use the pull client For Capsules configured to use pull-mqtt mode, hosts can subscribe to remote jobs using the remote execution pull client. Hosts do not require an SSH connection from their Capsule Server. Prerequisites You have registered the host to Satellite. The Capsule through which the host is registered is configured to use pull-mqtt mode. For more information, see Configuring pull-based transport for remote execution in Installing Capsule Server . Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . The host can communicate with its Capsule over MQTT using port 1883 . The host can communicate with its Capsule over HTTPS. Procedure Install the katello-pull-transport-migrate package on your host: On Red Hat Enterprise Linux 9 and Red Hat Enterprise Linux 8 hosts: On Red Hat Enterprise Linux 7 hosts: The package installs foreman_ygg_worker and yggdrasil as dependencies, configures the yggdrasil client, and starts the pull client worker on the host. Verification Check the status of the yggdrasild service: 12.6. Creating a job template Use this procedure to create a job template. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Templates > Job templates . Click New Job Template . Click the Template tab, and in the Name field, enter a unique name for your job template. Select Default to make the template available for all organizations and locations. Create the template directly in the template editor or upload it from a text file by clicking Import . Optional: In the Audit Comment field, add information about the change. Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories in Managing hosts . Optional: In the Description Format field, enter a description template. For example, Install package %{package_name} . You can also use %{template_name} and %{job_category} in your template. 
From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks. Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete. Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab. Optional: Click Foreign input set to include other templates in this job. Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting. Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet . Optional: If you use the Ansible provider, click the Ansible tab. Select Enable Ansible Callback to allow hosts to send facts, which are used to create configuration reports, back to Satellite after a job finishes. Click the Location tab and add the locations where you want to use the template. Click the Organizations tab and add the organizations where you want to use the template. Click Submit to save your changes. You can extend and customize job templates by including other templates in the template syntax. For more information, see Template Writing Reference and Job Template Examples and Extensions in Managing hosts . CLI procedure To create a job template using a template-definition file, enter the following command: 12.7. Importing an Ansible playbook by name You can import Ansible playbooks by name to Satellite from collections installed on Capsule. Satellite creates a job template from the imported playbook and places the template in the Ansible Playbook - Imported job category. If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/ My_Namespace / My_Collection . Prerequisites Ansible plugin is enabled. Your Satellite account has a role that grants the import_ansible_playbooks permission. Procedure Fetch the available Ansible playbooks by using the following API request: Select the Ansible playbook you want to import and note its name. Import the Ansible playbook by its name: You get a notification in the Satellite web UI after the import completes. Next steps You can run the playbook by executing a remote job from the created job template. For more information, see Section 12.22, "Executing a remote job" . 12.8. Importing all available Ansible playbooks You can import all the available Ansible playbooks to Satellite from collections installed on Capsule. Satellite creates job templates from the imported playbooks and places the templates in the Ansible Playbook - Imported job category. If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/ My_Namespace / My_Collection . Prerequisites Ansible plugin is enabled. Your Satellite account has a role that grants the import_ansible_playbooks permission. Procedure Import the Ansible playbooks by using the following API request: You get a notification in the Satellite web UI after the import completes. Next steps You can run the playbooks by executing a remote job from the created job templates. For more information, see Section 12.22, "Executing a remote job" . 12.9. Configuring the fallback to any Capsule remote execution setting in Satellite You can enable the Fallback to Any Capsule setting to configure Satellite to search for remote execution Capsules from the list of Capsules that are assigned to hosts.
This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to Capsules that do not have the remote execution feature enabled. If the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded Capsule from the set of all Capsules assigned to the host, such as the following: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Fallback to Any Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Fallback to Any Capsule setting. To set the value to true , enter the following command: 12.10. Configuring the global Capsule remote execution setting in Satellite By default, Satellite searches for remote execution Capsules in hosts' organizations and locations regardless of whether Capsules are assigned to hosts' subnets or not. You can disable the Enable Global Capsule setting if you want to limit the search to the Capsules that are assigned to hosts' subnets. If the Enable Global Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Enable Global Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Enable Global Capsule setting. To set the value to true , enter the following command: 12.11. Setting an alternative directory for remote execution jobs in push mode By default, Satellite uses the /var/tmp directory on hosts for remote execution jobs in push mode. If the /var/tmp directory on your host is mounted with the noexec flag, Satellite cannot execute remote execution job scripts in this directory. You can use satellite-installer to set an alternative directory for executing remote execution jobs in push mode. Procedure On your host, create a new directory: Copy the SELinux context from the default /var/tmp directory: Configure your Satellite Server or Capsule Server to use the new directory: 12.12. Setting an alternative directory for remote execution jobs in pull mode By default, Satellite uses the /run directory on hosts for remote execution jobs in pull mode. If the /run directory on your host is mounted with the noexec flag, Satellite cannot execute remote execution job scripts in this directory. You can use the yggdrasild service to set an alternative directory for executing remote execution jobs in pull mode. Procedure On your host, perform these steps: Create a new directory: Access the yggdrasild service configuration: Specify the alternative directory by adding the following line to the configuration: Restart the yggdrasild service: 12.13. Altering the privilege elevation method By default, push-based remote execution uses sudo to switch from the SSH user to the effective user that executes the script on your host. In some situations, you might require to use another method, such as su or dzdo . 
You can globally configure an alternative method in your Satellite settings. Prerequisites Your user account has a role assigned that grants the view_settings and edit_settings permissions. If you want to use dzdo for Ansible jobs, ensure the community.general Ansible collection, which contains the required dzdo become plugin, is installed. For more information, see Installing collections in Ansible documentation . Procedure Navigate to Administer > Settings . Select the Remote Execution tab. Click the value of the Effective User Method setting. Select the new value. Click Submit . 12.14. Distributing SSH keys for remote execution For Capsules in ssh mode, remote execution connections are authenticated using SSH. The public SSH key from Capsule must be distributed to its attached hosts that you want to manage. Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22. Use one of the following methods to distribute the public SSH key from Capsule to target hosts: Section 12.15, "Distributing SSH keys for remote execution manually" . Section 12.17, "Using the Satellite API to obtain SSH keys for remote execution" . Section 12.18, "Configuring a Kickstart template to distribute SSH keys during provisioning" . For new Satellite hosts, you can deploy SSH keys to Satellite hosts during registration using the global registration template. For more information, see Registering a Host to Red Hat Satellite Using the Global Registration Template in Managing hosts . Satellite distributes SSH keys for the remote execution feature to the hosts provisioned from Satellite by default. If the hosts are running on Amazon Web Services, enable password authentication. For more information, see New User Accounts . 12.15. Distributing SSH keys for remote execution manually To distribute SSH keys manually, complete the following steps: Procedure Copy the SSH pub key from your Capsule to your target host: Repeat this step for each target host you want to manage. Verification To confirm that the key was successfully copied to the target host, enter the following command on Capsule: 12.16. Adding a passphrase to SSH key used for remote execution By default, Capsule uses a non-passphrase protected SSH key to execute remote jobs on hosts. You can protect the SSH key with a passphrase by following this procedure. Procedure On your Satellite Server or Capsule Server, use ssh-keygen to add a passphrase to your SSH key: steps Users now must use a passphrase when running remote execution jobs on hosts. 12.17. Using the Satellite API to obtain SSH keys for remote execution To use the Satellite API to download the public key from Capsule, complete this procedure on each target host. Procedure On the target host, create the ~/.ssh directory to store the SSH key: Download the SSH key from Capsule: Configure permissions for the ~/.ssh directory: Configure permissions for the authorized_keys file: 12.18. Configuring a Kickstart template to distribute SSH keys during provisioning You can add a remote_execution_ssh_keys snippet to your custom Kickstart template to deploy SSH keys to hosts during provisioning. Kickstart templates that Satellite ships include this snippet by default. Satellite copies the SSH key for remote execution to the systems during provisioning. Procedure To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use: 12.19. 
Configuring a keytab for Kerberos ticket granting tickets Use this procedure to configure Satellite to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets. Procedure Find the ID of the foreman-proxy user: Modify the umask value so that new files have the permissions 600 : Create the directory for the keytab: Create a keytab or copy an existing keytab to the directory: Change the directory owner to the foreman-proxy user: Ensure that the keytab file is read-only: Restore the SELinux context: 12.20. Configuring Kerberos authentication for remote execution You can use Kerberos authentication to establish an SSH connection for remote execution on Satellite hosts. Prerequisites Enroll Satellite Server on the Kerberos server Enroll the Satellite target host on the Kerberos server Configure and initialize a Kerberos user account for remote execution Ensure that the foreman-proxy user on Satellite has a valid Kerberos ticket granting ticket Procedure To install and enable Kerberos authentication for remote execution, enter the following command: To edit the default user for remote execution, in the Satellite web UI, navigate to Administer > Settings and click the Remote Execution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account. Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account. Verification To confirm that Kerberos authentication is ready to use, run a remote job on the host. For more information, see Executing a Remote Job in Managing hosts . 12.21. Setting up job templates Satellite provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Templates > Job templates . If you want to use a template without making changes, proceed to Executing a Remote Job in Managing hosts . You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone. Procedure To clone a template, in the Actions column, select Clone . Enter a unique name for the clone and click Submit to save the changes. Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in Managing hosts . Ansible considerations To create an Ansible job template, use the same procedure but use YAML syntax instead of ERB syntax. Begin the template with --- . You can embed an Ansible playbook YAML file into the job template body. You can also add ERB syntax to customize your YAML Ansible template. You can also import Ansible playbooks in Satellite. For more information, see Synchronizing Repository Templates in Managing hosts . Parameter variables At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host's edit page can be used as input parameters for job templates. 12.22. Executing a remote job You can execute a job that is based on a job template against one or more hosts. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Monitor > Jobs and click Run job . Select the Job category and the Job template you want to use, then click Next . Select hosts on which you want to run the job. If you do not select any hosts, the job will run on all hosts you can see in the current context.
Note If you want to select a host group and all of its subgroups, it is not sufficient to select the host group, as the job would only run on hosts directly in that group and not on hosts in subgroups. Instead, you must either select the host group and all of its subgroups or use this search query: Replace My_Host_Group with the name of the top-level host group. If required, provide inputs for the job template. Different templates have different inputs and some templates do not have any inputs. After entering all the required inputs, click Next . Optional: To configure advanced settings for the job, fill in the Advanced fields . To learn more about advanced settings, see Section 12.23, "Advanced settings in the job wizard" . Click Next . Schedule time for the job. To execute the job immediately, keep the pre-selected Immediate execution . To execute the job at a future time, select Future execution . To execute the job on a regular basis, select Recurring execution . Optional: If you selected future or recurring execution, select the Query type , otherwise click Next . Static query means that the job executes on the exact list of hosts that you provided. Dynamic query means that the list of hosts is evaluated just before the job is executed. If you entered the list of hosts based on some filter, the results can be different from when you first used that filter. Click Next after you have selected the query type. Optional: If you selected future or recurring execution, provide additional details: For Future execution , enter the Starts at date and time. You also have the option to select the Starts before date and time. If the job cannot start before that time, it will be canceled. For Recurring execution , select the start date and time, frequency, and the condition for ending the recurring job. You can choose the recurrence to never end, end at a certain time, or end after a given number of repetitions. You can also add Purpose - a special label for tracking the job. There can only be one active job with a given purpose at a time. Click Next after you have entered the required information. Review job details. You have the option to return to any part of the job wizard and edit the information. Click Submit to schedule the job for execution. CLI procedure Enter the following command on Satellite: Find the ID of the job template you want to use: Show the template details to see parameters required by your template: Execute a remote job with custom parameters: Replace My_Search_Query with the filter expression that defines hosts, for example "name ~ My_Pattern " . For more information about executing remote commands with hammer, enter hammer job-template --help and hammer job-invocation --help . 12.23. Advanced settings in the job wizard Some job templates require you to enter advanced settings. Some of the advanced settings are only visible to certain job templates. Below is the list of general advanced settings. SSH user A user to be used for connecting to the host through SSH. Effective user A user to be used for executing the job. By default it is the SSH user. If it differs from the SSH user, su or sudo, depending on your settings, is used to switch the accounts. If you set an effective user in the advanced settings, Ansible sets ansible_become_user to your input value and ansible_become to true . This means that if you use the parameters become: true and become_user: My_User within a playbook, these will be overwritten by Satellite.
If your SSH user and effective user are identical, Satellite does not overwrite the become_user . Therefore, you can set a custom become_user in your Ansible playbook. Description A description template for the job. Timeout to kill Time in seconds from the start of the job after which the job should be killed if it is not finished already. Time to pickup Time in seconds after which the job is canceled if it is not picked up by a client. This setting only applies to hosts using pull-mqtt transport. Password Is used if SSH authentication method is a password instead of the SSH key. Private key passphrase Is used if SSH keys are protected by a passphrase. Effective user password Is used if effective user is different from the ssh user. Concurrency level Defines the maximum number of jobs executed at once. This can prevent overload of system resources in a case of executing the job on a large number of hosts. Execution ordering Determines the order in which the job is executed on hosts. It can be alphabetical or randomized. 12.24. Using extended cron lines When scheduling a cron job with remote execution, you can use an extended cron line to specify the cadence of the job. The standard cron line contains five fields that specify minute, hour, day of the month, month, and day of the week. For example, 0 5 * * * stands for every day at 5 AM. The extended cron line provides the following features: You can use # to specify a concrete week day in a month For example: 0 0 * * mon#1 specifies first Monday of the month 0 0 * * fri#3,fri#4 specifies 3rd and 4th Fridays of the month 0 7 * * fri#-1 specifies the last Friday of the month at 07:00 0 7 * * fri#L also specifies the last Friday of the month at 07:00 0 23 * * mon#2,tue specifies the 2nd Monday of the month and every Tuesday, at 23:00 You can use % to specify every n-th day of the month For example: 9 0 * * sun%2 specifies every other Sunday at 00:09 0 0 * * sun%2+1 specifies every odd Sunday 9 0 * * sun%2,tue%3 specifies every other Sunday and every third Tuesday You can use & to specify that the day of the month has to match the day of the week For example: 0 0 30 * 1& specifies 30th day of the month, but only if it is Monday 12.25. Scheduling a recurring Ansible job for a host You can schedule a recurring job to run Ansible roles on hosts. Prerequisites Ensure you have the view_foreman_tasks , view_job_invocations , and view_recurring_logics permissions. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the target host on which you want to execute a remote job. On the Ansible tab, select Jobs . Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . Optional: View the scheduled Ansible job in host overview or by navigating to Ansible > Jobs . 12.26. Scheduling a recurring Ansible job for a host group You can schedule a recurring job to run Ansible roles on host groups. Procedure In the Satellite web UI, navigate to Configure > Host groups . In the Actions column, select Configure Ansible Job for the host group you want to schedule an Ansible roles run for. Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . 12.27. Monitoring jobs You can monitor the progress of a job while it is running. This can help in any troubleshooting that may be required. 
Ansible jobs run on batches of 100 hosts, so you cannot cancel a job running on a specific host. A job completes only after the Ansible playbook runs on all hosts in the batch. Procedure In the Satellite web UI, navigate to Monitor > Jobs . This page is automatically displayed if you triggered the job with the Execute now setting. To monitor scheduled jobs, navigate to Monitor > Jobs and select the job run you wish to inspect. On the Job page, click the Hosts tab. This displays the list of hosts on which the job is running. In the Host column, click the name of the host that you want to inspect. This displays the Detail of Commands page where you can monitor the job execution in real time. Click Back to Job at any time to return to the Job Details page. CLI procedure Find the ID of a job: Monitor the job output: Optional: To cancel a job, enter the following command: 12.28. Using Ansible provider for package and errata actions By default, Satellite is configured to use the Script provider templates for remote execution jobs. If you prefer using Ansible job templates for your remote jobs, you can configure Satellite to use them by default for remote execution features associated with them. Note Remember that Ansible job templates only work when remote execution is configured for ssh mode. Procedure In the Satellite web UI, navigate to Administer > Remote Execution Features . Find each feature whose name contains by_search . Change the job template for these features from Katello Script Default to Katello Ansible Default . Click Submit . Satellite now uses Ansible provider templates for remote execution jobs by which you can perform package and errata actions. This applies to job invocations from the Satellite web UI as well as by using hammer job-invocation create with the same remote execution features that you have changed. 12.29. Setting the job rate limit on Capsule You can limit the maximum number of active jobs on a Capsule at a time to prevent performance spikes. The job is active from the time Capsule first tries to notify the host about the job until the job is finished on the host. The job rate limit only applies to mqtt based jobs. Note The optimal maximum number of active jobs depends on the computing resources of your Capsule Server. By default, the maximum number of active jobs is unlimited. Procedure Set the maximum number of active jobs using satellite-installer : For example:
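To cap the Capsule at 200 active jobs at a time, you might run the following; the value 200 is illustrative only and should reflect the computing resources of your Capsule Server:
satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200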
[ "name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=ssh", "dnf install katello-pull-transport-migrate", "yum install katello-pull-transport-migrate", "systemctl status yggdrasild", "hammer job-template create --file \" Path_to_My_Template_File \" --job-category \" My_Category_Name \" --name \" My_Template_Name \" --provider-type SSH", "curl -X GET -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/fetch?proxy_id= My_capsule_ID", "curl -X PUT -H 'Content-Type: application/json' -d '{ \"playbook_names\": [\" My_Playbook_Name \"] }' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID", "curl -X PUT -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID", "hammer settings set --name=remote_execution_fallback_proxy --value=true", "hammer settings set --name=remote_execution_global_proxy --value=true", "mkdir /My_Remote_Working_Directory", "chcon --reference=/var/tmp /My_Remote_Working_Directory", "satellite-installer --foreman-proxy-plugin-remote-execution-script-remote-working-dir /My_Remote_Working_Directory", "mkdir /My_Remote_Working_Directory", "systemctl edit yggdrasild", "Environment=FOREMAN_YGG_WORKER_WORKDIR= /My_Remote_Working_Directory", "systemctl restart yggdrasild", "ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]", "ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]", "ssh-keygen -p -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy", "mkdir ~/.ssh", "curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys", "chmod 700 ~/.ssh", "chmod 600 ~/.ssh/authorized_keys", "<%= snippet 'remote_execution_ssh_keys' %>", "id -u foreman-proxy", "umask 077", "mkdir -p \"/var/kerberos/krb5/user/ My_User_ID \"", "cp My_Client.keytab /var/kerberos/krb5/user/ My_User_ID /client.keytab", "chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ My_User_ID \"", "chmod -wx \"/var/kerberos/krb5/user/ My_User_ID /client.keytab\"", "restorecon -RvF /var/kerberos/krb5", "satellite-installer --foreman-proxy-plugin-remote-execution-script-ssh-kerberos-auth true", "hostgroup_fullname ~ \" My_Host_Group *\"", "hammer settings set --name=remote_execution_global_proxy --value=false", "hammer job-template list", "hammer job-template info --id My_Template_ID", "hammer job-invocation create --inputs My_Key_1 =\" My_Value_1 \", My_Key_2 =\" My_Value_2 \",... --job-template \" My_Template_Name \" --search-query \" My_Search_Query \"", "hammer job-invocation list", "hammer job-invocation output --host My_Host_Name --id My_Job_ID", "hammer job-invocation cancel --id My_Job_ID", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit MAX_JOBS_NUMBER", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/Configuring_and_Setting_Up_Remote_Jobs_managing-hosts
Chapter 15. Integrating with Apache ActiveMQ
Chapter 15. Integrating with Apache ActiveMQ Overview If you are using Apache ActiveMQ as your JMS provider, the JNDI name of your destinations can be specified in a special format that dynamically creates JNDI bindings for queues or topics. This means that it is not necessary to configure the JMS provider in advance with the JNDI bindings for your queues or topics. The initial context factory The key to integrating Apache ActiveMQ with JNDI is the ActiveMQInitialContextFactory class. This class is used to create a JNDI InitialContext instance, which you can then use to access JMS destinations in the JMS broker. Example 15.1, "SOAP/JMS WSDL to connect to Apache ActiveMQ" shows SOAP/JMS WSDL extensions to create a JNDI InitialContext that is integrated with Apache ActiveMQ. Example 15.1. SOAP/JMS WSDL to connect to Apache ActiveMQ In Example 15.1, "SOAP/JMS WSDL to connect to Apache ActiveMQ" , the Apache ActiveMQ client connects to the broker port located at tcp://localhost:61616 . Looking up the connection factory As well as creating a JNDI InitialContext instance, you must specify the JNDI name that is bound to a javax.jms.ConnectionFactory instance. In the case of Apache ActiveMQ, there is a predefined binding in the InitialContext instance, which maps the JNDI name ConnectionFactory to an ActiveMQConnectionFactory instance. Example 15.2, "SOAP/JMS WSDL for specifying the Apache ActiveMQ connection factory" shows the SOAP/JMS extension element for specifying the Apache ActiveMQ connection factory. Example 15.2. SOAP/JMS WSDL for specifying the Apache ActiveMQ connection factory Syntax for dynamic destinations To access queues or topics dynamically, specify the destination's JNDI name as a JNDI composite name in either of the following formats: QueueName and TopicName are the names that the Apache ActiveMQ broker uses. They are not abstract JNDI names. Example 15.3, "WSDL port specification with a dynamically created queue" shows a WSDL port that uses a dynamically created queue. Example 15.3. WSDL port specification with a dynamically created queue When the application attempts to open the JMS connection, Apache ActiveMQ will check to see if a queue with the JNDI name greeter.request.queue exists. If it does not exist, it will create a new queue and bind it to the JNDI name greeter.request.queue .
[ "<soapjms:jndiInitialContextFactory> org.apache.activemq.jndi.ActiveMQInitialContextFactory </soapjms:jndiInitialContextFactory> <soapjms:jndiURL>tcp://localhost:61616</soapjms:jndiURL>", "<soapjms:jndiConnectionFactoryName> ConnectionFactory </soapjms:jndiConnectionFactoryName>", "dynamicQueues/ QueueName dynamicTopics/ TopicName", "<service name=\"JMSService\"> <port binding=\"tns:GreeterBinding\" name=\"JMSPort\"> <jms:address jndiConnectionFactoryName=\"ConnectionFactory\" jndiDestinationName=\"dynamicQueues/greeter.request.queue\" > <jms:JMSNamingProperty name=\"java.naming.factory.initial\" value=\"org.activemq.jndi.ActiveMQInitialContextFactory\" /> <jms:JMSNamingProperty name=\"java.naming.provider.url\" value=\"tcp://localhost:61616\" /> </jms:address> </port> </service>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/cxfamqintegration
10.4. Problems After Installation
10.4. Problems After Installation 10.4.1. Trouble With the Graphical GRUB Screen on an x86-based System? If you are experiencing problems with GRUB, you may need to disable the graphical boot screen. To do this, become the root user and edit the /boot/grub/grub.conf file. Within the grub.conf file, comment out the line which begins with splashimage by inserting the # character at the beginning of the line. Press Enter to exit the editing mode. Once the boot loader screen has returned, type b to boot the system. Once you reboot, the grub.conf file is reread and any changes you have made take effect. You may re-enable the graphical boot screen by uncommenting (or adding) the above line back into the grub.conf file.
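If you prefer to make this edit non-interactively, the following is one way to do it, assuming the line begins with splashimage as described above; it keeps a backup copy of the original file:
# Comment out the splashimage line; the unmodified file is saved as grub.conf.bak
sed -i.bak 's/^splashimage/#splashimage/' /boot/grub/grub.conf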
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch10s04
6.2. VDB Definition: The VDB Element
6.2. VDB Definition: The VDB Element Attributes name The name of the VDB. The VDB name is referenced through the driver or datasource at connection time. version The version of the VDB (should be a positive integer). This determines the deployed directory location (see Name), and provides an explicit versioning mechanism to the VDB name. Property Elements cache-metadata Can be "true" or "false". If "false", JBoss Data Virtualization will obtain metadata once for every launch of the VDB. "true" will save a file containing the metadata into the EAP_HOME / MODE /data directory. Defaults to "false" for -vdb.xml deployments, otherwise "true". query-timeout Sets the default query timeout in milliseconds for queries executed against this VDB. 0 indicates that the server default query timeout should be used. Defaults to 0. Will have no effect if the server default query timeout is set to a lesser value. Note that clients can still set their own timeouts that will be managed on the client side. lib Set to a list of modules for the VDB classpath for user defined function loading. See also Support for Non-Pushdown User Defined Functions in Red Hat JBoss Data Virtualization Development Guide: Server Development . security-domain Set to the security domain to use if a specific security domain is applicable to the VDB. Otherwise the security domain list from the transport will be used. Important An administrator needs to configure a matching "custom-security" login module in the standalone.xml configuration file before the VDB is deployed. connection.XXX This is for use by the ODBC transport and OData. They use it to set the default connection/execution properties. Note that the properties are set on the connection after it has been established. authentication-type Authentication type of the configured security domain. Allowed values currently are GSS and USERPASSWORD. The default is set on the transport (typically USERPASSWORD). password-pattern Regular expression matched against the connecting user's name that determines if USERPASSWORD authentication is used. password-pattern takes precedence over authentication-type. The default is authentication-type. gss-pattern Regular expression matched against the connecting user's name that determines if GSS authentication is used. gss-pattern takes precedence over password-pattern. The default is password-pattern. model.visible Used to override the visibility of imported VDB models, where model is the name of the imported model. include-pg-metadata By default, PG metadata is always added to the VDB unless the system property org.teiid.addPGMetadata is set to false. This property enables adding PG metadata per VDB. Note that if you are using ODBC to access your VDB, the VDB must include PG metadata. lazy-invalidate By default, TTL expiration is invalidating. Setting lazy-invalidate to true makes TTL refreshes non-invalidating.
[ "<property name=\"security-domain\" value=\"custom-security\" />", "<property name=\"connection.partialResultsMode\" value=\"true\" />" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/VDB_Definition_The_vbd_Element
Chapter 9. Using the web console for managing virtual machines
Chapter 9. Using the web console for managing virtual machines To manage virtual machines in a graphical interface, you can use the Virtual Machines pane in the web console . The following sections describe the web console's virtualization management capabilities and provide instructions for using them. 9.1. Overview of virtual machine management using the web console The web console is a web-based interface for system administration. With the installation of a web console plug-in, the web console can be used to manage virtual machines (VMs) on the servers to which the web console can connect. It provides a graphical view of VMs on a host system to which the web console can connect, and allows monitoring system resources and adjusting configuration with ease. Using the web console for VM management, you can do the following: Create and delete VMs Install operating systems on VMs Run and shut down VMs View information about VMs Create and attach disks to VMs Configure virtual CPU settings for VMs Manage virtual network interfaces Interact with VMs using VM consoles 9.2. Setting up the web console to manage virtual machines Before using the web console to manage VMs, you must install the web console virtual machine plug-in. Prerequisites Ensure that the web console is installed on your machine. Procedure Install the cockpit-machines plug-in. If the installation is successful, Virtual Machines appears in the web console side menu. 9.3. Creating virtual machines and installing guest operating systems using the web console The following sections provide information on how to use the web console to create virtual machines (VMs) and install operating systems on VMs. 9.3.1. Creating virtual machines using the web console To create a VM on the host machine to which the web console is connected, follow the instructions below. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Before creating VMs, consider the amount of system resources you need to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values may vary significantly depending on the intended tasks and workload of the VMs. A locally available operating system (OS) installation source, which can be one of the following: An ISO image of an installation medium A disk image of an existing guest installation Procedure Click Create VM in the Virtual Machines interface of the web console. The Create New Virtual Machine dialog appears. Enter the basic configuration of the virtual machine you want to create. Connection - The connection to the host to be used by the virtual machine. Name - The name of the virtual machine. Installation Source Type - The type of the installation source: Filesystem, URL Installation Source - The path or URL that points to the installation source. OS Vendor - The vendor of the virtual machine's operating system. Operating System - The virtual machine's operating system. Memory - The amount of memory with which to configure the virtual machine. Storage Size - The amount of storage space with which to configure the virtual machine. Immediately Start VM - Whether or not the virtual machine will start immediately after it is created. Click Create . The virtual machine is created. If the Immediately Start VM checkbox is selected, the VM will immediately start and begin installing the guest operating system.
You must install the operating system the first time the virtual machine is run. Additional resources For information on installing an operating system on a virtual machine, see Section 9.3.2, "Installing operating systems using the the web console" . 9.3.2. Installing operating systems using the the web console The first time a virtual machine loads, you must install an operating system on the virtual machine. Prerequisites Before using the the web console to manage virtual machines, you must install the web console virtual machine plug-in. A VM on which to install an operating system. Procedure Click Install . The installation routine of the operating system runs in the virtual machine console. Note If the Immediately Start VM checkbox in the Create New Virtual Machine dialog is checked, the installation routine of the operating system starts automatically when the virtual machine is created. Note If the installation routine fails, the virtual machine must be deleted and recreated. 9.4. Deleting virtual machines using the the web console You can delete a virtual machine and its associated storage files from the host to which the the web console is connected. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure In the Virtual Machines interface of the the web console, click the name of the VM you want to delete. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Delete . A confirmation dialog appears. [Optional] To delete all or some of the storage files associated with the virtual machine, select the checkboxes to the storage files you want to delete. Click Delete . The virtual machine and any selected associated storage files are deleted. 9.5. Powering up and powering down virtual machines using the the web console Using the the web console, you can run , shut down , and restart virtual machines. You can also send a non-maskable interrupt to a virtual machine that is unresponsive. 9.5.1. Powering up virtual machines in the the web console If a VM is in the shut off state, you can start it using the the web console. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine you want to start. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Run . The virtual machine starts. Additional resources For information on shutting down a virtual machine, see Section 9.5.2, "Powering down virtual machines in the the web console" . For information on restarting a virtual machine, see Section 9.5.3, "Restarting virtual machines using the the web console" . For information on sending a non-maskable interrupt to a virtual machine, see Section 9.5.4, "Sending non-maskable interrupts to VMs using the the web console" . 9.5.2. Powering down virtual machines in the the web console If a virtual machine is in the running state, you can shut it down using the the web console. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine you want to shut down. 
The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Shut Down . The virtual machine shuts down. Note If the virtual machine does not shut down, click the arrow to the Shut Down button and select Force Shut Down . Additional resources For information on starting a virtual machine, see Section 9.5.1, "Powering up virtual machines in the the web console" . For information on restarting a virtual machine, see Section 9.5.3, "Restarting virtual machines using the the web console" . For information on sending a non-maskable interrupt to a virtual machine, see Section 9.5.4, "Sending non-maskable interrupts to VMs using the the web console" . 9.5.3. Restarting virtual machines using the the web console If a virtual machine is in the running state, you can restart it using the the web console. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine you want to restart. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Restart . The virtual machine shuts down and restarts. Note If the virtual machine does not restart, click the arrow to the Restart button and select Force Restart . Additional resources For information on starting a virtual machine, see Section 9.5.1, "Powering up virtual machines in the the web console" . For information on shutting down a virtual machine, see Section 9.5.2, "Powering down virtual machines in the the web console" . For information on sending a non-maskable interrupt to a virtual machine, see Section 9.5.4, "Sending non-maskable interrupts to VMs using the the web console" . 9.5.4. Sending non-maskable interrupts to VMs using the the web console Sending a non-maskable interrupt (NMI) may cause an unresponsive running VM to respond or shut down. For example, you can send the Ctrl + Alt + Del NMI to a VM that is not responsive. Prerequisites Before using the the web console to manage VMs, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine to which you want to send an NMI. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click the arrow to the Shut Down button and select Send Non-Maskable Interrupt . An NMI is sent to the virtual machine. Additional resources For information on starting a virtual machine, see Section 9.5.1, "Powering up virtual machines in the the web console" . For information on restarting a virtual machine, see Section 9.5.3, "Restarting virtual machines using the the web console" . For information on shutting down a virtual machine, see Section 9.5.2, "Powering down virtual machines in the the web console" . 9.6. Viewing virtual machine information using the the web console Using the the web console, you can view information about the virtual storage and VMs to which the web console is connected. 9.6.1. Viewing a virtualization overview in the the web console The following describes how to view an overview of the available virtual storage and the VMs to which the web console session is connected. 
Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view information about the available storage and the virtual machines to which the web console is attached. Click Virtual Machines in the web console's side menu. Information about the available storage and the virtual machines to which the web console session is connected appears. The information includes the following: Storage Pools - The number of storage pools that can be accessed by the web console and their state. Networks - The number of networks that can be accessed by the web console and their state. Name - The name of the virtual machine. Connection - The type of libvirt connection, system or session. State - The state of the virtual machine. Additional resources For information on viewing detailed information about the storage pools the web console session can access, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the the web console" . 9.6.2. Viewing storage pool information using the the web console The following describes how to view detailed storage pool information about the storage pools that the web console session can access. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view storage pool information: Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears showing a list of configured storage pools. The information includes the following: Name - The name of the storage pool. Size - The size of the storage pool. Connection - The connection used to access the storage pool. State - The state of the storage pool. Click a row with the name of the storage whose information you want to see. The row expands to reveal the Overview pane with following information about the selected storage pool: Path - The path to the storage pool. Persistent - Whether or not the storage pool is persistent. Autostart - Whether or not the storage pool starts automatically. Type - The storage pool type. To view a list of storage volumes created from the storage pool, click Storage Volumes . The Storage Volumes pane appears showing a list of configured storage volumes with their sizes and the amount of space used. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . 
For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the the web console" . 9.6.3. Viewing basic virtual machine information in the the web console The following describes how to view basic information about a selected virtual machine to which the web console session is connected. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view basic information about a selected virtual machine. Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Note If another tab is selected, click Overview . The information includes the following: Memory - The amount of memory assigned to the virtual machine. Emulated Machine - The machine type emulated by the virtual machine. vCPUs - The number of virtual CPUs configured for the virtual machine. Note To see more detailed virtual CPU information and configure the virtual CPUs configured for a virtual machine, see Section 9.7, "Managing virtual CPUs using the the web console" . Boot Order - The boot order configured for the virtual machine. CPU Type - The architecture of the virtual CPUs configured for the virtual machine. Autostart - Whether or not autostart is enabled for the virtual machine. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the the web console" . 9.6.4. Viewing virtual machine resource usage in the the web console The following describes how to view resource usage information about a selected virtual machine to which the web console session is connected. 
Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view information about the memory and virtual CPU usage of a selected virtual machine. Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Usage . The Usage pane appears with information about the memory and virtual CPU usage of the virtual machine. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the the web console" . 9.6.5. Viewing virtual machine disk information in the the web console The following describes how to view disk information about a virtual machine to which the web console session is connected. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view disk information about a selected virtual machine. Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks . The Disks pane appears with information about the disks assigned to the virtual machine. The information includes the following: Device - The device type of the disk. Target - The controller type of the disk. Used - The amount of the disk that is used. Capacity - The size of the disk. Bus - The bus type of the disk. Readonly - Whether or not the disk is read-only. Source - The disk device or file. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . 
For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the the web console" . 9.6.6. Viewing virtual NIC information in the the web console The following describes how to view information about the virtual network interface cards (vNICs) on a selected virtual machine: Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view information about the virtual network interface cards (NICs) on a selected virtual machine. Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks . The Networks pane appears with information about the virtual NICs configured for the virtual machine. The information includes the following: Type - The type of network interface for the virtual machine. Types include direct, network, bridge, ethernet, hostdev, mcast, user, and server. Model type - The model of the virtual NIC. MAC Address - The MAC address of the virtual NIC. Source - The source of the network interface. This is dependent on the network type. State - The state of the virtual NIC. To edit the virtual network settings, Click Edit . The Virtual Network Interface Settings. Change the Network Type and Model. Click Save . The network interface is modified. Note When the virtual machine is running, changes to the virtual network interface settings only take effect after the virtual machine is stopped and restarted. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the the web console" . For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the the web console" . 9.7. Managing virtual CPUs using the the web console Using the the web console, you can manage the virtual CPUs configured for the virtual machines to which the web console is connected. You can view information about the virtual machines. You can also configure the virtual CPUs for virtual machines. Prerequisites To be able to use the the web console to manage virtual machines, you must install the web console virtual machine plug-in . 
Procedure Click a row with the name of the virtual machine for which you want to view and configure virtual CPU parameters. The row expands to reveal the Overview pane with basic information about the selected virtual machine, including the number of virtual CPUs, and controls for shutting down and deleting the virtual machine. Click the number of vCPUs in the Overview pane. The vCPU Details dialog appears. Note The warning in the vCPU Details dialog only appears after the virtual CPU settings are changed. Configure the virtual CPUs for the selected virtual machine. vCPU Count - Enter the number of virtual CPUs for the virtual machine. Note The vCPU count cannot be greater than the vCPU Maximum. vCPU Maximum - Enter the maximum number of virtual CPUs that can be configured for the virtual machine. Sockets - Select the number of sockets to expose to the virtual machine. Cores per socket - Select the number of cores for each socket to expose to the virtual machine. Threads per core - Select the number of threads for each core to expose to the virtual machine. Click Apply . The virtual CPUs for the virtual machine are configured. Note When the virtual machine is running, changes to the virtual CPU settings only take effect after the virtual machine is stopped and restarted. 9.8. Managing virtual machine disks using the web console Using the web console, you can manage the disks configured for the virtual machines to which the web console is connected. You can: View information about disks. Create and attach new virtual disks to virtual machines. Attach existing virtual disks to virtual machines. Detach virtual disks from virtual machines. 9.8.1. Viewing virtual machine disk information in the web console The following describes how to view disk information about a virtual machine to which the web console session is connected. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view disk information about a selected virtual machine: Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks . The Disks pane appears with information about the disks assigned to the virtual machine. The information includes the following: Device - The device type of the disk. Target - The controller type of the disk. Used - The amount of the disk that is used. Capacity - The size of the disk. Bus - The bus type of the disk. Readonly - Whether or not the disk is read-only. Source - The disk device or file. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the web console" . 
For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the web console" . For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the web console" . 9.8.2. Adding new disks to virtual machines using the web console You can add new disks to virtual machines by creating a new disk (a storage volume in a storage pool) and attaching it to a virtual machine using the web console. Note You can only use directory-type storage pools when creating new disks for virtual machines using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine for which you want to create and attach a new disk. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks . The Disks pane appears with information about the disks configured for the virtual machine. Click Add Disk . The Add Disk dialog appears. Ensure that the Create New option button is selected. Configure the new disk. Pool - Select the storage pool from which the virtual disk will be created. Target - Select a target for the virtual disk that will be created. Name - Enter a name for the virtual disk that will be created. Size - Enter the size and select the unit (MiB or GiB) of the virtual disk that will be created. Format - Select the format for the virtual disk that will be created. Supported types: qcow2, raw Persistence - Whether or not the virtual disk will be persistent. If checked, the virtual disk is persistent. If not checked, the virtual disk is not persistent. Note Transient disks can only be added to VMs that are running. Click Add . The virtual disk is created and connected to the virtual machine. Additional resources For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.8.1, "Viewing virtual machine disk information in the web console" . For information on attaching existing disks to virtual machines, see Section 9.8.3, "Attaching existing disks to virtual machines using the web console" . For information on detaching disks from virtual machines, see Section 9.8.4, "Detaching disks from virtual machines" . 9.8.3. Attaching existing disks to virtual machines using the web console The following describes how to attach existing disks to a virtual machine using the web console. Note You can only attach directory-type storage pools to virtual machines using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine to which you want to attach an existing disk. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks . The Disks pane appears with information about the disks configured for the virtual machine. Click Add Disk . The Add Disk dialog appears. Click the Use Existing option button. 
The appropriate configuration fields appear in the Add Disk dialog. Configure the disk for the virtual machine. Pool - Select the storage pool from which the virtual disk will be attached. Target - Select a target for the virtual disk that will be attached. Volume - Select the storage volume that will be attached. Persistence - Check to make the virtual disk persistent. Clear to make the virtual disk transient. Click Add . The selected virtual disk is attached to the virtual machine. Additional resources For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.8.1, "Viewing virtual machine disk information in the web console" . For information on creating new disks and attaching them to virtual machines, see Section 9.8.2, "Adding new disks to virtual machines using the web console" . For information on detaching disks from virtual machines, see Section 9.8.4, "Detaching disks from virtual machines" . 9.8.4. Detaching disks from virtual machines The following describes how to detach disks from virtual machines using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine from which you want to detach an existing disk. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks . The Disks pane appears with information about the disks configured for the virtual machine. Click the Remove control next to the disk you want to detach from the virtual machine. The virtual disk is detached from the virtual machine. Caution There is no confirmation before detaching the disk from the virtual machine. Additional resources For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.8.1, "Viewing virtual machine disk information in the web console" . For information on creating new disks and attaching them to virtual machines, see Section 9.8.2, "Adding new disks to virtual machines using the web console" . For information on attaching existing disks to virtual machines, see Section 9.8.3, "Attaching existing disks to virtual machines using the web console" . 9.9. Using the web console for managing virtual machine vNICs Using the web console, you can manage the virtual network interface cards (vNICs) configured for the virtual machines to which the web console is connected. You can view information about vNICs. You can also connect and disconnect vNICs from virtual machines. 9.9.1. Viewing virtual NIC information in the web console The following describes how to view information about the virtual network interface cards (vNICs) on a selected virtual machine: Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure To view information about the virtual network interface cards (NICs) on a selected virtual machine: Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks . 
The Networks pane appears with information about the virtual NICs configured for the virtual machine. The information includes the following: Type - The type of network interface for the virtual machine. Types include direct, network, bridge, ethernet, hostdev, mcast, user, and server. Model type - The model of the virtual NIC. MAC Address - The MAC address of the virtual NIC. Source - The source of the network interface. This is dependent on the network type. State - The state of the virtual NIC. To edit the virtual network settings, click Edit . The Virtual Network Interface Settings dialog appears. Change the Network Type and Model. Click Save . The network interface is modified. Note When the virtual machine is running, changes to the virtual network interface settings only take effect after the virtual machine is stopped and restarted. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the web console" . For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the web console" . For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the web console" . For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the web console" . For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the web console" . 9.9.2. Connecting virtual NICs in the web console Using the web console, you can reconnect disconnected virtual network interface cards (NICs) configured for a selected virtual machine. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine whose virtual NIC you want to connect. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks . The Networks pane appears with information about the virtual NICs configured for the virtual machine. Click Plug in the row of the virtual NIC you want to connect. The selected virtual NIC connects to the virtual machine. 9.9.3. Disconnecting virtual NICs in the web console Using the web console, you can disconnect the virtual network interface cards (NICs) connected to a selected virtual machine. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine whose virtual NIC you want to disconnect. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks . The Networks pane appears with information about the virtual NICs configured for the virtual machine. Click Unplug in the row of the virtual NIC you want to disconnect. 
The selected virtual NIC disconnects from the virtual machine. 9.10. Interacting with virtual machines using the web console To interact with a VM in the web console, you need to connect to the VM's console. Using the web console, you can view the virtual machine's consoles. These include both graphical and serial consoles. To interact with the VM's graphical interface in the web console, use the graphical console in the web console . To interact with the VM's graphical interface in a remote viewer, use the graphical console in remote viewers . To interact with the VM's CLI in the web console, use the serial console in the web console . 9.10.1. Viewing the virtual machine graphical console in the web console You can view the graphical console of a selected virtual machine in the web console. The virtual machine console shows the graphical output of the virtual machine. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Ensure that both the host and the VM support a graphical interface. Procedure Click a row with the name of the virtual machine whose graphical console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles . The graphical console appears in the web interface. You can interact with the virtual machine console using the mouse and keyboard in the same manner as you interact with a real machine. The display in the virtual machine console reflects the activities being performed on the virtual machine. Note The server on which the web console is running can intercept specific key combinations, such as Ctrl + Alt + F1 , preventing them from being sent to the virtual machine. To send such key combinations, click the Send key menu and select the key sequence to send. For example, to send the Ctrl + Alt + F1 combination to the virtual machine, click the Send key menu and select the Ctrl+Alt+F1 menu entry. Additional Resources For details on viewing the graphical console in a remote viewer, see Section 9.10.2, "Viewing virtual machine consoles in remote viewers using the web console" . For details on viewing the serial console in the web console, see Section 9.10.3, "Viewing the virtual machine serial console in the web console" . 9.10.2. Viewing virtual machine consoles in remote viewers using the web console You can view the virtual machine's consoles in a remote viewer. The connection can be made by the web console or manually. 9.10.2.1. Viewing the graphical console in a remote viewer You can view the graphical console of a selected virtual machine in a remote viewer. The virtual machine console shows the graphical output of the virtual machine. Note You can launch Virt Viewer from within the web console. Other VNC and SPICE remote viewers can be launched manually. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Ensure that both the host and the VM support a graphical interface. Before you can view the graphical console in Virt Viewer, Virt Viewer must be installed on the machine to which the web console is connected. To view information on installing Virt Viewer, select the Graphics Console in Desktop Viewer Console Type and click More Information in the Consoles window. 
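If the web console cannot launch a remote viewer for you, the graphical console address can also be retrieved from the command line. The following is a hedged sketch, assuming a running virtual machine named testguest (a hypothetical name) and the virt-viewer package installed on the client machine:

virsh domdisplay testguest
remote-viewer spice://example-host:5900

virsh domdisplay prints the SPICE or VNC URI of the virtual machine's graphical console; that URI (spice://example-host:5900 above is only a placeholder) can then be passed to remote-viewer or another compatible viewer.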
Note Some browser extensions and plug-ins do not allow the web console to open Virt Viewer. Procedure Click a row with the name of the virtual machine whose graphical console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles . The graphical console appears in the web interface. Select the Graphics Console in Desktop Viewer Console Type. Click Launch Remote Viewer . The graphical console appears in Virt Viewer. You can interact with the virtual machine console using the mouse and keyboard in the same manner as you interact with a real machine. The display in the virtual machine console reflects the activities being performed on the virtual machine. Note The server on which the web console is running can intercept specific key combinations, such as Ctrl + Alt + F1 , preventing them from being sent to the virtual machine. To send such key combinations, click the Send key menu and select the key sequence to send. For example, to send the Ctrl + Alt + F1 combination to the virtual machine, click the Send key menu and select the Ctrl+Alt+F1 menu entry. Additional Resources For details on viewing the graphical console in a remote viewer using a manual connection, see Section 9.10.2.2, "Viewing the graphical console in a remote viewer connecting manually" . For details on viewing the graphical console in the web console, see Section 9.10.1, "Viewing the virtual machine graphical console in the web console" . For details on viewing the serial console in the web console, see Section 9.10.3, "Viewing the virtual machine serial console in the web console" . 9.10.2.2. Viewing the graphical console in a remote viewer connecting manually You can view the graphical console of a selected virtual machine in a remote viewer. The virtual machine console shows the graphical output of the virtual machine. The web interface provides the information necessary to launch any SPICE or VNC viewer to view the virtual machine console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Before you can view the graphical console in a remote viewer, ensure that a SPICE or VNC viewer application is installed on the machine to which the web console is connected. To view information on installing Virt Viewer, select the Graphics Console in Desktop Viewer Console Type and click More Information in the Consoles window. Procedure You can view the virtual machine graphics console in any SPICE or VNC viewer application. Click a row with the name of the virtual machine whose graphical console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles . The graphical console appears in the web interface. Select the Graphics Console in Desktop Viewer Console Type. The Manual Connection information appears on the right side of the pane. Enter the information in the SPICE or VNC viewer. For more information, see the documentation for the SPICE or VNC viewer. Additional Resources For details on viewing the graphical console in a remote viewer using the web console to make the connection, see Section 9.10.2.1, "Viewing the graphical console in a remote viewer" . 
For details on viewing the graphical console in the web console, see Section 9.10.1, "Viewing the virtual machine graphical console in the web console" . For details on viewing the serial console in the web console, see Section 9.10.3, "Viewing the virtual machine serial console in the web console" . 9.10.3. Viewing the virtual machine serial console in the web console You can view the serial console of a selected virtual machine in the web console. This is useful when the host machine or the VM is not configured with a graphical interface. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in . Procedure Click a row with the name of the virtual machine whose serial console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles . The graphical console appears in the web interface. Select the Serial Console Console Type. The serial console appears in the web interface. You can disconnect and reconnect the serial console from the virtual machine. To disconnect the serial console from the virtual machine, click Disconnect . To reconnect the serial console to the virtual machine, click Reconnect . Additional Resources For details on viewing the graphical console in the web console, see Section 9.10.1, "Viewing the virtual machine graphical console in the web console" . For details on viewing the graphical console in a remote viewer, see Section 9.10.2, "Viewing virtual machine consoles in remote viewers using the web console" . 9.11. Creating storage pools using the web console You can create storage pools using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. If the web console plug-in is not installed, see Section 9.2, "Setting up the web console to manage virtual machines" for information about installing the web console virtual machine plug-in. Procedure Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears showing a list of configured storage pools. Click Create Storage Pool . The Create Storage Pool dialog appears. Enter the following information in the Create Storage Pool dialog: Connection - The connection to the host to be used by the storage pool. Name - The name of the storage pool. Type - The type of the storage pool: Filesystem Directory, Network File System Target Path - The storage pool path on the host's file system. Startup - Whether or not the storage pool starts when the host boots. Click Create . The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools. Related information For information on viewing information about storage pools using the web console, see Section 9.6.2, "Viewing storage pool information using the web console" .
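A directory-type storage pool similar to the one created in the procedure above can also be defined from the command line. The following is a minimal sketch rather than part of the web console workflow; the pool name pool_dir and the target path /var/lib/libvirt/images/pool_dir are hypothetical examples:

virsh pool-define-as pool_dir dir --target /var/lib/libvirt/images/pool_dir
virsh pool-build pool_dir
virsh pool-start pool_dir
virsh pool-autostart pool_dir

The pool-autostart step corresponds to the Startup option in the Create Storage Pool dialog, which controls whether the pool starts when the host boots.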
[ "yum info cockpit Installed Packages Name : cockpit [...]", "yum install cockpit-machines" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/managing_systems_using_the_rhel_7_web_console/using-the-rhel-8-web-console-for-managing-vms_system-management-using-the-RHEL-7-web-console
6.5. Configuring Fence Devices
6.5. Configuring Fence Devices Configuring fence devices consists of creating, updating, and deleting fence devices for the cluster. You must create and name the fence devices in a cluster before you can configure fencing for the nodes in the cluster. For information on configuring fencing for the individual nodes in the cluster, see Section 6.7, "Configuring Fencing for Cluster Members" . Before configuring your fence devices, you may want to modify some of the fence daemon properties for your system from the default values. The values you configure for the fence daemon are general values for the cluster. The general fencing properties for the cluster you may want to modify are summarized as follows: The post_fail_delay attribute is the number of seconds the fence daemon ( fenced ) waits before fencing a node (a member of the fence domain) after the node has failed. The post_fail_delay default value is 0 . Its value may be varied to suit cluster and network performance. The post_join_delay attribute is the number of seconds the fence daemon ( fenced ) waits before fencing a node after the node joins the fence domain. The post_join_delay default value is 6 . A typical setting for post_join_delay is between 20 and 30 seconds, but can vary according to cluster and network performance. You can reset the values of the post_fail_delay and post_join_delay attributes with the --setfencedaemon option of the ccs command. Note, however, that executing the ccs --setfencedaemon command overwrites all existing fence daemon properties that have been explicitly set and restores them to their default values. For example, to configure a value for the post_fail_delay attribute, execute the following command. This command will overwrite the values of all other existing fence daemon properties that you have set with this command and restore them to their default values. To configure a value for the post_join_delay attribute, execute the following command. This command will overwrite the values of all other existing fence daemon properties that you have set with this command and restore them to their default values. To configure a value for both the post_join_delay attribute and the post_fail_delay attribute, execute the following command: Note For more information about the post_join_delay and post_fail_delay attributes as well as the additional fence daemon properties you can modify, see the fenced (8) man page and see the cluster schema at /usr/share/cluster/cluster.rng , and the annotated schema at /usr/share/doc/cman-X.Y.ZZ/cluster_conf.html . To configure a fence device for a cluster, execute the following command: For example, to configure an APC fence device named my_apc in the configuration file on the cluster node node1 , with an IP address of apc_ip_example , a login of login_example , and a password of password_example , execute the following command: The following example shows the fencedevices section of the cluster.conf configuration file after you have added this APC fence device: When configuring fence devices for a cluster, you may find it useful to see a listing of available devices for your cluster and the options available for each device. You may also find it useful to see a listing of fence devices currently configured for your cluster. For information on using the ccs command to print a list of available fence devices and options or to print a list of fence devices currently configured for your cluster, see Section 6.6, "Listing Fence Devices and Fence Device Options" . 
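As a hedged example of the listing commands covered in Section 6.6, and assuming the same example host node1 used above, commands of the following form print the fence agents available on the node and the fence devices currently configured for the cluster:

ccs -h node1 --lsfenceopts
ccs -h node1 --lsfencedev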
To remove a fence device from your cluster configuration, execute the following command: For example, to remove a fence device that you have named myfence from the cluster configuration file on cluster node node1 , execute the following command: If you need to modify the attributes of a fence device you have already configured, you must first remove that fence device and then add it again with the modified attributes, as shown in the example sketch below. Note that when you have finished configuring all of the components of your cluster, you will need to sync the cluster configuration file to all of the nodes, as described in Section 6.15, "Propagating the Configuration File to the Cluster Nodes" .
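For example, changing the password of the my_apc device added earlier might look like the following sketch, which reuses the example values shown above and substitutes a hypothetical new password ( new_password_example ):

ccs -h node1 --rmfencedev my_apc
ccs -h node1 --addfencedev my_apc agent=fence_apc ipaddr=apc_ip_example login=login_example passwd=new_password_example

After re-adding the device, remember to propagate the updated configuration file to all cluster nodes as described above.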
[ "ccs -h host --setfencedaemon post_fail_delay= value", "ccs -h host --setfencedaemon post_join_delay= value", "ccs -h host --setfencedaemon post_fail_delay= value post_join_delay= value", "ccs -h host --addfencedev devicename [ fencedeviceoptions ]", "ccs -h node1 --addfencedev myfence agent=fence_apc ipaddr=apc_ip_example login=login_example passwd=password_example", "<fencedevices> <fencedevice agent=\"fence_apc\" ipaddr=\"apc_ip_example\" login=\"login_example\" name=\"my_apc\" passwd=\"password_example\"/> </fencedevices>", "ccs -h host --rmfencedev fence_device_name", "ccs -h node1 --rmfencedev myfence" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-fence-devices-ccs-CA