title | content | commands | url
---|---|---|---|
19.5. Small File Performance Enhancements
|
19.5. Small File Performance Enhancements The ratio of the time taken to perform operations on the metadata of a file to the time taken to perform operations on its data determines the difference between large files and small files. Such workloads are termed metadata-intensive workloads. A few performance enhancements can be made to optimize the network and storage performance and minimize the effect of slow throughput and response time for small files in a Red Hat Gluster Storage trusted storage pool. Note For a small-file workload, activate the rhgs-random-io tuned profile. Configuring Threads for Event Processing You can set the client.event-thread and server.event-thread values for the client and server components. Setting the value to 4, for example, would enable handling four network connections simultaneously. Setting the event threads value for a client You can tune the Red Hat Gluster Storage Server performance by tuning the event thread values. Example 19.1. Tuning the event threads for a client accessing a volume Setting the event thread value for a server You can tune the Red Hat Gluster Storage Server performance using event thread values. Example 19.2. Tuning the event threads for a server accessing a volume Verifying the event thread values You can verify the event thread values that are set for the client and server components by executing the following command: See the Configuring Volume Options topic for information on the minimum, maximum, and default values for setting these volume options. Best practices to tune event threads It is possible to see performance gains with the Red Hat Gluster Storage stack by tuning the number of threads processing events from network connections. The following are the recommended best practices to tune the event thread values. Because each thread processes one connection at a time, having more threads than connections to either the brick processes ( glusterfsd ) or the client processes ( glusterfs or gfapi ) is not recommended. For this reason, monitor the connection counts (using the netstat command) on the clients and on the bricks to arrive at an appropriate number for the event thread count. Configuring a higher event thread value than the number of available processing units could cause additional context switches on these threads. As a result, reduce the number deduced in the previous step to a number that is less than the number of available processing units. If a Red Hat Gluster Storage volume has a high number of brick processes running on a single node, then reducing the event thread number deduced in the previous step would help the competing processes gain enough concurrency and avoid context switches across the threads. If a specific thread consumes more CPU cycles than needed, increasing the event thread count would enhance the performance of the Red Hat Gluster Storage Server. In addition to deducing the appropriate event-thread count, increasing the server.outstanding-rpc-limit on the storage nodes can also help to queue the requests for the brick processes and keep the requests from idling on the network queue. Another parameter that could improve performance when tuning the event-threads value is performance.io-thread-count (and its related thread counts); set it to a higher value, as these threads perform the actual IO operations on the underlying file system. 19.5.1. Enabling Lookup Optimization Distribute xlator (DHT) incurs a performance penalty when it deals with negative lookups. 
Negative lookups are lookup operations for entries that do not exist in the volume. A lookup for a file or directory that does not exist is a negative lookup. Negative lookups are expensive and typically slow down file creation, because DHT attempts to find the file in all sub-volumes. This especially impacts small file performance, where a large number of files are added or created in quick succession on the volume. The negative lookup fan-out behavior can be optimized by skipping the fan-out when the volume is in balance. The cluster.lookup-optimize configuration option enables DHT lookup optimization. To enable this option, run the following command: Note The configuration takes effect for newly created directories immediately after setting the above option. For existing directories, a rebalance is required to ensure the volume is in balance before DHT applies the optimization to older directories.
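To tie the tuning options in this section together, the following sketch applies them to a hypothetical volume named test-vol; the specific values shown are illustrative assumptions, not recommendations for any particular workload.

```bash
# Size the event threads from observed connection counts (run on a client or brick host)
netstat -tnp 2>/dev/null | grep -c gluster    # requires root for process names; example filter only

# Event threads for the client and server stacks
gluster volume set test-vol client.event-threads 4
gluster volume set test-vol server.event-threads 4

# Related tunables discussed above (values are placeholders)
gluster volume set test-vol server.outstanding-rpc-limit 128
gluster volume set test-vol performance.io-thread-count 32

# Enable DHT lookup optimization to reduce the cost of negative lookups
gluster volume set test-vol cluster.lookup-optimize on

# Verify the configured options
gluster volume info test-vol
```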
|
[
"gluster volume set VOLNAME client.event-threads <value>",
"gluster volume set test-vol client.event-threads 4",
"gluster volume set VOLNAME server.event-threads <value>",
"gluster volume set test-vol server.event-threads 4",
"gluster volume info VOLNAME",
"gluster volume set VOLNAME cluster.lookup-optimize <on/off>\\"
] |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/small_file_performance_enhancements
|
13.18. Begin Installation
|
13.18. Begin Installation When all required sections of the Installation Summary screen have been completed, the admonition at the bottom of the menu screen disappears and the Begin Installation button becomes available. Figure 13.36. Ready to Install Warning Up to this point in the installation process, no lasting changes have been made on your computer. When you click Begin Installation , the installation program will allocate space on your hard drive and start to transfer Red Hat Enterprise Linux into this space. Depending on the partitioning option that you chose, this process might include erasing data that already exists on your computer. To revise any of the choices that you made up to this point, return to the relevant section of the Installation Summary screen. To cancel installation completely, click Quit or switch off your computer. To switch off most computers at this stage, press the power button and hold it down for a few seconds. If you have finished customizing your installation and are certain that you want to proceed, click Begin Installation . After you click Begin Installation , allow the installation process to complete. If the process is interrupted, for example, by you switching off or resetting the computer, or by a power outage, you will probably not be able to use your computer until you restart and complete the Red Hat Enterprise Linux installation process, or install a different operating system.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-write-changes-to-disk-ppc
|
Chapter 68. KafkaConnect schema reference
|
Chapter 68. KafkaConnect schema reference Property Description spec The specification of the Kafka Connect cluster. KafkaConnectSpec status The status of the Kafka Connect cluster. KafkaConnectStatus
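Because this chapter only names the spec and status properties, one quick way to see what they contain is to inspect a live resource; the resource name my-connect and namespace kafka below are assumptions for illustration.

```bash
# Show the user-defined specification of a hypothetical KafkaConnect resource
oc get kafkaconnect my-connect -n kafka -o jsonpath='{.spec}'

# Show the status reported back for the same resource
oc get kafkaconnect my-connect -n kafka -o jsonpath='{.status}'
```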
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkaconnect-reference
|
Chapter 1. Overview of nodes
|
Chapter 1. Overview of nodes 1.1. About nodes A node is a virtual or bare-metal machine in a Kubernetes cluster. Worker nodes host your application containers, grouped as pods. The control plane nodes run services that are required to control the Kubernetes cluster. In OpenShift Container Platform, the control plane nodes contain more than just the Kubernetes services for managing the OpenShift Container Platform cluster. Having stable and healthy nodes in a cluster is fundamental to the smooth functioning of your hosted application. In OpenShift Container Platform, you can access, manage, and monitor a node through the Node object representing the node. Using the OpenShift CLI ( oc ) or the web console, you can perform the following operations on a node. Read operations The read operations allow an administrator or a developer to get information about nodes in an OpenShift Container Platform cluster. List all the nodes in a cluster . Get information about a node, such as memory and CPU usage, health, status, and age. List pods running on a node . Management operations As an administrator, you can easily manage a node in an OpenShift Container Platform cluster through several tasks: Add or update node labels . A label is a key-value pair applied to a Node object. You can control the scheduling of pods using labels. Change node configuration using a custom resource definition (CRD), or the kubeletConfig object. Configure nodes to allow or disallow the scheduling of pods. Healthy worker nodes with a Ready status allow pod placement by default while the control plane nodes do not; you can change this default behavior by configuring the worker nodes to be unschedulable and the control plane nodes to be schedulable . Allocate resources for nodes using the system-reserved setting. You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes, or you can manually determine and set the best resources for your nodes. Configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit, or both. Reboot a node gracefully using pod anti-affinity . Delete a node from a cluster by scaling down the cluster using a machine set. To delete a node from a bare-metal cluster, you must first drain all pods on the node and then manually delete the node. Enhancement operations OpenShift Container Platform allows you to do more than just access and manage nodes; as an administrator, you can perform the following tasks on nodes to make the cluster more efficient, application-friendly, and to provide a better environment for your developers. Manage node-level tuning for high-performance applications that require some level of kernel tuning by using the Node Tuning Operator . Run background tasks on nodes automatically with daemon sets . You can create and use daemon sets to create shared storage, run a logging pod on every node, or deploy a monitoring agent on all nodes. Free node resources using garbage collection . You can ensure that your nodes are running efficiently by removing terminated containers and the images not referenced by any running pods. Add kernel arguments to a set of nodes . Configure an OpenShift Container Platform cluster to have worker nodes at the network edge (remote worker nodes). 
For information on the challenges of having remote worker nodes in an OpenShift Container Platform cluster and some recommended approaches for managing pods on a remote worker node, see Using remote worker nodes at the network edge . 1.2. About pods A pod is one or more containers deployed together on a node. As a cluster administrator, you can define a pod, assign it to run on a healthy node that is ready for scheduling, and manage it. A pod runs as long as the containers are running. You cannot change a pod once it is defined and running. Some operations you can perform when working with pods are: Read operations As an administrator, you can get information about pods in a project through the following tasks: List pods associated with a project , including information such as the number of replicas and restarts, current status, and age. View pod usage statistics such as CPU, memory, and storage consumption. Management operations The following list of tasks provides an overview of how an administrator can manage pods in an OpenShift Container Platform cluster. Control scheduling of pods using the advanced scheduling features available in OpenShift Container Platform: Node-to-pod binding rules such as pod affinity , node affinity , and anti-affinity . Node labels and selectors . Taints and tolerations . Pod topology spread constraints . Custom schedulers . Configure the descheduler to evict pods based on specific strategies so that the scheduler reschedules the pods to more appropriate nodes. Configure how pods behave after a restart using pod controllers and restart policies . Limit both egress and ingress traffic on a pod . Add and remove volumes to and from any object that has a pod template . A volume is a mounted file system available to all the containers in a pod. Container storage is ephemeral; you can use volumes to persist container data. Enhancement operations You can work with pods more easily and efficiently with the help of various tools and features available in OpenShift Container Platform. The following operations involve using those tools and features to better manage pods. Operation User More information Create and use a horizontal pod autoscaler. Developer You can use a horizontal pod autoscaler to specify the minimum and the maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target. Using a horizontal pod autoscaler, you can automatically scale pods . Install and use a vertical pod autoscaler . Administrator and developer As an administrator, use a vertical pod autoscaler to better use cluster resources by monitoring the resources and the resource requirements of workloads. As a developer, use a vertical pod autoscaler to ensure your pods stay up during periods of high demand by scheduling pods to nodes that have enough resources for each pod. Provide access to external resources using device plug-ins. Administrator A device plug-in is a gRPC service running on nodes (external to the kubelet), which manages specific hardware resources. You can deploy a device plug-in to provide a consistent and portable solution to consume hardware devices across clusters. Provide sensitive data to pods using the Secret object . Administrator Some applications need sensitive information, such as passwords and usernames. You can use the Secret object to provide such information to an application pod. 1.3. About containers A container is the basic unit of an OpenShift Container Platform application, which comprises the application code packaged along with its dependencies, libraries, and binaries. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. Linux container technologies are lightweight mechanisms for isolating running processes and limiting access to only designated resources. As an administrator, you can perform various tasks on a Linux container, such as: Copy files to and from a container . Allow containers to consume API objects . Execute remote commands in a container . Use port forwarding to access applications in a container . OpenShift Container Platform provides specialized containers called Init containers . Init containers run before application containers and can contain utilities or setup scripts not present in an application image. You can use an Init container to perform tasks before the rest of a pod is deployed. Apart from performing specific tasks on nodes, pods, and containers, you can work with the overall OpenShift Container Platform cluster to keep the cluster efficient and the application pods highly available.
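The read, management, and container operations listed above map onto a handful of oc invocations; a brief sketch follows, with <node_name> and <pod_name> as placeholders.

```bash
# Read operations on nodes
oc get nodes                                    # list all nodes in the cluster
oc describe node <node_name>                    # health, status, capacity, and age
oc adm top node <node_name>                     # CPU and memory usage (requires cluster metrics)

# Pods running on a specific node
oc get pods --all-namespaces --field-selector spec.nodeName=<node_name>

# Node management examples
oc label node <node_name> environment=dev       # add or update a node label
oc adm cordon <node_name>                       # mark the node unschedulable
oc adm uncordon <node_name>                     # make the node schedulable again

# Working with containers in a pod
oc cp <pod_name>:/tmp/report.txt ./report.txt   # copy a file out of a container
oc exec <pod_name> -- ls /var/log               # execute a remote command
oc port-forward <pod_name> 8080:8080            # forward a local port to the pod
```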
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/nodes/overview-of-nodes
|
Chapter 4. Creating the traffic violations project in Business Central
|
Chapter 4. Creating the traffic violations project in Business Central For this example, create a new project called traffic-violation . A project is a container for assets such as data objects, DMN assets, and test scenarios. This example project that you are creating is similar to the existing Traffic_Violation sample project in Business Central. Procedure In Business Central, go to Menu Design Projects . Red Hat Decision Manager provides a default space called MySpace . You can use the default space to create and test example projects. Click Add Project . Enter traffic-violation in the Name field. Click Add . Figure 4.1. Add Project window The Assets view of the project opens.
| null |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_decision_manager/dmn-gs-new-project-creating-proc_getting-started-decision-services
|
Preface
|
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) IBM Z clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Note See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process for your environment: Internal Attached Devices mode Deploy using local storage devices External mode
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_ibm_z/preface-ibm-z
|
Appendix A. Troubleshooting DNF modules
|
Appendix A. Troubleshooting DNF modules If a DNF module fails to enable, it can mean an incorrect module is enabled. In that case, you have to resolve dependencies manually as follows. List the enabled modules: A.1. Ruby If the Ruby module fails to enable, it can mean an incorrect module is enabled. In that case, you have to resolve dependencies manually as follows: List the enabled modules: If the Ruby 2.5 module has already been enabled, perform a module reset: A.2. PostgreSQL If the PostgreSQL module fails to enable, it can mean an incorrect module is enabled. In that case, you have to resolve dependencies manually as follows: List the enabled modules: If the PostgreSQL 10 module has already been enabled, perform a module reset: If a database was previously created using PostgreSQL 10, perform an upgrade: Enable the DNF modules: Install the PostgreSQL upgrade package: Perform the upgrade:
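Putting the steps above together, the sketch below shows one possible recovery flow when an incorrect PostgreSQL module stream was enabled before installing Satellite; it assumes a database previously created with PostgreSQL 10.

```bash
# Confirm which module streams are currently enabled
dnf module list --enabled

# Reset the incorrectly enabled stream
dnf module reset postgresql

# Enable the module streams required by Satellite
dnf module enable satellite:el8

# Upgrade the existing PostgreSQL 10 database
dnf install postgresql-upgrade
postgresql-setup --upgrade
```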
|
[
"dnf module list --enabled",
"dnf module list --enabled",
"dnf module reset ruby",
"dnf module list --enabled",
"dnf module reset postgresql",
"dnf module enable satellite:el8",
"dnf install postgresql-upgrade",
"postgresql-setup --upgrade"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_connected_network_environment/troubleshooting-dnf-modules_satellite
|
1.3. KVM Guest Virtual Machine Compatibility
|
1.3. KVM Guest Virtual Machine Compatibility Red Hat Enterprise Linux 7 servers have certain support limits. The following URLs explain the processor and memory amount limitations for Red Hat Enterprise Linux: For host systems: https://access.redhat.com/articles/rhel-limits For the KVM hypervisor: https://access.redhat.com/articles/rhel-kvm-limits The following URL lists guest operating systems certified to run on a Red Hat Enterprise Linux KVM host: https://access.redhat.com/articles/973163 Note For additional information on the KVM hypervisor's restrictions and support limits, see Appendix C, Virtualization Restrictions .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-kvm_guest_virtual_machine_compatibility-red_hat_enterprise_linux_7_support_limits
|
Chapter 2. Fault tolerant deployments using multiple Prism Elements
|
Chapter 2. Fault tolerant deployments using multiple Prism Elements By default, the installation program installs control plane and compute machines into a single Nutanix Prism Element (cluster). To improve the fault tolerance of your OpenShift Container Platform cluster, you can specify that these machines be distributed across multiple Nutanix clusters by configuring failure domains. A failure domain represents an additional Prism Element instance that is available to OpenShift Container Platform machine pools during and after installation. 2.1. Installation method and failure domain configuration The OpenShift Container Platform installation method determines how and when you configure failure domains: If you deploy using installer-provisioned infrastructure, you can configure failure domains in the installation configuration file before deploying the cluster. For more information, see Configuring failure domains . You can also configure failure domains after the cluster is deployed. For more information about configuring failure domains post-installation, see Adding failure domains to an existing Nutanix cluster . If you deploy using infrastructure that you manage (user-provisioned infrastructure) no additional configuration is required. After the cluster is deployed, you can manually distribute control plane and compute machines across failure domains. 2.2. Adding failure domains to an existing Nutanix cluster By default, the installation program installs control plane and compute machines into a single Nutanix Prism Element (cluster). After an OpenShift Container Platform cluster is deployed, you can improve its fault tolerance by adding additional Prism Element instances to the deployment using failure domains. A failure domain represents a single Prism Element instance where new control plane and compute machines can be deployed and existing control plane and compute machines can be distributed. 2.2.1. Failure domain requirements When planning to use failure domains, consider the following requirements: All Nutanix Prism Element instances must be managed by the same instance of Prism Central. A deployment that is comprised of multiple Prism Central instances is not supported. The machines that make up the Prism Element clusters must reside on the same Ethernet network for failure domains to be able to communicate with each other. A subnet is required in each Prism Element that will be used as a failure domain in the OpenShift Container Platform cluster. When defining these subnets, they must share the same IP address prefix (CIDR) and should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. 2.2.2. Adding failure domains to the Infrastructure CR You add failure domains to an existing Nutanix cluster by modifying its Infrastructure custom resource (CR) ( infrastructures.config.openshift.io ). Tip It is recommended that you configure three failure domains to ensure high-availability. Procedure Edit the Infrastructure CR by running the following command: USD oc edit infrastructures.config.openshift.io cluster Configure the failure domains. Example Infrastructure CR with Nutanix failure domains spec: cloudConfig: key: config name: cloud-provider-config #... 
platformSpec: nutanix: failureDomains: - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> # ... where: <uuid> Specifies the universally unique identifier (UUID) of the Prism Element. <failure_domain_name> Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash ( - ). The dash cannot be in the leading or ending position of the name. <network_uuid> Specifies the UUID of the Prism Element subnet object. The subnet's IP address prefix (CIDR) should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. Only one subnet per failure domain (Prism Element) in an OpenShift Container Platform cluster is supported. Save the CR to apply the changes. 2.2.3. Distributing control planes across failure domains You distribute control planes across Nutanix failure domains by modifying the control plane machine set custom resource (CR). Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). The control plane machine set custom resource (CR) is in an active state. For more information on checking the control plane machine set custom resource state, see "Additional resources". Procedure Edit the control plane machine set CR by running the following command: USD oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api Configure the control plane machine set to use failure domains by adding a spec.template.machines_v1beta1_machine_openshift_io.failureDomains stanza. Example control plane machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: cluster namespace: openshift-machine-api spec: # ... template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: Nutanix nutanix: - name: <failure_domain_name_1> - name: <failure_domain_name_2> - name: <failure_domain_name_3> # ... Save your changes. By default, the control plane machine set propagates changes to your control plane configuration automatically. If the cluster is configured to use the OnDelete update strategy, you must replace your control planes manually. For more information, see "Additional resources". Additional resources Checking the control plane machine set custom resource state Replacing a control plane machine 2.2.4. Distributing compute machines across failure domains You can distribute compute machines across Nutanix failure domains one of the following ways: Editing existing compute machine sets allows you to distribute compute machines across Nutanix failure domains as a minimal configuration update. Replacing existing compute machine sets ensures that the specification is immutable and all your machines are the same. 2.2.4.1. Editing compute machine sets to implement failure domains To distribute compute machines across Nutanix failure domains by using an existing compute machine set, you update the compute machine set with your configuration and then use scaling to replace the existing compute machines. 
Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). Procedure Run the following command to view the cluster's Infrastructure CR. USD oc describe infrastructures.config.openshift.io cluster For each failure domain ( platformSpec.nutanix.failureDomains ), note the cluster's UUID, name, and subnet object UUID. These values are required to add a failure domain to a compute machine set. List the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <machine_set_name_1> 1 1 1 1 55m <machine_set_name_2> 1 1 1 1 55m Edit the first compute machine set by running the following command: USD oc edit machineset <machine_set_name_1> -n openshift-machine-api Configure the compute machine set to use the first failure domain by updating the following to the spec.template.spec.providerSpec.value stanza. Note Be sure that the values you specify for the cluster and subnets fields match the values that were configured in the failureDomains stanza in the cluster's Infrastructure CR. Example compute machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 # ... template: spec: # ... providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1> # ... Note the value of spec.replicas , because you need it when scaling the compute machine set to apply the changes. Save your changes. List the machines that are managed by the updated compute machine set by running the following command: USD oc get -n openshift-machine-api machines \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1> Example output NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running AHV Unnamed Development-STS 4h <machine_name_original_2> Running AHV Unnamed Development-STS 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machine/<machine_name_original_1> \ -n openshift-machine-api \ machine.openshift.io/delete-machine="true" To create replacement machines with the new configuration, scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=<twice_the_number_of_replicas> \ 1 machineset <machine_set_name_1> \ -n openshift-machine-api 1 For example, if the original number of replicas in the compute machine set is 2 , scale the replicas to 4 . List the machines that are managed by the updated compute machine set by running the following command: USD oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1> When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas. 
To remove the machines that were created with the old configuration, scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=<original_number_of_replicas> \ 1 machineset <machine_set_name_1> \ -n openshift-machine-api 1 For example, if the original number of replicas in the compute machine set was 2 , scale the replicas to 2 . As required, continue to modify machine sets to reference the additional failure domains that are available to the deployment. Additional resources Modifying a compute machine set 2.2.4.2. Replacing compute machine sets to implement failure domains To distribute compute machines across Nutanix failure domains by replacing a compute machine set, you create a new compute machine set with your configuration, wait for the machines that it creates to start, and then delete the old compute machine set. Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). Procedure Run the following command to view the cluster's Infrastructure CR. USD oc describe infrastructures.config.openshift.io cluster For each failure domain ( platformSpec.nutanix.failureDomains ), note the cluster's UUID, name, and subnet object UUID. These values are required to add a failure domain to a compute machine set. List the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m Note the names of the existing compute machine sets. Create a YAML file that contains the values for your new compute machine set custom resource (CR) by using one of the following methods: Copy an existing compute machine set configuration into a new file by running the following command: USD oc get machineset <original_machine_set_name_1> \ -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml You can edit this YAML file with your preferred text editor. Create a blank YAML file named <new_machine_set_name_1>.yaml with your preferred text editor and include the required values for your new compute machine set. If you are not sure which value to set for a specific field, you can view values of an existing compute machine set CR by running the following command: USD oc get machineset <original_machine_set_name_1> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create machines with a worker or infra role. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. 
For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Configure the new compute machine set to use the first failure domain by updating or adding the following to the spec.template.spec.providerSpec.value stanza in the <new_machine_set_name_1>.yaml file. Note Be sure that the values you specify for the cluster and subnets fields match the values that were configured in the failureDomains stanza in the cluster's Infrastructure CR. Example compute machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <new_machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 # ... template: spec: # ... providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1> # ... Save your changes. Create a compute machine set CR by running the following command: USD oc create -f <new_machine_set_name_1>.yaml As required, continue to create compute machine sets to reference the additional failure domains that are available to the deployment. List the machines that are managed by the new compute machine sets by running the following command for each new compute machine set: USD oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1> Example output NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned AHV Unnamed Development-STS 25s <machine_from_new_2> Provisioning AHV Unnamed Development-STS 25s When the new machines are in the Running phase, you can delete the old compute machine sets that do not include the failure domain configuration. When you have verified that the new machines are in the Running phase, delete the old compute machine sets by running the following command for each: USD oc delete machineset <original_machine_set_name_1> -n openshift-machine-api Verification To verify that the compute machine sets without the updated configuration are deleted, list the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <new_machine_set_name_2> 1 1 1 1 4m12s To verify that the compute machines without the updated configuration are deleted, list the machines in your cluster by running the following command: USD oc get -n openshift-machine-api machines Example output while deletion is in progress NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 5m41s <machine_from_new_2> Running AHV Unnamed Development-STS 5m41s <machine_from_original_1> Deleting AHV Unnamed Development-STS 4h <machine_from_original_2> Deleting AHV Unnamed Development-STS 4h Example output when deletion is complete NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 6m30s <machine_from_new_2> Running AHV Unnamed Development-STS 6m30s To verify that a machine created by the new compute machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machine <machine_from_new_1> -n openshift-machine-api Additional resources Creating a compute machine set on Nutanix
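After distributing machines, it can help to confirm that the failure domains defined in the Infrastructure CR are actually referenced by the machine sets; the jsonpath expressions and the machine set name worker-fd1 below are illustrative assumptions based on the fields shown in this chapter.

```bash
# List the failure domain names defined in the Infrastructure CR
oc get infrastructures.config.openshift.io cluster \
  -o jsonpath='{.spec.platformSpec.nutanix.failureDomains[*].name}{"\n"}'

# Show which failure domain a compute machine set assigns to its machines
oc get machineset worker-fd1 -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value.failureDomain.name}{"\n"}'

# Examine one of the new machines in detail
oc describe machine <machine_from_new_1> -n openshift-machine-api
```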
|
[
"oc edit infrastructures.config.openshift.io cluster",
"spec: cloudConfig: key: config name: cloud-provider-config # platformSpec: nutanix: failureDomains: - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid>",
"oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: cluster namespace: openshift-machine-api spec: template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: Nutanix nutanix: - name: <failure_domain_name_1> - name: <failure_domain_name_2> - name: <failure_domain_name_3>",
"oc describe infrastructures.config.openshift.io cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <machine_set_name_1> 1 1 1 1 55m <machine_set_name_2> 1 1 1 1 55m",
"oc edit machineset <machine_set_name_1> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>",
"oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running AHV Unnamed Development-STS 4h <machine_name_original_2> Running AHV Unnamed Development-STS 4h",
"oc annotate machine/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=<twice_the_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api",
"oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>",
"oc scale --replicas=<original_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api",
"oc describe infrastructures.config.openshift.io cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m",
"oc get machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml",
"oc get machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <new_machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>",
"oc create -f <new_machine_set_name_1>.yaml",
"oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1>",
"NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned AHV Unnamed Development-STS 25s <machine_from_new_2> Provisioning AHV Unnamed Development-STS 25s",
"oc delete machineset <original_machine_set_name_1> -n openshift-machine-api",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <new_machine_set_name_2> 1 1 1 1 4m12s",
"oc get -n openshift-machine-api machines",
"NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 5m41s <machine_from_new_2> Running AHV Unnamed Development-STS 5m41s <machine_from_original_1> Deleting AHV Unnamed Development-STS 4h <machine_from_original_2> Deleting AHV Unnamed Development-STS 4h",
"NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 6m30s <machine_from_new_2> Running AHV Unnamed Development-STS 6m30s",
"oc describe machine <machine_from_new_1> -n openshift-machine-api"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_nutanix/nutanix-failure-domains
|
3.3. NIS
|
3.3. NIS Important Before NIS can be configured as an identity store, NIS itself must be configured for the environment: A NIS server must be fully configured with user accounts set up. The ypbind package must be installed on the local system. This is required for NIS services, but is not installed by default. The portmap and ypbind services are started and enabled to start at boot time. This should be configured as part of the ypbind package installation. 3.3.1. Configuring NIS Authentication from the UI Open the authconfig UI, as in Section 2.2.3, "Launching the authconfig UI" . Select NIS in the User Account Database drop-down menu. Set the information to connect to the NIS server, meaning the NIS domain name and the server host name. If the NIS server is not specified, the authconfig daemon scans for the NIS server. Select the authentication method. NIS allows simple password authentication or Kerberos authentication. Using Kerberos is described in Section 4.3.1, "Configuring Kerberos Authentication from the UI" . 3.3.2. Configuring NIS from the Command Line To use a NIS identity store, use the --enablenis option. This automatically uses NIS authentication, unless the Kerberos parameters are explicitly set ( Section 4.3.2, "Configuring Kerberos Authentication from the Command Line" ). The only other parameters identify the NIS server and NIS domain; if these are not used, then the authconfig service scans the network for NIS servers.
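For reference, the sketch below shows the command from this section with and without an explicit server; the EXAMPLE domain and nis.example.com host are placeholders.

```bash
# Enable NIS with an explicit domain and server
authconfig --enablenis --nisdomain=EXAMPLE --nisserver=nis.example.com --update

# Enable NIS and let authconfig scan the network for NIS servers
authconfig --enablenis --update
```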
|
[
"authconfig --enablenis --nisdomain=EXAMPLE --nisserver=nis.example.com --update"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/configuring-nis-auth
|
Red Hat JBoss Web Server for OpenShift
|
Red Hat JBoss Web Server for OpenShift Red Hat JBoss Web Server 6.0 Installing and using Red Hat JBoss Web Server for OpenShift Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_for_openshift/index
|
Installing an on-premise cluster with the Agent-based Installer
|
Installing an on-premise cluster with the Agent-based Installer OpenShift Container Platform 4.16 Installing an on-premise OpenShift Container Platform cluster with the Agent-based Installer Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_an_on-premise_cluster_with_the_agent-based_installer/index
|
Chapter 63. Managing IdM certificates using Ansible
|
Chapter 63. Managing IdM certificates using Ansible You can use the ansible-freeipa ipacert module to request, revoke, and retrieve SSL certificates for Identity Management (IdM) users, hosts and services. You can also restore a certificate that has been put on hold. 63.1. Using Ansible to request SSL certificates for IdM hosts, services and users You can use the ansible-freeipa ipacert module to request SSL certificates for Identity Management (IdM) users, hosts and services. They can then use these certificates to authenticate to IdM. Complete this procedure to request a certificate for an HTTP server from an IdM certificate authority (CA) using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. Your IdM deployment has an integrated CA. Procedure Generate a certificate-signing request (CSR) for your user, host or service. For example, to use the openssl utility to generate a CSR for the HTTP service running on client.idm.example.com, enter: As a result, the CSR is stored in new.csr . Create your Ansible playbook file request-certificate.yml with the following content: Replace the certificate request with the CSR from new.csr . Request the certificate: Additional resources The cert module in ansible-freeipa upstream docs 63.2. Using Ansible to revoke SSL certificates for IdM hosts, services and users You can use the ansible-freeipa ipacert module to revoke SSL certificates used by Identity Management (IdM) users, hosts and services to authenticate to IdM. Complete this procedure to revoke a certificate for an HTTP server using an Ansible playbook. The reason for revoking the certificate is "keyCompromise". Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. You have obtained the serial number of the certificate, for example by entering the openssl x509 -noout -text -in <path_to_certificate> command. In this example, the serial number of the certificate is 123456789. Your IdM deployment has an integrated CA. Procedure Create your Ansible playbook file revoke-certificate.yml with the following content: Revoke the certificate: Additional resources The cert module in ansible-freeipa upstream docs Reason Code in RFC 5280 63.3. Using Ansible to restore SSL certificates for IdM users, hosts, and services You can use the ansible-freeipa ipacert module to restore a revoked SSL certificate previously used by an Identity Management (IdM) user, host or a service to authenticate to IdM. Note You can only restore a certificate that was put on hold. You may have put it on hold because, for example, you were not sure if the private key had been lost. However, now you have recovered the key and as you are certain that no-one has accessed it in the meantime, you want to reinstate the certificate. Complete this procedure to use an Ansible playbook to release a certificate for a service enrolled into IdM from hold. This example describes how to release a certificate for an HTTP service from hold. 
Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. Your IdM deployment has an integrated CA. You have obtained the serial number of the certificate, for example by entering the openssl x509 -noout -text -in path/to/certificate command. In this example, the certificate serial number is 123456789 . Procedure Create your Ansible playbook file restore-certificate.yml with the following content: Run the playbook: Additional resources The cert module in ansible-freeipa upstream docs 63.4. Using Ansible to retrieve SSL certificates for IdM users, hosts, and services You can use the ansible-freeipa ipacert module to retrieve an SSL certificate issued for an Identity Management (IdM) user, host or a service, and store it in a file on the managed node. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. You have obtained the serial number of the certificate, for example by entering the openssl x509 -noout -text -in <path_to_certificate> command. In this example, the serial number of the certificate is 123456789, and the file in which you store the retrieved certificate is cert.pem . Procedure Create your Ansible playbook file retrieve-certificate.yml with the following content: Retrieve the certificate: Additional resources The cert module in ansible-freeipa upstream docs
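Each of the revocation, restore, and retrieval procedures above requires the certificate serial number; the short sketch below extracts it with standard openssl options, assuming the certificate is stored locally as cert.pem.

```bash
# Print only the serial number (openssl reports it in hexadecimal)
openssl x509 -noout -serial -in cert.pem

# Or view the full text output referenced in the prerequisites
openssl x509 -noout -text -in cert.pem
```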
|
[
"openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout new.key -out new.csr -subj '/CN=client.idm.example.com,O=IDM.EXAMPLE.COM'",
"--- - name: Playbook to request a certificate hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Request a certificate for a web server ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" state: requested csr: | -----BEGIN CERTIFICATE REQUEST----- MIGYMEwCAQAwGTEXMBUGA1UEAwwOZnJlZWlwYSBydWxlcyEwKjAFBgMrZXADIQBs HlqIr4b/XNK+K8QLJKIzfvuNK0buBhLz3LAzY7QDEqAAMAUGAytlcANBAF4oSCbA 5aIPukCidnZJdr491G4LBE+URecYXsPknwYb+V+ONnf5ycZHyaFv+jkUBFGFeDgU SYaXm/gF8cDYjQI= -----END CERTIFICATE REQUEST----- principal: HTTP/client.idm.example.com register: cert",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/request-certificate.yml",
"--- - name: Playbook to revoke a certificate hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Revoke a certificate for a web server ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" serial_number: 123456789 revocation_reason: \"keyCompromise\" state: revoked",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/revoke-certificate.yml",
"--- - name: Playbook to restore a certificate hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Restore a certificate for a web service ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" serial_number: 123456789 state: released",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/restore-certificate.yml",
"--- - name: Playbook to retrieve a certificate and store it locally on the managed node hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Retrieve a certificate and save it to file 'cert.pem' ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" serial_number: 123456789 certificate_out: cert.pem state: retrieved",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/retrieve-certificate.yml"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-idm-certificates-using-ansible_configuring-and-managing-idm
|
Chapter 8. Managing Snapshots
|
Chapter 8. Managing Snapshots The Red Hat Gluster Storage Snapshot feature enables you to create point-in-time copies of Red Hat Gluster Storage volumes, which you can use to protect data. Users can directly access Snapshot copies, which are read-only, to recover from accidental deletion, corruption, or modification of the data. Figure 8.1. Snapshot Architecture In the Snapshot Architecture diagram, a Red Hat Gluster Storage volume consists of multiple bricks (Brick1, Brick2, and so on) that are spread across one or more nodes, and each brick is made up of an independent thin Logical Volume (LV). When a snapshot of a volume is taken, a snapshot of each LV is taken and another brick is created. Brick1_s1 is an identical image of Brick1. Similarly, identical images of each brick are created, and these newly created bricks combine to form a snapshot volume. Some features of snapshot are: Crash Consistency A crash consistent snapshot is captured at a particular point-in-time. When a crash consistent snapshot is restored, the data is identical to what it was at the time of taking the snapshot. Note Currently, application level consistency is not supported. Online Snapshot A snapshot is an online snapshot, so the file system and its associated data continue to be available for the clients even while the snapshot is being taken. Barrier To guarantee crash consistency, some of the file operations are blocked during a snapshot operation. These file operations are blocked until the snapshot is complete. All other file operations are passed through. There is a default time-out of 2 minutes; if the snapshot is not complete within that time, these file operations are unbarriered. If the barrier is lifted before the snapshot is complete, the snapshot operation fails. This is to ensure that the snapshot is in a consistent state. Note Taking a snapshot of a Red Hat Gluster Storage volume that is hosting Virtual Machine Images is not recommended. Taking a hypervisor-assisted snapshot of a virtual machine would be more suitable in this use case. 8.1. Prerequisites Before using this feature, ensure that the following prerequisites are met: Snapshot is based on thinly provisioned LVM. Ensure the volume is based on LVM2. Red Hat Gluster Storage is supported on Red Hat Enterprise Linux 6.7 and later, Red Hat Enterprise Linux 7.1 and later, and Red Hat Enterprise Linux 8.2 and later versions. All these versions of Red Hat Enterprise Linux are based on LVM2 by default. For more information, see https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/thinprovisioned_volumes.html Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See the Version Details table in the Red Hat Gluster Storage Software Components and Versions section of the Installation Guide. Each brick must be an independent thinly provisioned logical volume (LV). All bricks must be online for snapshot creation. The logical volume which contains the brick must not contain any data other than the brick. Linear LVM and thin LV are supported with Red Hat Gluster Storage 3.4 and later. For more information, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/logical_volume_manager_administration/index#LVM_components Recommended Setup The recommended setup for using Snapshot is described below. 
In addition, make sure you read Chapter 19, Tuning for Performance for enhancing snapshot performance: For each volume brick, create a dedicated thin pool that contains the brick of the volume and its (thin) brick snapshots. With the current thin-provisioning design, avoid placing the bricks of different Red Hat Gluster Storage volumes in the same thin pool, as this reduces the performance of snapshot operations, such as snapshot delete, on other unrelated volumes. The recommended thin pool chunk size is 256KB. There might be exceptions to this in cases where we have detailed information about the customer's workload. The recommended pool metadata size is 0.1% of the thin pool size for a chunk size of 256KB or larger. In special cases, where a chunk size of less than 256KB is recommended, use a pool metadata size of 0.5% of the thin pool size. For Example To create a brick from device /dev/sda1: Create a physical volume (PV) by using the pvcreate command. Use the correct dataalignment option based on your device. For more information, see Section 19.2, "Brick Configuration" . Create a Volume Group (VG) from the PV using the following command: Create a thin pool using the following command: A thin pool of size 1 TB is created, using a chunk size of 256 KB. A maximum pool metadata size of 16 G is used. Create a thinly provisioned volume from the previously created pool using the following command: Create a file system (XFS) on this LV. Use the recommended options to create the XFS file system on the thin LV. For example, Mount this logical volume and use the mount path as the brick.
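The sketch below strings the example commands from this section together and then takes a point-in-time snapshot; the device /dev/sda1, the dummyvg/dummypool and dummylv names, and the volume name testvol are illustrative, and the snapshot commands assume the volume has already been created on such bricks.

```bash
# Prepare a thinly provisioned brick (values taken from the example above)
pvcreate /dev/sda1            # add the appropriate --dataalignment value for your device
vgcreate dummyvg /dev/sda1
lvcreate --size 1T --thin dummyvg/dummypool --chunksize 256k --poolmetadatasize 16G --zero n
lvcreate --virtualsize 1G --thin dummyvg/dummypool --name dummylv
mkfs.xfs -f -i size=512 -n size=8192 /dev/dummyvg/dummylv
mount /dev/dummyvg/dummylv /mnt/brick1

# After the volume is created on such bricks, take and list snapshots
gluster snapshot create snap1 testvol
gluster snapshot list testvol
```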
|
[
"pvcreate /dev/sda1",
"vgcreate dummyvg /dev/sda1",
"lvcreate --size 1T --thin dummyvg/dummypool --chunksize 256k --poolmetadatasize 16G --zero n",
"lvcreate --virtualsize 1G --thin dummyvg/dummypool --name dummylv",
"mkfs.xfs -f -i size=512 -n size=8192 /dev/dummyvg/dummylv",
"mount /dev/dummyvg/dummylv /mnt/brick1"
] |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-managing_snapshots
|
Chapter 44. Kernel
|
Chapter 44. Kernel Heterogeneous memory management included as a Technology Preview Red Hat Enterprise Linux 7.3 introduced the heterogeneous memory management (HMM) feature as a Technology Preview. This feature has been added to the kernel as a helper layer for devices that want to mirror a process address space into their own memory management unit (MMU). Thus, a non-CPU device processor is able to read system memory using the unified system address space. To enable this feature, add experimental_hmm=enable to the kernel command line. (BZ#1230959) criu rebased to version 3.5 Red Hat Enterprise Linux 7.2 introduced the criu tool as a Technology Preview. This tool implements Checkpoint/Restore in User-space (CRIU), which can be used to freeze a running application and store it as a collection of files. Later, the application can be restored from its frozen state. Note that the criu tool depends on Protocol Buffers, a language-neutral, platform-neutral, extensible mechanism for serializing structured data. The protobuf and protobuf-c packages, which provide this dependency, were also introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. In Red Hat Enterprise Linux 7.5, the criu packages have been upgraded to upstream version 3.5, which provides a number of bug fixes and enhancements. In addition, support for IBM Z and the 64-bit ARM architecture has been added. (BZ#1400230, BZ#1464596) kexec as a Technology Preview The kexec system call has been provided as a Technology Preview. This system call enables loading and booting into another kernel from the currently running kernel, thus performing the function of the boot loader from within the kernel. Hardware initialization, which is normally done during a standard system boot, is not performed during a kexec boot, which significantly reduces the time required for a reboot. (BZ#1460849) kexec fast reboot as a Technology Preview As a Technology Preview, this update adds the kexec fast reboot feature, which makes the reboot significantly faster. To use this feature, you must load the kexec kernel manually, and then reboot the operating system. It is not possible to make kexec fast reboot the default reboot action. A special case is using kexec fast reboot with Anaconda. This still does not make kexec fast reboot the default reboot action; however, when used with Anaconda, the operating system can automatically use kexec fast reboot after the installation is complete if the user boots the kernel with the appropriate Anaconda option. To schedule a kexec reboot, use the inst.kexec command on the kernel command line, or include a reboot --kexec line in the Kickstart file. (BZ#1464377) Unprivileged access to name spaces can be enabled as a Technology Preview You can now set the namespace.unpriv_enable kernel command-line option if required, as a Technology Preview. The default setting is off. When set to 1, issuing a call to the clone() function with the flag CLONE_NEWNS as an unprivileged user no longer returns an error, and the operation is allowed. However, to enable unprivileged access to name spaces, the CAP_SYS_ADMIN flag has to be set in some user name space to create a mount name space. (BZ#1350553) SCSI-MQ as a Technology Preview in the qla2xxx driver The qla2xxx driver updated in Red Hat Enterprise Linux 7.4 can now enable the use of SCSI-MQ (multiqueue) with the ql2xmqsupport=1 module parameter. The default value is 0 (disabled). The SCSI-MQ functionality is provided as a Technology Preview when used with the qla2xxx driver. 
Note that recent performance testing at Red Hat with async IO over Fibre Channel adapters using SCSI-MQ has shown significant performance degradation under certain conditions. A fix is being tested but was not ready in time for Red Hat Enterprise Linux 7.4 General Availability. (BZ#1414957) NVMe over Fibre Channel is now available as a Technology Preview The NVMe over Fibre Channel transport type is now available as a Technology Preview. NVMe over Fibre Channel is an additional fabric transport type for the Nonvolatile Memory Express (NVMe) protocol, in addition to the Remote Direct Memory Access (RDMA) protocol that was previously introduced in Red Hat Enterprise Linux. To enable NVMe over Fibre Channel in the lpfc driver, edit the /etc/modprobe.d/lpfc.conf file and add one or both of the following options: To enable the NVMe mode of operation, add the lpfc_enable_fc4_type=3 option. To enable target mode, add the lpfc_enable_nvmet=<wwpn list> option, where <wwpn list> is a comma-separated list of World-Wide Port Name (WWPN) values with the 0x prefix. To configure an NVMe target, use the nvmetcli utility. NVMe over Fibre Channel provides a higher-performance, lower-latency I/O protocol over existing Fibre Channel infrastructure. This is especially important with solid-state storage arrays, because it allows the performance benefits of NVMe storage to be passed through the fabric transport, rather than being encapsulated in a different protocol, SCSI. In Red Hat Enterprise Linux 7.5, NVMe over Fibre Channel is available only with Broadcom 32Gbit adapters, which use the lpfc driver. (BZ#1387768, BZ#1454386) perf cqm has been replaced by resctrl The Intel Cache Allocation Technology (CAT) was introduced in Red Hat Enterprise Linux 7.4 as a Technology Preview. However, the perf cqm tool did not work correctly due to an incompatibility between the perf infrastructure and the Cache Quality of Service Monitoring (CQM) hardware support. Consequently, multiple problems occurred when using perf cqm. These problems included, most notably: perf cqm did not support the group of tasks that is allocated using resctrl. perf cqm gave random and inaccurate data due to several problems with recycling. perf cqm did not provide enough support when running different kinds of events together (for example, tasks, system-wide, and cgroup events). perf cqm provided only partial support for cgroup events; this partial support did not work in cases with a hierarchy of cgroup events, or when monitoring a task in a cgroup and the cgroup together. Monitoring tasks for their lifetime caused perf overhead. perf cqm reported the aggregate cache occupancy or memory bandwidth over all sockets, while in most cloud and VMM-based use cases the individual per-socket usage is needed. With this update, perf cqm has been replaced by the approach based on the resctrl file system, which addresses all of the aforementioned problems. (BZ#1457533, BZ#1288964)
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/technology_previews_kernel
|
Chapter 73. Kubernetes Pods
|
Chapter 73. Kubernetes Pods Since Camel 2.17 Both producer and consumer are supported The Kubernetes Pods component is one of the Kubernetes components; it provides a producer to execute Kubernetes Pods operations and a consumer to consume events related to Pod objects. 73.1. Dependencies When using kubernetes-pods with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 73.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 73.2.1. Configuring Component Options The component level is the highest level; it holds the general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code. 73.2.2. Configuring Endpoint Options You will spend most of your time configuring endpoints, as endpoints often have many options that allow you to configure what you need the endpoint to do. The options are also categorized according to whether the endpoint is used as a consumer (from), as a producer (to), or for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, which gives more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 73.3. Component Options The Kubernetes Pods component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 73.4. Endpoint Options The Kubernetes Pods endpoint is configured using URI syntax: with the following path and query parameters: 73.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 73.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 73.5. Message Headers The Kubernetes Pods component supports 7 message headers, which are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesPodsLabels (producer) Constant: KUBERNETES_PODS_LABELS The pod labels. Map CamelKubernetesPodName (producer) Constant: KUBERNETES_POD_NAME The pod name. String CamelKubernetesPodSpec (producer) Constant: KUBERNETES_POD_SPEC The spec for a pod. PodSpec CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 73.6. Supported producer operations listPods listPodsByLabels getPod createPod updatePod deletePod 73.7. Kubernetes Pods Producer Examples listPods: this operation lists the pods on a Kubernetes cluster. from("direct:list"). toF("kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPods"). to("mock:result"); This operation returns a List of Pods from your cluster. listPodsByLabels: this operation lists the pods by labels on a Kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PODS_LABELS, labels); } }); toF("kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPodsByLabels"). to("mock:result"); This operation returns a List of Pods from your cluster, using a label selector (with keys key1 and key2 and values value1 and value2). A further sketch that uses the getPod operation is provided after the Spring Boot auto-configuration options below. 73.8. Kubernetes Pods Consumer Example fromF("kubernetes-pods://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernertesProcessor()).to("mock:result"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Pod pod = exchange.getIn().getBody(Pod.class); log.info("Got event with configmap name: " + pod.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events on the namespace default for the pod test. 73.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. 
Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. 
Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
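In addition to the listPods and listPodsByLabels examples in Section 73.7, the following route is a minimal sketch of the getPod operation listed in Section 73.6. It is not part of the original chapter: the route and mock endpoint names, the default namespace, and the test pod name are illustrative, and the import package for KubernetesConstants is assumed to be the one used by the camel-kubernetes component; only the getPod operation name and the CamelKubernetesNamespaceName and CamelKubernetesPodName headers come from the tables above.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.kubernetes.KubernetesConstants;

public class GetPodRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Set the namespace and pod name headers documented in Section 73.5,
        // then invoke the getPod producer operation from Section 73.6.
        from("direct:getPod")
            .setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, constant("default"))
            .setHeader(KubernetesConstants.KUBERNETES_POD_NAME, constant("test"))
            .toF("kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=getPod")
            // The reply body is expected to carry the requested Pod object.
            .to("mock:result");
    }
}

As with listPods, the endpoint is configured entirely through URI options and reuses the #kubernetesClient bean from the registry.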
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>",
"kubernetes-pods:masterUrl",
"from(\"direct:list\"). toF(\"kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPods\"). to(\"mock:result\");",
"from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PODS_LABELS, labels); } }); toF(\"kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPodsByLabels\"). to(\"mock:result\");",
"fromF(\"kubernetes-pods://%s?oauthToken=%s&namespace=default&resourceName=test\", host, authToken).process(new KubernertesProcessor()).to(\"mock:result\"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Pod pod = exchange.getIn().getBody(Pod.class); log.info(\"Got event with configmap name: \" + pod.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-pods-component-starter
|
4.154. libxklavier
|
4.154. libxklavier 4.154.1. RHBA-2012:0005 - libxklavier bug fix update An updated libxklavier package that fixes one bug is now available for Red Hat Enterprise Linux 6. The libxklavier library provides a high-level API for the X Keyboard Extension (XKB) that allows extended keyboard control. This library supports X.Org and other commercial implementations of the X Window System. The library is useful for creating XKB-related software, such as layout indicators. This update fixes the following bug: BZ#767267 Due to the way the NoMachine NX Free Edition server implements XInput support, an attempt to log into the server using an NX or VNC client triggered an XInput error that was handled incorrectly by the libxklavier library. Consequently, the GNOME Settings Daemon (gnome-settings-daemon) was terminated with signal 6 (SIGABRT). To resolve this problem, the XInput error handling routine in the libxklavier library has been modified. The library now ignores this error and gnome-settings-daemon runs correctly under these conditions. All users of libxklavier are advised to upgrade to this updated package, which resolves this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libxklavier
|
Chapter 1. The Cache API
|
Chapter 1. The Cache API The Cache interface provides simple methods for the addition, retrieval, and removal of entries, including the atomic mechanisms exposed by the JDK's ConcurrentMap interface. How entries are stored depends on the cache mode in use. For example, an entry may be replicated to a remote node or an entry may be looked up in a cache store. The Cache API is used in the same manner as the JDK Map API for basic tasks. This simplifies the process of migrating from Map-based, simple in-memory caches to Red Hat JBoss Data Grid's cache. Note This API is not available in JBoss Data Grid's Remote Client-Server Mode. 1.1. Using the ConfigurationBuilder API to Configure the Cache API Red Hat JBoss Data Grid uses a ConfigurationBuilder API to configure caches. Caches are configured programmatically using the ConfigurationBuilder helper object. The following is an example of a synchronously replicated cache configured programmatically using the ConfigurationBuilder API: Procedure 1.1. Programmatic Cache Configuration In the first line of the configuration, a new cache configuration object (named c) is created using the ConfigurationBuilder. Configuration c is assigned the default values for all cache configuration options except the cache mode, which is overridden and set to synchronous replication (REPL_SYNC). In the second line of the configuration, a new variable (of type String) is created and assigned the value repl. In the third line of the configuration, the cache manager is used to define a named cache configuration for itself. This named cache configuration is called repl and its configuration is based on the configuration provided for cache configuration c in the first line. In the fourth line of the configuration, the cache manager is used to obtain a reference to the unique instance of the repl cache that is held by the cache manager. This cache instance is now ready to be used to perform operations to store and retrieve data. Note JBoss EAP includes its own underlying JMX. This can cause a collision when using the sample code with JBoss EAP, and an error such as org.infinispan.jmx.JmxDomainConflictException: Domain already registered org.infinispan may be displayed. To avoid this, configure the global configuration as follows:
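Once the repl cache has been obtained as in Procedure 1.1, it can be used like a JDK Map, as described at the beginning of this chapter. The following lines are a minimal usage sketch rather than part of the original procedure; the key and value strings are illustrative, and cache is the Cache<String, String> reference created in the fourth step of the procedure:

cache.put("key", "value");           // add an entry
String value = cache.get("key");     // retrieve the entry, or null if it is absent
cache.putIfAbsent("key", "other");   // an atomic operation inherited from ConcurrentMap
cache.remove("key");                 // remove the entry

Because the Cache interface exposes the JDK's ConcurrentMap mechanisms, atomic operations such as putIfAbsent behave as they do for any ConcurrentMap implementation.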
|
[
"Configuration c = new ConfigurationBuilder().clustering().cacheMode(CacheMode.REPL_SYNC).build(); String newCacheName = \"repl\"; manager.defineConfiguration(newCacheName, c); Cache<String, String> cache = manager.getCache(newCacheName);",
"GlobalConfiguration glob = new GlobalConfigurationBuilder() .clusteredDefault() .globalJmxStatistics() .allowDuplicateDomains(true) .enable() .build();"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/chap-the_cache_api
|
4.88. ibus-anthy
|
4.88. ibus-anthy 4.88.1. RHBA-2011:1208 - ibus-anthy bug fix update An updated ibus-anthy package that fixes a bug is now available for Red Hat Enterprise Linux 6. The ibus-anthy package contains the Anthy engine, which provides an input method for Japanese based on the IBus (Intelligent Input Bus) platform. Bug Fix BZ# 661597 Previously, when changing the Candidate Window Page Size setting of Other under the General tab, the im-chooser application had to be restarted for the changes to take effect. This problem has been fixed and the changes made to Candidate Window Page Size now apply immediately. All users of ibus-anthy are advised to upgrade to this updated package, which resolves this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/ibus-anthy
|
Chapter 62. KafkaExporterTemplate schema reference
|
Chapter 62. KafkaExporterTemplate schema reference Used in: KafkaExporterSpec Properties (name, description, type):
deployment: Template for Kafka Exporter Deployment. Type: DeploymentTemplate
pod: Template for Kafka Exporter Pods. Type: PodTemplate
service: The service property has been deprecated. The Kafka Exporter service has been removed. Template for Kafka Exporter Service. Type: ResourceTemplate
container: Template for the Kafka Exporter container. Type: ContainerTemplate
serviceAccount: Template for the Kafka Exporter service account. Type: ResourceTemplate
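To show where this template type sits in practice, the following is a minimal sketch of a Kafka custom resource that customizes the exporter pods and container through spec.kafkaExporter.template. The cluster name, label, and environment variable are illustrative assumptions, not values taken from the schema reference above.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster                                  # assumed cluster name
spec:
  # ... kafka and zookeeper sections omitted ...
  kafkaExporter:
    template:
      pod:
        metadata:
          labels:
            app.kubernetes.io/part-of: monitoring   # extra label applied to the exporter pods
      container:
        env:
          - name: TZ                                # extra environment variable for the exporter container
            value: "UTC"
```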
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkaexportertemplate-reference
|
Chapter 2. Top new features
|
Chapter 2. Top new features This section provides an overview of the top new features in this release of Red Hat OpenStack Platform. 2.1. Bare Metal Service This section outlines the top new features for the Bare Metal (ironic) service. Provision hardware before deploying the overcloud In Red Hat OpenStack Platform 17.0, you must provision the bare metal nodes and the physical networks resources for the overcloud before deploying the overcloud. The openstack overcloud deploy command no longer provisions the hardware. For more information, see Provisioning and deploying your overcloud . New network definition file format In Red Hat OpenStack Platform 17.0, you configure your network definition files by using ansible jinja2 templates instead of heat templates. For more information, see Configuring overcloud networking . Whole disk images are the default overcloud image The default overcloud-full flat partition images have been updated to overcloud-hardened-uefi-full whole disk images. The whole disk image is a single compressed qcow2 image that contains the following elements: A partition layout containing UEFI boot, legacy boot, and a root partition. The root partition contains a single lvm group with logical volumes of different sizes that are mounted at / , /tmp , /var , /var/log , and so on. When you deploy a whole-disk image, ironic-python-agent copies the whole image to the disk without any bootloader or partition changes. UEFI Boot by default The default boot mode of bare metal nodes is now UEFI boot, because the Legacy BIOS boot feature is unavailable in new hardware. 2.2. Block Storage This section outlines the top new features for the Block Storage (cinder) service. Support for automating multipath deployments You can specify the location of your multipath configuration file for your overcloud deployment. Project-specific default volume types For complex deployments, project administrators can define a default volume type for each project (tenant). If you create a volume and do not specify a volume type, then Block Storage uses the default volume type. You can use the Block Storage (cinder) configuration file to define the general default volume type that applies to all your projects (tenants). But if your deployment uses project-specific volume types, ensure that you define default volume types for each project. In this case, Block Storage uses the project-specific volume type instead of the general default volume type. For more information, see Defining a project-specific default volume type . 2.3. Ceph Storage This section outlines the top new features for Ceph Storage. Greater security for Ceph client Shared Files Systems service (manila) permissions The Shared File Systems service CephFS drivers (native CephFS and CephFS through NFS) now interact with Ceph clusters through the Ceph Manager Volumes interface. The Ceph client user configured for the Shared Files Systems service no longer needs to be as permissive. This feature makes Ceph client user permissions for the Shared Files Systems service more secure. Ceph Object Gateway (RGW) replaces Object Storage service (swift) When you use Red Hat OpenStack Platform (RHOSP) director to deploy Ceph, director enables Ceph Object Gateway (RGW) object storage, which replaces the Object Storage service (swift). All other services that normally use the Object Storage service can start using RGW instead without additional configuration. 
Red Hat Ceph Storage cluster deployment in new environments In new environments, the Red Hat Ceph Storage cluster is deployed first, before the overcloud, using director and the openstack overcloud ceph deploy command. You now use cephadm to deploy Ceph, because deployment with ceph-ansible is deprecated. For more information about deploying Ceph, see Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . This document replaces Deploying an overcloud with containerized Red Hat Ceph . A Red Hat Ceph Storage cluster that you deployed without RHOSP director is also supported. For more information, see Integrating an Overcloud with an Existing Red Hat Ceph Storage Cluster . Support for creating shares from snapshots You can create a new share from a snapshot to restore snapshots by using the Shared File Systems service (manila) CephFS back ends: native CephFS and CephFS through NFS. 2.4. Compute This section outlines the top new features for the Compute service. Support for attaching and detaching SR-IOV devices to an instance Cloud users can create a port that has an SR-IOV vNIC, and attach the port to an instance when there is a free SR-IOV device on the host on the appropriate physical network, and the instance has a free PCIe slot. For more information, see Attaching a port to an instance . Support for creating an instance with NUMA-affinity on the port Cloud users can create a port that has a NUMA affinity policy, and attach the port to an instance. For more information, see Creating an instance with NUMA affinity on the port . Q35 is the default machine type The default machine type for each host architecture is Q35 ( pc-q35-rhel9.0.0 ) for new Red Hat OpenStack Platform 17.0 deployments. The Q35 machine type provides several benefits and improvements, including live migration of instances between different RHEL 9.x minor releases, and native PCIe hotplug which is faster than the ACPI hotplug used by the i440fx machine type. 2.5. Networking This section outlines the top new features for the Networking service. Active/Active clustered database service model improves OVS database read performance and fault tolerance Starting in RHOSP 17.0, RHOSP ML2/OVN deployments use a clustered database service model that applies the Raft consensus algorithm to enhance performance of OVS database protocol traffic and provide faster, more reliable failover handling. The clustered database service model replaces the pacemaker-based, active/backup model. A clustered database operates on a cluster of at least three database servers on different hosts. Servers use the Raft consensus algorithm to synchronize writes and share network traffic continuously across the cluster. The cluster elects one server as the leader. All servers in the cluster can handle database read operations, which mitigates potential bottlenecks on the control plane. Write operations are handled by the cluster leader. If a server fails, a new cluster leader is elected and the traffic is redistributed among the remaining operational servers. The clustered database service model handles failovers more efficiently than the pacemaker-based model did. This mitigates related downtime and complications that can occur with longer failover times. The leader election process requires a majority, so the fault tolerance capacity is limited by the highest odd number in the cluster. For example, a three-server cluster continues to operate if one server fails. A five-server cluster tolerates up to two failures. 
Increasing the number of servers to an even number does not increase fault tolerance. For example, a four-server cluster cannot tolerate more failures than a three-server cluster. Most RHOSP deployments use three servers. Clusters larger than five servers also work, with every two added servers allowing the cluster to tolerate an additional failure, but write performance decreases. The clustered database model is the default in RHOSP 17.0 deployments. You do not need to perform any configuration steps. Designate DNSaaS In Red Hat OpenStack Platform (RHOSP) 17.0, the DNS service (designate) is now fully supported. Designate is an official OpenStack project that provides DNS-as-a-Service (DNSaaS) implementation and enables you to manage DNS records and zones in the cloud. The DNS service provides a REST API, and is integrated with the RHOSP Identity service (keystone) for user management. Using RHOSP director you can deploy BIND instances to contain DNS records, or you can integrate the DNS service into an existing BIND infrastructure. (Integration with an existing BIND infrastructure is a technical preview feature.) In addition, director can configure DNS service integration with the RHOSP Networking service (neutron) to automatically create records for virtual machine instances, network ports, and floating IPs. For more information, see Using Designate for DNS-as-a-Service . 2.6. Validation Framework This section outlines the top new features for the Validation Framework. User-created validations through the CLI In Red Hat OpenStack Platform (RHOSP) 17.0, you can create your own personalized validation with the validation init command. Execution of the command results in a template for a new validation. You can edit the new validation role to suit your requirements. 2.7. Technology previews This section provides an overview of the top new technology previews in this release of Red Hat OpenStack Platform. Note For more information on the support scope for features marked as technology previews, see Technology Preview Features Support Scope . Border Gateway Protocol (BGP) In Red Hat OpenStack Platform (RHOSP) 17.0, a technology preview is available for Border Gateway Protocol (BGP) to route the control plane, floating IPs, and workloads in provider networks. By using BGP advertisements, you do not need to configure static routes in the fabric, and RHOSP can be deployed in a pure Layer 3 data center. RHOSP uses Free Range Routing (FRR) as the dynamic routing solution to advertise and withdraw routes to control plane endpoints as well as to VMs in provider networks and Floating IPs. Integrating existing BIND servers with the DNS service In Red Hat OpenStack Platform (RHOSP) 17.0, a technology preview is available for integrating the RHOSP DNS service (designate) with an existing BIND infrastructure. For more information, see Configuring existing BIND servers for the DNS service .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/release_notes/chap-top-new-features_rhosp-relnotes
|
Index
|
Index A ACPI configuring, Configuring ACPI For Use with Integrated Fence Devices APC power switch over SNMP fence device , Fence Device Parameters APC power switch over telnet/SSH fence device , Fence Device Parameters B behavior, HA resources, HA Resource Behavior Brocade fabric switch fence device , Fence Device Parameters C CISCO MDS fence device , Fence Device Parameters Cisco UCS fence device , Fence Device Parameters cluster administration, Before Configuring the Red Hat High Availability Add-On , Managing Red Hat High Availability Add-On With Conga , Managing Red Hat High Availability Add-On With ccs , Managing Red Hat High Availability Add-On With Command Line Tools diagnosing and correcting problems, Diagnosing and Correcting Problems in a Cluster , Diagnosing and Correcting Problems in a Cluster starting, stopping, restarting, Starting and Stopping the Cluster Software cluster administration, Before Configuring the Red Hat High Availability Add-On , Managing Red Hat High Availability Add-On With Conga , Managing Red Hat High Availability Add-On With ccs , Managing Red Hat High Availability Add-On With Command Line Tools adding cluster node, Adding a Member to a Running Cluster , Adding a Member to a Running Cluster compatible hardware, Compatible Hardware configuration validation, Configuration Validation configuring ACPI, Configuring ACPI For Use with Integrated Fence Devices configuring iptables, Enabling IP Ports considerations for using qdisk, Considerations for Using Quorum Disk considerations for using quorum disk, Considerations for Using Quorum Disk deleting a cluster, Starting, Stopping, Restarting, and Deleting Clusters deleting a node from the configuration; adding a node to the configuration , Deleting or Adding a Node diagnosing and correcting problems in a cluster, Diagnosing and Correcting Problems in a Cluster , Diagnosing and Correcting Problems in a Cluster displaying HA services with clustat, Displaying HA Service Status with clustat enabling IP ports, Enabling IP Ports general considerations, General Configuration Considerations joining a cluster, Causing a Node to Leave or Join a Cluster , Causing a Node to Leave or Join a Cluster leaving a cluster, Causing a Node to Leave or Join a Cluster , Causing a Node to Leave or Join a Cluster managing cluster node, Managing Cluster Nodes , Managing Cluster Nodes managing high-availability services, Managing High-Availability Services , Managing High-Availability Services managing high-availability services, freeze and unfreeze, Managing HA Services with clusvcadm , Considerations for Using the Freeze and Unfreeze Operations network switches and multicast addresses, Multicast Addresses NetworkManager, Considerations for NetworkManager rebooting cluster node, Rebooting a Cluster Node removing cluster node, Deleting a Member from a Cluster restarting a cluster, Starting, Stopping, Restarting, and Deleting Clusters ricci considerations, Considerations for ricci SELinux, Red Hat High Availability Add-On and SELinux starting a cluster, Starting, Stopping, Restarting, and Deleting Clusters , Starting and Stopping a Cluster starting, stopping, restarting a cluster, Starting and Stopping the Cluster Software stopping a cluster, Starting, Stopping, Restarting, and Deleting Clusters , Starting and Stopping a Cluster updating a cluster configuration using cman_tool version -r, Updating a Configuration Using cman_tool version -r updating a cluster configuration using scp, Updating a Configuration Using scp updating configuration, 
Updating a Configuration virtual machines, Configuring Virtual Machines in a Clustered Environment cluster configuration, Configuring Red Hat High Availability Add-On With Conga , Configuring Red Hat High Availability Add-On With the ccs Command , Configuring Red Hat High Availability Manually deleting or adding a node, Deleting or Adding a Node updating, Updating a Configuration cluster resource relationships, Parent, Child, and Sibling Relationships Among Resources cluster resource status check, Modifying and Enforcing Cluster Service Resource Actions cluster resource types, Considerations for Configuring HA Services cluster service managers configuration, Adding a Cluster Service to the Cluster , Adding a Cluster Service to the Cluster , Adding a Cluster Service to the Cluster cluster services, Adding a Cluster Service to the Cluster , Adding a Cluster Service to the Cluster , Adding a Cluster Service to the Cluster (see also adding to the cluster configuration) cluster software configuration, Configuring Red Hat High Availability Add-On With Conga , Configuring Red Hat High Availability Add-On With the ccs Command , Configuring Red Hat High Availability Manually configuration HA service, Considerations for Configuring HA Services Configuring High Availability LVM, High Availability LVM (HA-LVM) Conga accessing, Configuring Red Hat High Availability Add-On Software consensus value, The consensus Value for totem in a Two-Node Cluster D Dell DRAC 5 fence device , Fence Device Parameters Dell iDRAC fence device , Fence Device Parameters E Eaton network power switch, Fence Device Parameters Egenera BladeFrame fence device , Fence Device Parameters Emerson network power switch fence device , Fence Device Parameters ePowerSwitch fence device , Fence Device Parameters F failover timeout, Modifying and Enforcing Cluster Service Resource Actions features, new and changed, New and Changed Features feedback, Feedback fence agent fence_apc, Fence Device Parameters fence_apc_snmp, Fence Device Parameters fence_bladecenter, Fence Device Parameters fence_brocade, Fence Device Parameters fence_cisco_mds, Fence Device Parameters fence_cisco_ucs, Fence Device Parameters fence_drac5, Fence Device Parameters fence_eaton_snmp, Fence Device Parameters fence_egenera, Fence Device Parameters fence_emerson, Fence Device Parameters fence_eps, Fence Device Parameters fence_hpblade, Fence Device Parameters fence_ibmblade, Fence Device Parameters fence_idrac, Fence Device Parameters fence_ifmib, Fence Device Parameters fence_ilo, Fence Device Parameters fence_ilo2, Fence Device Parameters fence_ilo3, Fence Device Parameters fence_ilo3_ssh, Fence Device Parameters fence_ilo4, Fence Device Parameters fence_ilo4_ssh, Fence Device Parameters fence_ilo_moonshot, Fence Device Parameters fence_ilo_mp, Fence Device Parameters fence_ilo_ssh, Fence Device Parameters fence_imm, Fence Device Parameters fence_intelmodular, Fence Device Parameters fence_ipdu, Fence Device Parameters fence_ipmilan, Fence Device Parameters fence_kdump, Fence Device Parameters fence_mpath, Fence Device Parameters fence_rhevm, Fence Device Parameters fence_rsb, Fence Device Parameters fence_scsi, Fence Device Parameters fence_virt, Fence Device Parameters fence_vmware_soap, Fence Device Parameters fence_wti, Fence Device Parameters fence_xvm, Fence Device Parameters fence device APC power switch over SNMP, Fence Device Parameters APC power switch over telnet/SSH, Fence Device Parameters Brocade fabric switch, Fence Device Parameters Cisco MDS, Fence 
Device Parameters Cisco UCS, Fence Device Parameters Dell DRAC 5, Fence Device Parameters Dell iDRAC, Fence Device Parameters Eaton network power switch, Fence Device Parameters Egenera BladeFrame, Fence Device Parameters Emerson network power switch, Fence Device Parameters ePowerSwitch, Fence Device Parameters Fence virt (fence_xvm/Multicast Mode), Fence Device Parameters Fence virt (Serial/VMChannel Mode), Fence Device Parameters Fujitsu Siemens Remoteview Service Board (RSB), Fence Device Parameters HP BladeSystem, Fence Device Parameters HP iLO, Fence Device Parameters HP iLO MP, Fence Device Parameters HP iLO over SSH, Fence Device Parameters HP iLO2, Fence Device Parameters HP iLO3, Fence Device Parameters HP iLO3 over SSH, Fence Device Parameters HP iLO4, Fence Device Parameters HP iLO4 over SSH, Fence Device Parameters HP Moonshot iLO, Fence Device Parameters IBM BladeCenter, Fence Device Parameters IBM BladeCenter SNMP, Fence Device Parameters IBM Integrated Management Module, Fence Device Parameters IBM iPDU, Fence Device Parameters IF MIB, Fence Device Parameters Intel Modular, Fence Device Parameters IPMI LAN, Fence Device Parameters multipath persistent reservation fencing, Fence Device Parameters RHEV-M fencing, Fence Device Parameters SCSI fencing, Fence Device Parameters VMware (SOAP Interface), Fence Device Parameters WTI power switch, Fence Device Parameters Fence virt fence device , Fence Device Parameters fence_apc fence agent, Fence Device Parameters fence_apc_snmp fence agent, Fence Device Parameters fence_bladecenter fence agent, Fence Device Parameters fence_brocade fence agent, Fence Device Parameters fence_cisco_mds fence agent, Fence Device Parameters fence_cisco_ucs fence agent, Fence Device Parameters fence_drac5 fence agent, Fence Device Parameters fence_eaton_snmp fence agent, Fence Device Parameters fence_egenera fence agent, Fence Device Parameters fence_emerson fence agent, Fence Device Parameters fence_eps fence agent, Fence Device Parameters fence_hpblade fence agent, Fence Device Parameters fence_ibmblade fence agent, Fence Device Parameters fence_idrac fence agent, Fence Device Parameters fence_ifmib fence agent, Fence Device Parameters fence_ilo fence agent, Fence Device Parameters fence_ilo2 fence agent, Fence Device Parameters fence_ilo3 fence agent, Fence Device Parameters fence_ilo3_ssh fence agent, Fence Device Parameters fence_ilo4 fence agent, Fence Device Parameters fence_ilo4_ssh fence agent, Fence Device Parameters fence_ilo_moonshot fence agent, Fence Device Parameters fence_ilo_mp fence agent, Fence Device Parameters fence_ilo_ssh fence agent, Fence Device Parameters fence_imm fence agent, Fence Device Parameters fence_intelmodular fence agent, Fence Device Parameters fence_ipdu fence agent, Fence Device Parameters fence_ipmilan fence agent, Fence Device Parameters fence_kdump fence agent, Fence Device Parameters fence_mpath fence agent, Fence Device Parameters fence_rhevm fence agent, Fence Device Parameters fence_rsb fence agent, Fence Device Parameters fence_scsi fence agent, Fence Device Parameters fence_virt fence agent, Fence Device Parameters fence_vmware_soap fence agent, Fence Device Parameters fence_wti fence agent, Fence Device Parameters fence_xvm fence agent, Fence Device Parameters Fujitsu Siemens Remoteview Service Board (RSB) fence device, Fence Device Parameters G general considerations for cluster administration, General Configuration Considerations H HA service configuration overview, Considerations for Configuring HA 
Services hardware compatible, Compatible Hardware HP Bladesystem fence device , Fence Device Parameters HP iLO fence device, Fence Device Parameters HP iLO MP fence device , Fence Device Parameters HP iLO over SSH fence device, Fence Device Parameters HP iLO2 fence device, Fence Device Parameters HP iLO3 fence device, Fence Device Parameters HP iLO3 over SSH fence device, Fence Device Parameters HP iLO4 fence device, Fence Device Parameters HP iLO4 over SSH fence device, Fence Device Parameters HP Moonshot iLO fence device, Fence Device Parameters I IBM BladeCenter fence device , Fence Device Parameters IBM BladeCenter SNMP fence device , Fence Device Parameters IBM Integrated Management Module fence device , Fence Device Parameters IBM iPDU fence device , Fence Device Parameters IF MIB fence device , Fence Device Parameters integrated fence devices configuring ACPI, Configuring ACPI For Use with Integrated Fence Devices Intel Modular fence device , Fence Device Parameters introduction, Introduction other Red Hat Enterprise Linux documents, Introduction IP ports enabling, Enabling IP Ports IPMI LAN fence device , Fence Device Parameters iptables configuring, Enabling IP Ports iptables firewall, Configuring the iptables Firewall to Allow Cluster Components L LVM, High Availability, High Availability LVM (HA-LVM) M multicast addresses considerations for using with network switches and multicast addresses, Multicast Addresses multicast traffic, enabling, Configuring the iptables Firewall to Allow Cluster Components multipath persistent reservation fence device , Fence Device Parameters N NetworkManager disable for use with cluster, Considerations for NetworkManager nfsexport resource, configuring, Configuring nfsexport and nfsserver Resources nfsserver resource, configuring, Configuring nfsexport and nfsserver Resources O overview features, new and changed, New and Changed Features P parameters, fence device, Fence Device Parameters parameters, HA resources, HA Resource Parameters Q qdisk considerations for using, Considerations for Using Quorum Disk quorum disk considerations for using, Considerations for Using Quorum Disk R relationships cluster resource, Parent, Child, and Sibling Relationships Among Resources RHEV-M fencing, Fence Device Parameters ricci considerations for cluster administration, Considerations for ricci S SCSI fencing, Fence Device Parameters SELinux configuring, Red Hat High Availability Add-On and SELinux status check, cluster resource, Modifying and Enforcing Cluster Service Resource Actions T tables fence devices, parameters, Fence Device Parameters HA resources, parameters, HA Resource Parameters timeout failover, Modifying and Enforcing Cluster Service Resource Actions tools, command line, Command Line Tools Summary totem tag consensus value, The consensus Value for totem in a Two-Node Cluster troubleshooting diagnosing and correcting problems in a cluster, Diagnosing and Correcting Problems in a Cluster , Diagnosing and Correcting Problems in a Cluster types cluster resource, Considerations for Configuring HA Services V validation cluster configuration, Configuration Validation virtual machines, in a cluster, Configuring Virtual Machines in a Clustered Environment VMware (SOAP Interface) fence device , Fence Device Parameters W WTI power switch fence device , Fence Device Parameters
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ix01
|
8.40. fcoe-utils
|
8.40. fcoe-utils 8.40.1. RHBA-2013:1637 - fcoe-utils bug fix and enhancement update Updated fcoe-utils, libhbalinux, libhbaapi, and lldpad packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The fcoe-utils packages provide Fibre Channel over Ethernet (FCoE) utilities, such as the fcoeadm command-line utility for configuring FCoE interfaces, and the fcoemon service to configure DCB Ethernet QOS filters. Note The libhbalinux packages contain the Host Bus Adapter API (HBAAPI) vendor library which uses standard kernel interfaces to obtain information about Fibre Channel Host Buses (FC HBA) in the system. The libhbaapi library is the Host Bus Adapter (HBA) API library for Fibre Channel and Storage Area Network (SAN) resources. It contains a unified API that programmers can use to access, query, observe, and modify SAN and Fibre Channel services. The lldpad packages provide a user-space daemon and a configuration utility for Intel's Link Layer Discovery Protocol (LLDP) agent with Enhanced Ethernet support. The fcoe-utils packages have been upgraded to upstream version 1.0.28, which provides a number of bug fixes and enhancements over the previous version, including support for the virtual N_Port to virtual N_Port (VN2VN) protocol. Moreover, the fcoeadm utility now supports listing Fibre Channel Forwarder (FCF) and Link Error Status Block (LESB) statistics, and also support for the fcoe_sysfs kernel interface has been added. Additionally, documentation updates, a new website, mailing lists, and various minor bug fixes are included in this rebase. (BZ# 829793 , BZ# 829797 ) The libhbalinux packages have been upgraded to upstream version 1.0.16, which provides a number of bug fixes and enhancements over the previous version. Also, the documentation has been updated and it now directs the user to the new mailing lists. (BZ# 829810 ) The libhbaapi packages have been upgraded to upstream version 2.2.9, which provides a number of enhancements over the previous version. Also, the documentation has been updated and it now directs the user to the new mailing lists. (BZ# 829815 ) The lldpad packages have been upgraded to upstream version 0.9.46, which provides a number of bug fixes and enhancements over the previous version, including 802.1Qbg edge virtual bridging (EVB) module support. Also, FCoE initialization protocol (FIP) application type-length-value (TLV) parsing support, help on usage of the out-of-memory killer, manual page and documentation enhancements have been included. (BZ# 829816 , BZ# 893684 ) Bug Fix BZ# 903099 Due to a bug in the kernel, when destroying an N_Port ID Virtualization (NPIV) port while using an ixgbe adapter, the fcoe service init script could become unresponsive on shutdown. An init script patch has been applied to destroy the associated virtual ports first, and the fcoe service no longer hangs in the described scenario. Enhancement BZ# 981062 The readme file has been updated with a note clarifying that the file system automounting feature is enabled in the default installation of Red Hat Enterprise Linux 6. Users of fcoe-utils, libhbalinux, libhbaapi, and lldpad are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/fcoe-utils
|
16.4. Configuring a Multihomed DHCP Server
|
16.4. Configuring a Multihomed DHCP Server A multihomed DHCP server serves multiple networks, that is, multiple subnets. The examples in these sections detail how to configure a DHCP server to serve multiple networks, how to select which network interfaces to listen on, and how to define network settings for systems that move between networks. Before making any changes, back up the existing /etc/sysconfig/dhcpd and /etc/dhcp/dhcpd.conf files. The DHCP daemon listens on all network interfaces unless otherwise specified. Use the /etc/sysconfig/dhcpd file to specify which network interfaces the DHCP daemon listens on. The following /etc/sysconfig/dhcpd example specifies that the DHCP daemon listens on the eth0 and eth1 interfaces: If a system has three network interface cards - eth0 , eth1 , and eth2 - and the DHCP daemon should listen only on the eth0 card, then only specify eth0 in /etc/sysconfig/dhcpd : The following is a basic /etc/dhcp/dhcpd.conf file, for a server that has two network interfaces, eth0 in a 10.0.0.0/24 network, and eth1 in a 172.16.0.0/24 network. Multiple subnet declarations allow you to define different settings for multiple networks: subnet 10.0.0.0 netmask 255.255.255.0 ; A subnet declaration is required for every network your DHCP server is serving. Multiple subnets require multiple subnet declarations. If the DHCP server does not have a network interface in a range of a subnet declaration, the DHCP server does not serve that network. If there is only one subnet declaration, and no network interfaces are in the range of that subnet, the DHCP daemon fails to start, and an error such as the following is logged to /var/log/messages : option subnet-mask 255.255.255.0 ; The option subnet-mask option defines a subnet mask, and overrides the netmask value in the subnet declaration. In simple cases, the subnet and netmask values are the same. option routers 10.0.0.1 ; The option routers option defines the default gateway for the subnet. This is required for systems to reach internal networks on a different subnet, as well as external networks. range 10.0.0.5 10.0.0.15 ; The range option specifies the pool of available IP addresses. Systems are assigned an address from the range of specified IP addresses. For further information, see the dhcpd.conf(5) man page. 16.4.1. Host Configuration Before making any changes, back up the existing /etc/sysconfig/dhcpd and /etc/dhcp/dhcpd.conf files. Configuring a Single System for Multiple Networks The following /etc/dhcp/dhcpd.conf example creates two subnets, and configures an IP address for the same system, depending on which network it connects to: host example0 The host declaration defines specific parameters for a single system, such as an IP address. To configure specific parameters for multiple hosts, use multiple host declarations. Most DHCP clients ignore the name in host declarations, and as such, this name can be anything, as long as it is unique to other host declarations. To configure the same system for multiple networks, use a different name for each host declaration, otherwise the DHCP daemon fails to start. Systems are identified by the hardware ethernet option, not the name in the host declaration. hardware ethernet 00:1A:6B:6A:2E:0B ; The hardware ethernet option identifies the system. To find this address, run the ip link command. fixed-address 10.0.0.20 ; The fixed-address option assigns a valid IP address to the system specified by the hardware ethernet option.
This address must be outside the IP address pool specified with the range option. If option statements do not end with a semicolon, the DHCP daemon fails to start, and an error such as the following is logged to /var/log/messages : Configuring Systems with Multiple Network Interfaces The following host declarations configure a single system, which has multiple network interfaces, so that each interface receives the same IP address. This configuration will not work if both network interfaces are connected to the same network at the same time: For this example, interface0 is the first network interface, and interface1 is the second interface. The different hardware ethernet options identify each interface. If such a system connects to another network, add more host declarations, remembering to: assign a valid fixed-address for the network the host is connecting to. make the name in the host declaration unique. When a name given in a host declaration is not unique, the DHCP daemon fails to start, and an error such as the following is logged to /var/log/messages : This error was caused by having multiple host interface0 declarations defined in /etc/dhcp/dhcpd.conf .
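After editing /etc/sysconfig/dhcpd or /etc/dhcp/dhcpd.conf, the DHCP daemon must be restarted for the changes to take effect. A typical sequence on Red Hat Enterprise Linux 6 is sketched below; the backup file name is only an example:

```
~]# cp -p /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.conf.bak
~]# vi /etc/dhcp/dhcpd.conf
~]# service dhcpd restart
```

If the daemon fails to start after the restart, check /var/log/messages for errors such as those shown in this section.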
|
[
"DHCPDARGS=\"eth0 eth1\";",
"DHCPDARGS=\"eth0\";",
"default-lease-time 600 ; max-lease-time 7200 ; subnet 10.0.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 10.0.0.1; range 10.0.0.5 10.0.0.15; } subnet 172.16.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 172.16.0.1; range 172.16.0.5 172.16.0.15; }",
"dhcpd: No subnet declaration for eth0 (0.0.0.0). dhcpd: ** Ignoring requests on eth0. If this is not what dhcpd: you want, please write a subnet declaration dhcpd: in your dhcpd.conf file for the network segment dhcpd: to which interface eth1 is attached. ** dhcpd: dhcpd: dhcpd: Not configured to listen on any interfaces!",
"default-lease-time 600 ; max-lease-time 7200 ; subnet 10.0.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 10.0.0.1; range 10.0.0.5 10.0.0.15; } subnet 172.16.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 172.16.0.1; range 172.16.0.5 172.16.0.15; } host example0 { hardware ethernet 00:1A:6B:6A:2E:0B; fixed-address 10.0.0.20; } host example1 { hardware ethernet 00:1A:6B:6A:2E:0B; fixed-address 172.16.0.20; }",
"/etc/dhcp/dhcpd.conf line 20: semicolon expected. dhcpd: } dhcpd: ^ dhcpd: /etc/dhcp/dhcpd.conf line 38: unexpected end of file dhcpd: dhcpd: ^ dhcpd: Configuration file errors encountered -- exiting",
"host interface0 { hardware ethernet 00:1a:6b:6a:2e:0b; fixed-address 10.0.0.18; } host interface1 { hardware ethernet 00:1A:6B:6A:27:3A; fixed-address 10.0.0.18; }",
"dhcpd: /etc/dhcp/dhcpd.conf line 31: host interface0: already exists dhcpd: } dhcpd: ^ dhcpd: Configuration file errors encountered -- exiting"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-configuring_a_multihomed_dhcp_server
|
Chapter 13. Low latency tuning
|
Chapter 13. Low latency tuning 13.1. Understanding low latency The emergence of Edge computing in the area of Telco / 5G plays a key role in reducing latency and congestion problems and improving application performance. Simply put, latency determines how fast data (packets) moves from the sender to receiver and returns to the sender after processing by the receiver. Maintaining a network architecture with the lowest possible latency is key for meeting the network performance requirements of 5G. Compared to 4G technology, with an average latency of 50 ms, 5G is targeted to reach latency numbers of 1 ms or less. This reduction in latency boosts wireless throughput by a factor of 10. Many of the applications deployed in the Telco space require low latency and can tolerate only zero packet loss. Tuning for zero packet loss helps mitigate the inherent issues that degrade network performance. For more information, see Tuning for Zero Packet Loss in Red Hat OpenStack Platform (RHOSP) . The Edge computing initiative also comes into play in reducing latency. Think of it as being on the edge of the cloud and closer to the user. This greatly reduces the distance between the user and distant data centers, resulting in reduced application response times and lower latency. Administrators must be able to manage their many Edge sites and local services in a centralized way so that all of the deployments can run at the lowest possible management cost. They also need an easy way to deploy and configure certain nodes of their cluster for real-time low latency and high-performance purposes. Low latency nodes are useful for applications such as Cloud-native Network Functions (CNF) and Data Plane Development Kit (DPDK). OpenShift Container Platform currently provides mechanisms to tune software on an OpenShift Container Platform cluster for real-time running and low latency (around <20 microseconds reaction time). This includes tuning the kernel and OpenShift Container Platform settings, installing a kernel, and reconfiguring the machine. But this method requires setting up four different Operators and performing many configurations that, when done manually, are complex and prone to mistakes. OpenShift Container Platform uses the Node Tuning Operator to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator uses this performance profile configuration to make these changes in an easier, more reliable way. The administrator can specify whether to update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolate CPUs for application containers to run the workloads. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. OpenShift Container Platform also supports workload hints for the Node Tuning Operator that can tune the PerformanceProfile to meet the demands of different industry environments. Workload hints are available for highPowerConsumption (very low latency at the cost of increased power consumption) and realTime (priority given to optimum latency). A combination of true/false settings for these hints can be used to deal with application-specific workload profiles and requirements.
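For illustration, a minimal sketch of how these hints appear in a PerformanceProfile spec is shown below. The profile name is an assumption; the field names match those used in the power saving example later in this chapter.

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-workload-hints     # assumed name, not from the product documentation
spec:
  workloadHints:
    realTime: true                 # prioritize deterministic, low-latency behavior
    highPowerConsumption: false    # do not trade extra power draw for the last bit of latency
```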
Workload hints simplify the fine-tuning of performance to industry sector settings. Instead of a "one size fits all" approach, workload hints can cater to usage patterns such as placing priority on: Low latency Real-time capability Efficient use of power In an ideal world, all of those would be prioritized: in real life, some come at the expense of others. The Node Tuning Operator is now aware of the workload expectations and better able to meet the demands of the workload. The cluster administrator can now specify which use case the workload falls into. The Node Tuning Operator uses the PerformanceProfile to fine tune the performance settings for the workload. The environment in which an application is operating influences its behavior. For a typical data center with no strict latency requirements, only minimal default tuning is needed that enables CPU partitioning for some high performance workload pods. For data centers and workloads where latency is a higher priority, measures are still taken to optimize power consumption. The most complicated cases are clusters close to latency-sensitive equipment such as manufacturing machinery and software-defined radios. This last class of deployment is often referred to as Far edge. For Far edge deployments, ultra-low latency is the ultimate priority, and is achieved at the expense of power management. In OpenShift Container Platform version 4.10 and earlier versions, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance. Now this functionality is part of the Node Tuning Operator. 13.1.1. About hyperthreading for low latency and real-time applications Hyperthreading is an Intel processor technology that allows a physical CPU processor core to function as two logical cores, executing two independent threads simultaneously. Hyperthreading allows for better system throughput for certain workload types where parallel processing is beneficial. The default OpenShift Container Platform configuration expects hyperthreading to be enabled. For telecommunications applications, it is important to design your application infrastructure to minimize latency as much as possible. Hyperthreading can slow performance times and negatively affect throughput for compute intensive workloads that require low latency. Disabling hyperthreading ensures predictable performance and can decrease processing times for these workloads. Note Hyperthreading implementation and configuration differ depending on the hardware you are running OpenShift Container Platform on. Consult the relevant host hardware tuning information for more details of the hyperthreading implementation specific to that hardware. Disabling hyperthreading can increase the cost per core of the cluster. Additional resources Configuring hyperthreading for a cluster 13.2. Provisioning real-time and low latency workloads Many industries and organizations need extremely high performance computing and might require low and predictable latency, especially in the financial and telecommunications industries. For these industries, with their unique requirements, OpenShift Container Platform provides the Node Tuning Operator to implement automatic tuning to achieve low latency performance and consistent response time for OpenShift Container Platform applications. The cluster administrator can use this performance profile configuration to make these changes in a more reliable way.
The administrator can specify whether to update the kernel to kernel-rt (real-time), reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, isolate CPUs for application containers to run the workloads, and disable unused CPUs to reduce power consumption. Warning The usage of execution probes in conjunction with applications that require guaranteed CPUs can cause latency spikes. It is recommended to use other probes, such as a properly configured set of network probes, as an alternative. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, these functions are part of the Node Tuning Operator. 13.2.1. Known limitations for real-time Note In most deployments, kernel-rt is supported only on worker nodes when you use a standard cluster with three control plane nodes and three worker nodes. There are exceptions for compact and single nodes on OpenShift Container Platform deployments. For installations on a single node, kernel-rt is supported on the single control plane node. To fully utilize the real-time mode, the containers must run with elevated privileges. See Set capabilities for a Container for information on granting privileges. OpenShift Container Platform restricts the allowed capabilities, so you might need to create a SecurityContext as well. Note This procedure is fully supported with bare metal installations using Red Hat Enterprise Linux CoreOS (RHCOS) systems. When establishing performance expectations, note that the real-time kernel is not a panacea. Its objective is consistent, low-latency determinism offering predictable response times. There is some additional kernel overhead associated with the real-time kernel. This is due primarily to handling hardware interrupts in separately scheduled threads. The increased overhead in some workloads results in some degradation in overall throughput. The exact amount of degradation is very workload dependent, ranging from 0% to 30%. However, it is the cost of determinism. 13.2.2. Provisioning a worker with real-time capabilities Optional: Add a node to the OpenShift Container Platform cluster. See Setting BIOS parameters for system tuning . Add the label worker-rt to the worker nodes that require the real-time capability by using the oc command (see the example command below). Create a new machine config pool for real-time nodes: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-rt labels: machineconfiguration.openshift.io/role: worker-rt spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-rt], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-rt: "" Note that a machine config pool worker-rt is created for a group of nodes that have the label worker-rt . Add the node to the proper machine config pool by using node role labels. Note You must decide which nodes are configured with real-time workloads. You could configure all of the nodes in the cluster, or a subset of the nodes. The Node Tuning Operator expects all of the nodes to be part of a dedicated machine config pool. If you use all of the nodes, you must point the Node Tuning Operator to the worker node role label. If you use a subset, you must group the nodes into a new machine config pool.
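The labelling step at the start of this procedure can be done with a single command. The following is a sketch, where <node_name> is a placeholder for each worker node that should receive the real-time configuration; the label matches the nodeSelector of the worker-rt machine config pool shown above:

```
$ oc label node <node_name> node-role.kubernetes.io/worker-rt=""
```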
Create the PerformanceProfile with the proper set of housekeeping cores and realTimeKernel: enabled: true . You must set machineConfigPoolSelector in PerformanceProfile : apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: ... realTimeKernel: enabled: true nodeSelector: node-role.kubernetes.io/worker-rt: "" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-rt Verify that a matching machine config pool exists with a label: $ oc describe mcp/worker-rt Example output Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt OpenShift Container Platform will start configuring the nodes, which might involve multiple reboots. Wait for the nodes to settle. This can take a long time depending on the specific hardware you use, but 20 minutes per node is expected. Verify everything is working as expected. 13.2.3. Verifying the real-time kernel installation Use this command to verify that the real-time kernel is installed: $ oc get node -o wide Note the worker with the role worker-rt that contains the string 4.18.0-305.30.1.rt7.102.el8_4.x86_64 cri-o://1.26.0-99.rhaos4.10.gitc3131de.el8 : NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME rt-worker-0.example.com Ready worker,worker-rt 5d17h v1.26.0 128.66.135.107 <none> Red Hat Enterprise Linux CoreOS 46.82.202008252340-0 (Ootpa) 4.18.0-305.30.1.rt7.102.el8_4.x86_64 cri-o://1.26.0-99.rhaos4.10.gitc3131de.el8 [...] 13.2.4. Creating a workload that works in real-time Use the following procedures for preparing a workload that will use real-time capabilities. Procedure Create a pod with a QoS class of Guaranteed . Optional: Disable CPU load balancing for DPDK. Assign a proper node selector. When writing your applications, follow the general recommendations described in Application tuning and deployment . 13.2.5. Creating a pod with a QoS class of Guaranteed Keep the following in mind when you create a pod that is given a QoS class of Guaranteed : Every container in the pod must have a memory limit and a memory request, and they must be the same. Every container in the pod must have a CPU limit and a CPU request, and they must be the same. The following example shows the configuration file for a pod that has one container. The container has a memory limit and a memory request, both equal to 200 MiB. The container has a CPU limit and a CPU request, both equal to 1 CPU. apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: "200Mi" cpu: "1" requests: memory: "200Mi" cpu: "1" Create the pod: $ oc apply -f qos-pod.yaml --namespace=qos-example View detailed information about the pod: $ oc get pod qos-demo --namespace=qos-example --output=yaml Example output spec: containers: ... status: qosClass: Guaranteed Note If a container specifies its own memory limit, but does not specify a memory request, OpenShift Container Platform automatically assigns a memory request that matches the limit. Similarly, if a container specifies its own CPU limit, but does not specify a CPU request, OpenShift Container Platform automatically assigns a CPU request that matches the limit. 13.2.6. Optional: Disabling CPU load balancing for DPDK Functionality to disable or enable CPU load balancing is implemented on the CRI-O level.
CRI-O disables or enables CPU load balancing only when the following requirements are met. The pod must use the performance-<profile-name> runtime class. You can get the proper name by looking at the status of the performance profile, as shown here: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile ... status: ... runtimeClass: performance-manual Note Currently, disabling CPU load balancing is not supported with cgroup v2. The Node Tuning Operator is responsible for the creation of the high-performance runtime handler config snippet under relevant nodes and for creation of the high-performance runtime class under the cluster. It has the same content as the default runtime handler except that it enables the CPU load balancing configuration functionality. To disable the CPU load balancing for the pod, the Pod specification must include the following fields: apiVersion: v1 kind: Pod metadata: ... annotations: ... cpu-load-balancing.crio.io: "disable" ... ... spec: ... runtimeClassName: performance-<profile_name> ... Note Only disable CPU load balancing when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU load balancing can affect the performance of other containers in the cluster. 13.2.7. Assigning a proper node selector The preferred way to assign a pod to nodes is to use the same node selector the performance profile used, as shown here: apiVersion: v1 kind: Pod metadata: name: example spec: # ... nodeSelector: node-role.kubernetes.io/worker-rt: "" For more information, see Placing pods on specific nodes using node selectors . 13.2.8. Scheduling a workload onto a worker with real-time capabilities Use label selectors that match the nodes attached to the machine config pool that was configured for low latency by the Node Tuning Operator. For more information, see Assigning pods to nodes . 13.2.9. Reducing power consumption by taking CPUs offline You can generally anticipate telecommunication workloads. When not all of the CPU resources are required, the Node Tuning Operator allows you to take unused CPUs offline to reduce power consumption by manually updating the performance profile. To take unused CPUs offline, you must perform the following tasks: Set the offline CPUs in the performance profile and save the contents of the YAML file: Example performance profile with offlined CPUs apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - intel_idle.max_cstate=0 - idle=poll cpu: isolated: "2-23,26-47" reserved: "0,1,24,25" offlined: "48-59" 1 nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: single-numa-node realTimeKernel: enabled: true 1 Optional. You can list CPUs in the offlined field to take the specified CPUs offline. Apply the updated profile by running the following command: $ oc apply -f my-performance-profile.yaml 13.2.10. Optional: Power saving configurations You can enable power savings for a node that has low priority workloads that are colocated with high priority workloads without impacting the latency or throughput of the high priority workloads. Power saving is possible without modifications to the workloads themselves. Important The feature is supported on Intel Ice Lake and later generations of Intel CPUs. The capabilities of the processor might impact the latency and throughput of the high priority workloads.
When you configure a node with a power saving configuration, you must configure high priority workloads with performance configuration at the pod level, which means that the configuration applies to all the cores used by the pod. By disabling P-states and C-states at the pod level, you can configure high priority workloads for best performance and lowest latency. Table 13.1. Configuration for high priority workloads Annotation Description annotations: cpu-c-states.crio.io: "disable" cpu-freq-governor.crio.io: "<governor>" Provides the best performance for a pod by disabling C-states and specifying the governor type for CPU scaling. The performance governor is recommended for high priority workloads. Prerequisites You enabled C-states and OS-controlled P-states in the BIOS. Procedure Generate a PerformanceProfile with per-pod-power-management set to true : $ podman run --entrypoint performance-profile-creator -v \ /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.13 \ --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true \ --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node \ --must-gather-dir-path /must-gather --power-consumption-mode=low-latency \ 1 --per-pod-power-management=true > my-performance-profile.yaml 1 The power-consumption-mode must be default or low-latency when the per-pod-power-management is set to true . Example PerformanceProfile with perPodPowerManagement apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: [.....] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true Set the default cpufreq governor as an additional kernel argument in the PerformanceProfile custom resource (CR): apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: ... additionalKernelArgs: - cpufreq.default_governor=schedutil 1 1 Using the schedutil governor is recommended; however, you can use other governors such as the ondemand or powersave governors. Set the maximum CPU frequency in the TunedPerformancePatch CR: spec: profile: - data: | [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct = <x> 1 1 The max_perf_pct controls the maximum frequency the cpufreq driver is allowed to set as a percentage of the maximum supported cpu frequency. This value applies to all CPUs. You can check the maximum supported frequency in /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq . As a starting point, you can use a percentage that caps all CPUs at the All Cores Turbo frequency. The All Cores Turbo frequency is the frequency that all cores will run at when the cores are all fully occupied. Add the desired annotations to your high priority workload pods. The annotations override the default settings. Example high priority workload annotation apiVersion: v1 kind: Pod metadata: ... annotations: ... cpu-c-states.crio.io: "disable" cpu-freq-governor.crio.io: "<governor>" ... ... spec: ... runtimeClassName: performance-<profile_name> ... Restart the pods. Additional resources Recommended firmware configuration for vDU cluster hosts . Placing pods on specific nodes using node selectors . 13.2.11. Managing device interrupt processing for guaranteed pod isolated CPUs The Node Tuning Operator can manage host CPUs by dividing them into reserved CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolated CPUs for application containers to run the workloads.
This allows you to set CPUs for low latency workloads as isolated. Device interrupts are load balanced between all isolated and reserved CPUs to avoid CPUs being overloaded, with the exception of CPUs where there is a guaranteed pod running. Guaranteed pod CPUs are prevented from processing device interrupts when the relevant annotations are set for the pod. In the performance profile, globallyDisableIrqLoadBalancing is used to manage whether device interrupts are processed or not. For certain workloads, the reserved CPUs are not always sufficient for dealing with device interrupts, and for this reason, device interrupts are not globally disabled on the isolated CPUs. By default, the Node Tuning Operator does not disable device interrupts on isolated CPUs. To achieve low latency for workloads, some (but not all) pods require the CPUs they are running on to not process device interrupts. A pod annotation, irq-load-balancing.crio.io , is used to define whether device interrupts are processed or not. When configured, CRI-O disables device interrupts only as long as the pod is running. 13.2.11.1. Disabling CPU CFS quota To reduce CPU throttling for individual guaranteed pods, create a pod specification with the annotation cpu-quota.crio.io: "disable" . This annotation disables the CPU completely fair scheduler (CFS) quota at pod run time. The following pod specification contains this annotation: apiVersion: v1 kind: Pod metadata: annotations: cpu-quota.crio.io: "disable" spec: runtimeClassName: performance-<profile_name> ... Note Only disable CPU CFS quota when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU CFS quota can affect the performance of other containers in the cluster. 13.2.11.2. Disabling global device interrupts handling in Node Tuning Operator To configure Node Tuning Operator to disable global device interrupts for the isolated CPU set, set the globallyDisableIrqLoadBalancing field in the performance profile to true . When true , conflicting pod annotations are ignored. When false , IRQ loads are balanced across all CPUs. A performance profile snippet illustrates this setting: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: globallyDisableIrqLoadBalancing: true ... 13.2.11.3. Disabling interrupt processing for individual pods To disable interrupt processing for individual pods, ensure that globallyDisableIrqLoadBalancing is set to false in the performance profile. Then, in the pod specification, set the irq-load-balancing.crio.io pod annotation to disable . The following pod specification contains this annotation: apiVersion: v1 kind: Pod metadata: annotations: irq-load-balancing.crio.io: "disable" spec: runtimeClassName: performance-<profile_name> ... 13.2.12. Upgrading the performance profile to use device interrupt processing When you upgrade the Node Tuning Operator performance profile custom resource definition (CRD) from v1 or v1alpha1 to v2, globallyDisableIrqLoadBalancing is set to true on existing profiles. Note globallyDisableIrqLoadBalancing toggles whether IRQ load balancing will be disabled for the Isolated CPU set. When the option is set to true it disables IRQ load balancing for the Isolated CPU set. Setting the option to false allows the IRQs to be balanced across all CPUs. 13.2.12.1. Supported API Versions The Node Tuning Operator supports v2 , v1 , and v1alpha1 for the performance profile apiVersion field.
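To check which of these versions are served by your cluster, you can, for example, query the PerformanceProfile custom resource definition directly. This is a sketch using standard oc commands; the CRD name shown is the default one installed by the Node Tuning Operator:
$ oc get crd performanceprofiles.performance.openshift.io -o jsonpath='{.spec.versions[*].name}'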
The v1 and v1alpha1 APIs are identical. The v2 API includes an optional boolean field globallyDisableIrqLoadBalancing with a default value of false . 13.2.12.1.1. Upgrading Node Tuning Operator API from v1alpha1 to v1 When upgrading Node Tuning Operator API version from v1alpha1 to v1, the v1alpha1 performance profiles are converted on-the-fly using a "None" Conversion strategy and served to the Node Tuning Operator with API version v1. 13.2.12.1.2. Upgrading Node Tuning Operator API from v1alpha1 or v1 to v2 When upgrading from an older Node Tuning Operator API version, the existing v1 and v1alpha1 performance profiles are converted using a conversion webhook that injects the globallyDisableIrqLoadBalancing field with a value of true . 13.3. Tuning nodes for low latency with the performance profile The performance profile lets you control latency tuning aspects of nodes that belong to a certain machine config pool. After you specify your settings, the PerformanceProfile object is compiled into multiple objects that perform the actual node level tuning: A MachineConfig file that manipulates the nodes. A KubeletConfig file that configures the Topology Manager, the CPU Manager, and the OpenShift Container Platform nodes. The Tuned profile that configures the Node Tuning Operator. You can use a performance profile to specify whether to update the kernel to kernel-rt, to allocate huge pages, and to partition the CPUs for performing housekeeping duties or running workloads. Note You can manually create the PerformanceProfile object or use the Performance Profile Creator (PPC) to generate a performance profile. See the additional resources below for more information on the PPC. Sample performance profile apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: "4-15" 1 reserved: "0-3" 2 hugepages: defaultHugepagesSize: "1G" pages: - size: "1G" count: 16 node: 0 realTimeKernel: enabled: true 3 numa: 4 topologyPolicy: "best-effort" nodeSelector: node-role.kubernetes.io/worker-cnf: "" 5 1 Use this field to isolate specific CPUs to use with application containers for workloads. Set an even number of isolated CPUs to enable the pods to run without errors when hyperthreading is enabled. 2 Use this field to reserve specific CPUs to use with infra containers for housekeeping. 3 Use this field to install the real-time kernel on the node. Valid values are true or false . Setting the true value installs the real-time kernel. 4 Use this field to configure the topology manager policy. Valid values are none (default), best-effort , restricted , and single-numa-node . For more information, see Topology Manager Policies . 5 Use this field to specify a node selector to apply the performance profile to specific nodes. Additional resources For information on using the Performance Profile Creator (PPC) to generate a performance profile, see Creating a performance profile . 13.3.1. Configuring huge pages Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. Use the Node Tuning Operator to allocate huge pages on a specific node. OpenShift Container Platform provides a method for creating and allocating huge pages. Node Tuning Operator provides an easier method for doing this using the performance profile. 
For example, in the hugepages pages section of the performance profile, you can specify multiple blocks of size , count , and, optionally, node : hugepages: defaultHugepagesSize: "1G" pages: - size: "1G" count: 4 node: 0 1 1 node is the NUMA node in which the huge pages are allocated. If you omit node , the pages are evenly spread across all NUMA nodes. Note Wait for the relevant machine config pool status that indicates the update is finished. These are the only configuration steps you need to do to allocate huge pages. Verification To verify the configuration, see the /proc/meminfo file on the node: USD oc debug node/ip-10-0-141-105.ec2.internal # grep -i huge /proc/meminfo Example output AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ## Use oc describe to report the new size: USD oc describe node worker-0.ocp4poc.example.com | grep -i huge Example output hugepages-1g=true hugepages-###: ### hugepages-###: ### 13.3.2. Allocating multiple huge page sizes You can request huge pages with different sizes under the same container. This allows you to define more complicated pods consisting of containers with different huge page size needs. For example, you can define sizes 1G and 2M and the Node Tuning Operator will configure both sizes on the node, as shown here: spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G 13.3.3. Configuring a node for IRQ dynamic load balancing Configure a cluster node for IRQ dynamic load balancing to control which cores can receive device interrupt requests (IRQ). Prerequisites For core isolation, all server hardware components must support IRQ affinity. To check if the hardware components of your server support IRQ affinity, view the server's hardware specifications or contact your hardware provider. Procedure Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Set the performance profile apiVersion to use performance.openshift.io/v2 . Remove the globallyDisableIrqLoadBalancing field or set it to false . Set the appropriate isolated and reserved CPUs. The following snippet illustrates a profile that reserves 2 CPUs. IRQ load-balancing is enabled for pods running on the isolated CPU set: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1 ... Note When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs. Create the pod that uses exclusive CPUs, and set irq-load-balancing.crio.io and cpu-quota.crio.io annotations to disable . For example: apiVersion: v1 kind: Pod metadata: name: dynamic-irq-pod annotations: irq-load-balancing.crio.io: "disable" cpu-quota.crio.io: "disable" spec: containers: - name: dynamic-irq-pod image: "registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13" command: ["sleep", "10h"] resources: requests: cpu: 2 memory: "200M" limits: cpu: 2 memory: "200M" nodeSelector: node-role.kubernetes.io/worker-cnf: "" runtimeClassName: performance-dynamic-irq-profile ... Enter the pod runtimeClassName in the form performance-<profile_name>, where <profile_name> is the name from the PerformanceProfile YAML, in this example, performance-dynamic-irq-profile . Set the node selector to target a cnf-worker. Ensure the pod is running correctly. 
Status should be running , and the correct cnf-worker node should be set: USD oc get pod -o wide Expected output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES dynamic-irq-pod 1/1 Running 0 5h33m <ip-address> <node-name> <none> <none> Get the CPUs that the pod configured for IRQ dynamic load balancing runs on: USD oc exec -it dynamic-irq-pod -- /bin/bash -c "grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'" Expected output Cpus_allowed_list: 2-3 Ensure the node configuration is applied correctly. Log in to the node to verify the configuration. USD oc debug node/<node-name> Expected output Starting pod/<node-name>-debug ... To use host binaries, run `chroot /host` Pod IP: <ip-address> If you don't see a command prompt, try pressing enter. sh-4.4# Verify that you can use the node file system: sh-4.4# chroot /host Expected output sh-4.4# Ensure the default system CPU affinity mask does not include the dynamic-irq-pod CPUs, for example, CPUs 2 and 3. USD cat /proc/irq/default_smp_affinity Example output 33 Ensure the system IRQs are not configured to run on the dynamic-irq-pod CPUs: find /proc/irq/ -name smp_affinity_list -exec sh -c 'i="USD1"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \; Example output /proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5 13.3.4. About support of IRQ affinity setting Some IRQ controllers lack support for IRQ affinity setting and will always expose all online CPUs as the IRQ mask. These IRQ controllers effectively run on CPU 0. The following are examples of drivers and hardware that Red Hat is aware lack support for IRQ affinity setting. The list is by no means exhaustive: Some RAID controller drivers, such as megaraid_sas Many non-volatile memory express (NVMe) drivers Some LAN on motherboard (LOM) network controllers Drivers that use managed_irqs Note The reason they do not support IRQ affinity setting might be associated with factors such as the type of processor, the IRQ controller, or the circuitry connections in the motherboard. If the effective affinity of any IRQ is set to an isolated CPU, it might be a sign of some hardware or driver not supporting IRQ affinity setting.
To find the effective affinity, log in to the host and run the following command: USD find /proc/irq -name effective_affinity -printf "%p: " -exec cat {} \; Example output /proc/irq/0/effective_affinity: 1 /proc/irq/1/effective_affinity: 8 /proc/irq/2/effective_affinity: 0 /proc/irq/3/effective_affinity: 1 /proc/irq/4/effective_affinity: 2 /proc/irq/5/effective_affinity: 1 /proc/irq/6/effective_affinity: 1 /proc/irq/7/effective_affinity: 1 /proc/irq/8/effective_affinity: 1 /proc/irq/9/effective_affinity: 2 /proc/irq/10/effective_affinity: 1 /proc/irq/11/effective_affinity: 1 /proc/irq/12/effective_affinity: 4 /proc/irq/13/effective_affinity: 1 /proc/irq/14/effective_affinity: 1 /proc/irq/15/effective_affinity: 1 /proc/irq/24/effective_affinity: 2 /proc/irq/25/effective_affinity: 4 /proc/irq/26/effective_affinity: 2 /proc/irq/27/effective_affinity: 1 /proc/irq/28/effective_affinity: 8 /proc/irq/29/effective_affinity: 4 /proc/irq/30/effective_affinity: 4 /proc/irq/31/effective_affinity: 8 /proc/irq/32/effective_affinity: 8 /proc/irq/33/effective_affinity: 1 /proc/irq/34/effective_affinity: 2 Some drivers use managed_irqs , whose affinity is managed internally by the kernel and userspace cannot change the affinity. In some cases, these IRQs might be assigned to isolated CPUs. For more information about managed_irqs , see Affinity of managed interrupts cannot be changed even if they target isolated CPU . 13.3.5. Configuring hyperthreading for a cluster To configure hyperthreading for an OpenShift Container Platform cluster, set the CPU threads in the performance profile to the same cores that are configured for the reserved or isolated CPU pools. Note If you configure a performance profile, and subsequently change the hyperthreading configuration for the host, ensure that you update the CPU isolated and reserved fields in the PerformanceProfile YAML to match the new configuration. Warning Disabling a previously enabled host hyperthreading configuration can cause the CPU core IDs listed in the PerformanceProfile YAML to be incorrect. This incorrect configuration can cause the node to become unavailable because the listed CPUs can no longer be found. Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI (oc). Procedure Ascertain which threads are running on what CPUs for the host you want to configure. You can view which threads are running on the host CPUs by logging in to the cluster and running the following command: USD lscpu --all --extended Example output CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000 In this example, there are eight logical CPU cores running on four physical CPU cores. CPU0 and CPU4 are running on physical Core0, CPU1 and CPU5 are running on physical Core 1, and so on. Alternatively, to view the threads that are set for a particular physical CPU core ( cpu0 in the example below), open a command prompt and run the following: USD cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list Example output 0-4 Apply the isolated and reserved CPUs in the PerformanceProfile YAML. For example, you can set logical cores CPU0 and CPU4 as isolated , and logical cores CPU1 to CPU3 and CPU5 to CPU7 as reserved . 
When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs. ... cpu: isolated: 0,4 reserved: 1-3,5-7 ... Note The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node. Important Hyperthreading is enabled by default on most Intel processors. If you enable hyperthreading, all threads processed by a particular core must be isolated or processed on the same core. 13.3.5.1. Disabling hyperthreading for low latency applications When configuring clusters for low latency processing, consider whether you want to disable hyperthreading before you deploy the cluster. To disable hyperthreading, do the following: Create a performance profile that is appropriate for your hardware and topology. Set nosmt as an additional kernel argument. The following example performance profile illustrates this setting: \ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true Note When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs. 13.3.6. Understanding workload hints The following table describes how combinations of power consumption and real-time settings impact on latency. Note The following workload hints can be configured manually. You can also work with workload hints using the Performance Profile Creator. For more information about the performance profile, see the "Creating a performance profile" section. If the workload hint is configured manually and the realTime workload hint is not explicitly set then it defaults to true . Performance Profile creator setting Hint Environment Description Default workloadHints: highPowerConsumption: false realTime: false High throughput cluster without latency requirements Performance achieved through CPU partitioning only. Low-latency workloadHints: highPowerConsumption: false realTime: true Regional datacenters Both energy savings and low-latency are desirable: compromise between power management, latency and throughput. Ultra-low-latency workloadHints: highPowerConsumption: true realTime: true Far edge clusters, latency critical workloads Optimized for absolute minimal latency and maximum determinism at the cost of increased power consumption. Per-pod power management workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true Critical and non-critical workloads Allows for power management per pod. Additional resources For information about using the Performance Profile Creator (PPC) to generate a performance profile, see Creating a performance profile . 13.3.7. Configuring workload hints manually Procedure Create a PerformanceProfile appropriate for the environment's hardware and topology as described in the table in "Understanding workload hints". Adjust the profile to match the expected workload. In this example, we tune for the lowest possible latency. Add the highPowerConsumption and realTime workload hints. Both are set to true here. 
apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: workload-hints spec: ... workloadHints: highPowerConsumption: true 1 realTime: true 2 1 If highPowerConsumption is true , the node is tuned for very low latency at the cost of increased power consumption. 2 Disables some debugging and monitoring features that can affect system latency. Note When the realTime workload hint flag is set to true in a performance profile, add the cpu-quota.crio.io: disable annotation to every guaranteed pod with pinned CPUs. This annotation is necessary to prevent the degradation of the process performance within the pod. If the realTime workload hint is not explicitly set then it defaults to true . Additional resources For information about reducing CPU throttling for individual guaranteed pods, see Disabling CPU CFS quota . 13.3.8. Restricting CPUs for infra and application containers Generic housekeeping and workload tasks use CPUs in a way that may impact latency-sensitive processes. By default, the container runtime uses all online CPUs to run all containers together, which can result in context switches and spikes in latency. Partitioning the CPUs prevents noisy processes from interfering with latency-sensitive processes by separating them from each other. The following table describes how processes run on a CPU after you have tuned the node using the Node Tuning Operator: Table 13.2. Process' CPU assignments Process type Details Burstable and BestEffort pods Runs on any CPU except where low latency workload is running Infrastructure pods Runs on any CPU except where low latency workload is running Interrupts Redirects to reserved CPUs (optional in OpenShift Container Platform 4.7 and later) Kernel processes Pins to reserved CPUs Latency-sensitive workload pods Pins to a specific set of exclusive CPUs from the isolated pool OS processes/systemd services Pins to reserved CPUs The allocatable capacity of cores on a node for pods of all QoS process types, Burstable , BestEffort , or Guaranteed , is equal to the capacity of the isolated pool. The capacity of the reserved pool is removed from the node's total core capacity for use by the cluster and operating system housekeeping duties. Example 1 A node features a capacity of 100 cores. Using a performance profile, the cluster administrator allocates 50 cores to the isolated pool and 50 cores to the reserved pool. The cluster administrator assigns 25 cores to QoS Guaranteed pods and 25 cores for BestEffort or Burstable pods. This matches the capacity of the isolated pool. Example 2 A node features a capacity of 100 cores. Using a performance profile, the cluster administrator allocates 50 cores to the isolated pool and 50 cores to the reserved pool. The cluster administrator assigns 50 cores to QoS Guaranteed pods and one core for BestEffort or Burstable pods. This exceeds the capacity of the isolated pool by one core. Pod scheduling fails because of insufficient CPU capacity. The exact partitioning pattern to use depends on many factors like hardware, workload characteristics and the expected system load. Some sample use cases are as follows: If the latency-sensitive workload uses specific hardware, such as a network interface controller (NIC), ensure that the CPUs in the isolated pool are as close as possible to this hardware. At a minimum, you should place the workload in the same Non-Uniform Memory Access (NUMA) node. The reserved pool is used for handling all interrupts. 
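To check how close a NIC is to the CPUs in the isolated pool, you can, for example, read the NUMA node of the device from sysfs on the host. This is a sketch; replace ens4 with your interface name, note that virtual interfaces might not expose a device entry, and a value of -1 means the platform does not report NUMA locality:
$ cat /sys/class/net/ens4/device/numa_node
Compare the result with the NODE column reported by lscpu --all --extended for the CPUs in the isolated pool.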
When depending on system networking, allocate a sufficiently-sized reserve pool to handle all the incoming packet interrupts. In 4.13 and later versions, workloads can optionally be labeled as sensitive. The decision regarding which specific CPUs should be used for reserved and isolated partitions requires detailed analysis and measurements. Factors like NUMA affinity of devices and memory play a role. The selection also depends on the workload architecture and the specific use case. Important The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node. To ensure that housekeeping tasks and workloads do not interfere with each other, specify two groups of CPUs in the spec section of the performance profile. isolated - Specifies the CPUs for the application container workloads. These CPUs have the lowest latency. Processes in this group have no interruptions and can, for example, reach much higher DPDK zero packet loss bandwidth. reserved - Specifies the CPUs for the cluster and operating system housekeeping duties. Threads in the reserved group are often busy. Do not run latency-sensitive applications in the reserved group. Latency-sensitive applications run in the isolated group. Procedure Create a performance profile appropriate for the environment's hardware and topology. Add the reserved and isolated parameters with the CPUs you want reserved and isolated for the infra and application containers: \ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: "0-4,9" 1 isolated: "5-8" 2 nodeSelector: 3 node-role.kubernetes.io/worker: "" 1 Specify which CPUs are for infra containers to perform cluster and operating system housekeeping duties. 2 Specify which CPUs are for application containers to run workloads. 3 Optional: Specify a node selector to apply the performance profile to specific nodes. Additional resources Managing device interrupt processing for guaranteed pod isolated CPUs Create a pod that gets assigned a QoS class of Guaranteed 13.4. Reducing NIC queues using the Node Tuning Operator The Node Tuning Operator allows you to adjust the network interface controller (NIC) queue count for each network device. By using a PerformanceProfile, the amount of queues can be reduced to the number of reserved CPUs. 13.4.1. Adjusting the NIC queues with the performance profile The performance profile lets you adjust the queue count for each network device. Supported network devices: Non-virtual network devices Network devices that support multiple queues (channels) Unsupported network devices: Pure software network interfaces Block devices Intel DPDK virtual functions Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform cluster running the Node Tuning Operator as a user with cluster-admin privileges. Create and apply a performance profile appropriate for your hardware and topology. For guidance on creating a profile, see the "Creating a performance profile" section. Edit this created performance profile: USD oc edit -f <your_profile_name>.yaml Populate the spec field with the net object. The object list can contain two fields: userLevelNetworking is a required field specified as a boolean flag. If userLevelNetworking is true , the queue count is set to the reserved CPU count for all supported devices. The default is false . 
devices is an optional field specifying a list of devices that will have the queues set to the reserved CPU count. If the device list is empty, the configuration applies to all network devices. The configuration is as follows: interfaceName : This field specifies the interface name, and it supports shell-style wildcards, which can be positive or negative. Example wildcard syntax is as follows: <string> .* Negative rules are prefixed with an exclamation mark. To apply the net queue changes to all devices other than the excluded list, use !<device> , for example, !eno1 . vendorID : The network device vendor ID represented as a 16-bit hexadecimal number with a 0x prefix. deviceID : The network device ID (model) represented as a 16-bit hexadecimal number with a 0x prefix. Note When a deviceID is specified, the vendorID must also be defined. A device that matches all of the device identifiers specified in a device entry interfaceName , vendorID , or a pair of vendorID plus deviceID qualifies as a network device. This network device then has its net queues count set to the reserved CPU count. When two or more devices are specified, the net queues count is set to any net device that matches one of them. Set the queue count to the reserved CPU count for all devices by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices matching any of the defined device identifiers by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "eth0" - interfaceName: "eth1" - vendorID: "0x1af4" deviceID: "0x1000" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices starting with the interface name eth by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "eth*" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices with an interface named anything other than eno1 by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "!eno1" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices that have an interface name eth0 , vendorID of 0x1af4 , and deviceID of 0x1000 by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "eth0" - vendorID: "0x1af4" deviceID: "0x1000" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Apply the updated performance profile: USD oc apply -f <your_profile_name>.yaml Additional resources Creating a performance profile . 13.4.2. 
Verifying the queue status In this section, a number of examples illustrate different performance profiles and how to verify the changes are applied. Example 1 In this example, the net queue count is set to the reserved CPU count (2) for all supported devices. The relevant section from the performance profile is: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true # ... Display the status of the queues associated with a device using the following command: Note Run this command on the node where the performance profile was applied. USD ethtool -l <device> Verify the queue status before the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4 Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The combined channel shows that the total count of reserved CPUs for all supported devices is 2. This matches what is configured in the performance profile. Example 2 In this example, the net queue count is set to the reserved CPU count (2) for all supported network devices with a specific vendorID . The relevant section from the performance profile is: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID = 0x1af4 # ... Display the status of the queues associated with a device using the following command: Note Run this command on the node where the performance profile was applied. USD ethtool -l <device> Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The total count of reserved CPUs for all supported devices with vendorID=0x1af4 is 2. For example, if there is another network device ens2 with vendorID=0x1af4 it will also have total net queues of 2. This matches what is configured in the performance profile. Example 3 In this example, the net queue count is set to the reserved CPU count (2) for all supported network devices that match any of the defined device identifiers. The command udevadm info provides a detailed report on a device. In this example the devices are: # udevadm info -p /sys/class/net/ens4 ... E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4 ... # udevadm info -p /sys/class/net/eth0 ... E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0 ... Set the net queues to 2 for a device with interfaceName equal to eth0 and any devices that have a vendorID=0x1af4 with the following performance profile: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName = eth0 - vendorID = 0x1af4 ... 
Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The total count of reserved CPUs for all supported devices with vendorID=0x1af4 is set to 2. For example, if there is another network device ens2 with vendorID=0x1af4 , it will also have the total net queues set to 2. Similarly, a device with interfaceName equal to eth0 will have total net queues set to 2. 13.4.3. Logging associated with adjusting NIC queues Log messages detailing the assigned devices are recorded in the respective Tuned daemon logs. The following messages might be recorded to the /var/log/tuned/tuned.log file: An INFO message is recorded detailing the successfully assigned devices: INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3 A WARNING message is recorded if none of the devices can be assigned: WARNING tuned.plugins.base: instance net_test: no matching devices available 13.5. Debugging low latency CNF tuning status The PerformanceProfile custom resource (CR) contains status fields for reporting tuning status and debugging latency degradation issues. These fields report on conditions that describe the state of the operator's reconciliation functionality. A typical issue can arise when the status of machine config pools that are attached to the performance profile are in a degraded state, causing the PerformanceProfile status to degrade. In this case, the machine config pool issues a failure message. The Node Tuning Operator contains the performanceProfile.spec.status.Conditions status field: Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded The Status field contains Conditions that specify Type values that indicate the status of the performance profile: Available All machine configs and Tuned profiles have been created successfully and are available for cluster components are responsible to process them (NTO, MCO, Kubelet). Upgradeable Indicates whether the resources maintained by the Operator are in a state that is safe to upgrade. Progressing Indicates that the deployment process from the performance profile has started. Degraded Indicates an error if: Validation of the performance profile has failed. Creation of all relevant components did not complete successfully. Each of these types contain the following fields: Status The state for the specific type ( true or false ). Timestamp The transaction timestamp. Reason string The machine readable reason. Message string The human readable reason describing the state and error details, if any. 13.5.1. Machine config pools A performance profile and its created products are applied to a node according to an associated machine config pool (MCP). The MCP holds valuable information about the progress of applying the machine configurations created by performance profiles that encompass kernel args, kube config, huge pages allocation, and deployment of rt-kernel. 
The Performance Profile controller monitors changes in the MCP and updates the performance profile status accordingly. The only conditions returned by the MCP to the performance profile status is when the MCP is Degraded , which leads to performanceProfile.status.condition.Degraded = true . Example The following example is for a performance profile with an associated machine config pool ( worker-cnf ) that was created for it: The associated machine config pool is in a degraded state: # oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h The describe section of the MCP shows the reason: # oc describe mcp worker-cnf Example output Message: Node node-worker-cnf is reporting: "prepping update: machineconfig.machineconfiguration.openshift.io \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not found" Reason: 1 nodes are reporting degraded status on sync The degraded state should also appear under the performance profile status field marked as degraded = true : # oc describe performanceprofiles performance Example output Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: "prepping update: machineconfig.machineconfiguration.openshift.io \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not found". Reason: MCPDegraded Status: True Type: Degraded 13.6. Collecting low latency tuning debugging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including node tuning, NUMA topology, and other information needed to debug issues with low latency setup. For prompt support, supply diagnostic information for both OpenShift Container Platform and low latency tuning. 13.6.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, such as: Resource definitions Audit logs Service logs You can specify one or more images when you run the command by including the --image argument. When you specify an image, the tool collects data related to that feature or product. When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in your current working directory. 13.6.2. About collecting low latency tuning data Use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with low latency tuning, including: The Node Tuning Operator namespaces and child objects. MachineConfigPool and associated MachineConfig objects. The Node Tuning Operator and associated Tuned objects. Linux Kernel command line options. CPU and NUMA topology Basic PCI device information and NUMA locality. 
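Before running a full must-gather, you can, for example, take a quick look at some of these objects directly. This is a sketch using standard oc commands and the operator's default namespace:
$ oc get tuned -n openshift-cluster-node-tuning-operator
$ oc get mcp,performanceprofile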
To collect debugging information with must-gather , you must specify the Performance Addon Operator must-gather image: --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.13. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator provided automatic, low latency performance tuning for applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. However, you must still use the performance-addon-operator-must-gather image when running the must-gather command. 13.6.3. Gathering data about specific features You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command. Note To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator provided automatic, low latency performance tuning for applications. In OpenShift Container Platform 4.11, these functions are part of the Node Tuning Operator. However, you must still use the performance-addon-operator-must-gather image when running the must-gather command. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI (oc) installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to the Node Tuning Operator: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.13 2 1 The default OpenShift Container Platform must-gather image. 2 The must-gather image for low latency tuning diagnostics. Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the Red Hat Customer Portal . Additional resources For more information about MachineConfig and KubeletConfig, see Managing nodes . For more information about the Node Tuning Operator, see Using the Node Tuning Operator . For more information about the PerformanceProfile, see Configuring huge pages . For more information about consuming huge pages from your containers, see How huge pages are consumed by apps .
|
[
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-rt labels: machineconfiguration.openshift.io/role: worker-rt spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-rt], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-rt: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: realTimeKernel: enabled: true nodeSelector: node-role.kubernetes.io/worker-rt: \"\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-rt",
"oc describe mcp/worker-rt",
"Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt",
"oc get node -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME rt-worker-0.example.com Ready worker,worker-rt 5d17h v1.26.0 128.66.135.107 <none> Red Hat Enterprise Linux CoreOS 46.82.202008252340-0 (Ootpa) 4.18.0-305.30.1.rt7.102.el8_4.x86_64 cri-o://1.26.0-99.rhaos4.10.gitc3131de.el8 [...]",
"apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: \"200Mi\" cpu: \"1\" requests: memory: \"200Mi\" cpu: \"1\"",
"oc apply -f qos-pod.yaml --namespace=qos-example",
"oc get pod qos-demo --namespace=qos-example --output=yaml",
"spec: containers: status: qosClass: Guaranteed",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile status: runtimeClass: performance-manual",
"apiVersion: v1 kind: Pod metadata: annotations: cpu-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>",
"apiVersion: v1 kind: Pod metadata: name: example spec: # nodeSelector: node-role.kubernetes.io/worker-rt: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - intel_idle.max_cstate=0 - idle=poll cpu: isolated: \"2-23,26-47\" reserved: \"0,1,24,25\" offlined: \"48-59\" 1 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: single-numa-node realTimeKernel: enabled: true",
"oc apply -f my-performance-profile.yaml",
"annotations: cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"<governor>\"",
"podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.13 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather -power-consumption-mode=low-latency \\ 1 --per-pod-power-management=true > my-performance-profile.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: [.....] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - cpufreq.default_governor=schedutil 1",
"spec: profile: - data: | [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct = <x> 1",
"apiVersion: v1 kind: Pod metadata: annotations: cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"<governor>\" spec: runtimeClassName: performance-<profile_name>",
"apiVersion: v1 kind: Pod metadata: annotations: cpu-quota.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: globallyDisableIrqLoadBalancing: true",
"apiVersion: performance.openshift.io/v2 kind: Pod metadata: annotations: irq-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"4-15\" 1 reserved: \"0-3\" 2 hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 16 node: 0 realTimeKernel: enabled: true 3 numa: 4 topologyPolicy: \"best-effort\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" 5",
"hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 4 node: 0 1",
"oc debug node/ip-10-0-141-105.ec2.internal",
"grep -i huge /proc/meminfo",
"AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ##",
"oc describe node worker-0.ocp4poc.example.com | grep -i huge",
"hugepages-1g=true hugepages-###: ### hugepages-###: ###",
"spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1",
"apiVersion: v1 kind: Pod metadata: name: dynamic-irq-pod annotations: irq-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: dynamic-irq-pod image: \"registry.redhat.io/openshift4/cnf-tests-rhel8:v4.13\" command: [\"sleep\", \"10h\"] resources: requests: cpu: 2 memory: \"200M\" limits: cpu: 2 memory: \"200M\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" runtimeClassName: performance-dynamic-irq-profile",
"oc get pod -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES dynamic-irq-pod 1/1 Running 0 5h33m <ip-address> <node-name> <none> <none>",
"oc exec -it dynamic-irq-pod -- /bin/bash -c \"grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'\"",
"Cpus_allowed_list: 2-3",
"oc debug node/<node-name>",
"Starting pod/<node-name>-debug To use host binaries, run `chroot /host` Pod IP: <ip-address> If you don't see a command prompt, try pressing enter. sh-4.4#",
"sh-4.4# chroot /host",
"sh-4.4#",
"cat /proc/irq/default_smp_affinity",
"33",
"find /proc/irq/ -name smp_affinity_list -exec sh -c 'i=\"USD1\"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \\;",
"/proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5",
"find /proc/irq -name effective_affinity -printf \"%p: \" -exec cat {} \\;",
"/proc/irq/0/effective_affinity: 1 /proc/irq/1/effective_affinity: 8 /proc/irq/2/effective_affinity: 0 /proc/irq/3/effective_affinity: 1 /proc/irq/4/effective_affinity: 2 /proc/irq/5/effective_affinity: 1 /proc/irq/6/effective_affinity: 1 /proc/irq/7/effective_affinity: 1 /proc/irq/8/effective_affinity: 1 /proc/irq/9/effective_affinity: 2 /proc/irq/10/effective_affinity: 1 /proc/irq/11/effective_affinity: 1 /proc/irq/12/effective_affinity: 4 /proc/irq/13/effective_affinity: 1 /proc/irq/14/effective_affinity: 1 /proc/irq/15/effective_affinity: 1 /proc/irq/24/effective_affinity: 2 /proc/irq/25/effective_affinity: 4 /proc/irq/26/effective_affinity: 2 /proc/irq/27/effective_affinity: 1 /proc/irq/28/effective_affinity: 8 /proc/irq/29/effective_affinity: 4 /proc/irq/30/effective_affinity: 4 /proc/irq/31/effective_affinity: 8 /proc/irq/32/effective_affinity: 8 /proc/irq/33/effective_affinity: 1 /proc/irq/34/effective_affinity: 2",
"lscpu --all --extended",
"CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000",
"cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list",
"0-4",
"cpu: isolated: 0,4 reserved: 1-3,5-7",
"\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true",
"workloadHints: highPowerConsumption: false realTime: false",
"workloadHints: highPowerConsumption: false realTime: true",
"workloadHints: highPowerConsumption: true realTime: true",
"workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: workload-hints spec: workloadHints: highPowerConsumption: true 1 realTime: true 2",
"\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: \"0-4,9\" 1 isolated: \"5-8\" 2 nodeSelector: 3 node-role.kubernetes.io/worker: \"\"",
"oc edit -f <your_profile_name>.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - interfaceName: \"eth1\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth*\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"!eno1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"oc apply -f <your_profile_name>.yaml",
"apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true",
"ethtool -l <device>",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1",
"apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID = 0x1af4",
"ethtool -l <device>",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1",
"udevadm info -p /sys/class/net/ens4 E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4",
"udevadm info -p /sys/class/net/eth0 E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0",
"apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName = eth0 - vendorID = 0x1af4",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1",
"INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3",
"WARNING tuned.plugins.base: instance net_test: no matching devices available",
"Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h",
"oc describe mcp worker-cnf",
"Message: Node node-worker-cnf is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\" Reason: 1 nodes are reporting degraded status on sync",
"oc describe performanceprofiles performance",
"Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\". Reason: MCPDegraded Status: True Type: Degraded",
"--image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.13.",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.13 2",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1"
] |
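As a quick cross-check of the examples above, you can run ethtool on the tuned node from any workstation with cluster access and confirm that the machine config pool has finished rolling out. This is only an illustrative sketch: the node and interface names are examples, and the expected combined channel count depends on how many reserved CPUs your profile defines.

NODE=worker-cnf-0                                      # example node name
oc debug node/$NODE -- chroot /host ethtool -l ens4    # interface name is an example
oc get mcp worker-cnf                                  # confirm UPDATED=True before checking queues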
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/scalability_and_performance/cnf-low-latency-tuning
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.71.1_toolset/making-open-source-more-inclusive
|
Replacing nodes
|
Replacing nodes Red Hat OpenShift Data Foundation 4.9 Instructions for how to safely replace a node in an OpenShift Data Foundation cluster. Red Hat Storage Documentation Team Abstract This document explains how to safely replace a node in a Red Hat OpenShift Data Foundation cluster.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/replacing_nodes/index
|
Chapter 1. Preparing your environment for installation
|
Chapter 1. Preparing your environment for installation 1.1. System requirements The following requirements apply to the networked base operating system: x86_64 architecture The latest version of Red Hat Enterprise Linux 9 or Red Hat Enterprise Linux 8 4-core 2.0 GHz CPU at a minimum A minimum of 12 GB RAM is required for Capsule Server to function. In addition, a minimum of 4 GB RAM of swap space is also recommended. Capsule running with less RAM than the minimum value might not operate correctly. A unique host name, which can contain lower-case letters, numbers, dots (.) and hyphens (-) A current Red Hat Satellite subscription Administrative user (root) access Full forward and reverse DNS resolution using a fully-qualified domain name Satellite only supports UTF-8 encoding. If your territory is USA and your language is English, set en_US.utf-8 as the system-wide locale settings. For more information about configuring system locale in Red Hat Enterprise Linux, see Configuring the system locale in Red Hat Enterprise Linux 9 Configuring basic system settings . Your Satellite must have the Red Hat Satellite Infrastructure Subscription manifest in your Customer Portal. Satellite must have satellite-capsule-6.x repository enabled and synced. To create, manage, and export a Red Hat Subscription Manifest in the Customer Portal, see Creating and managing manifests for a connected Satellite Server in Subscription Central . Satellite Server and Capsule Server do not support shortnames in the hostnames. When using custom certificates, the Common Name (CN) of the custom certificate must be a fully qualified domain name (FQDN) instead of a shortname. This does not apply to the clients of a Satellite. Before you install Capsule Server, ensure that your environment meets the requirements for installation. Warning The version of Capsule must match with the version of Satellite installed. It should not be different. For example, the Capsule version 6.16 cannot be registered with the Satellite version 6.15. Capsule Server must be installed on a freshly provisioned system that serves no other function except to run Capsule Server. The freshly provisioned system must not have the following users provided by external identity providers to avoid conflicts with the local users that Capsule Server creates: apache foreman-proxy postgres pulp puppet redis For more information on scaling your Capsule Servers, see Capsule Server scalability considerations . Certified hypervisors Capsule Server is fully supported on both physical systems and virtual machines that run on hypervisors that are supported to run Red Hat Enterprise Linux. For more information about certified hypervisors, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, Red Hat OpenShift Virtualization and Red Hat Enterprise Linux with KVM . SELinux mode SELinux must be enabled, either in enforcing or permissive mode. Installation with disabled SELinux is not supported. Synchronized system clock The system clock on the base operating system where you are installing your Capsule Server must be synchronized across the network. If the system clock is not synchronized, SSL certificate verification might fail. For example, you can use the Chrony suite for timekeeping. 
For more information, see the following documents: Configuring time synchronization in Red Hat Enterprise Linux 9 Configuring basic system settings Configuring time synchronization in Red Hat Enterprise Linux 8 Configuring basic system settings FIPS mode You can install Capsule on a Red Hat Enterprise Linux system that is operating in FIPS mode. You cannot enable FIPS mode after the installation of Capsule. For more information, see Switching RHEL to FIPS mode in Red Hat Enterprise Linux 9 Security hardening or Switching RHEL to FIPS mode in Red Hat Enterprise Linux 8 Security hardening . Note Satellite supports DEFAULT and FIPS crypto-policies. The FUTURE crypto-policy is not supported for Satellite and Capsule installations. The FUTURE policy is a stricter forward-looking security level intended for testing a possible future policy. For more information, see Using system-wide cryptographic policies in Red Hat Enterprise Linux 9 Security hardening . 1.2. Storage requirements The following table details storage requirements for specific directories. These values are based on expected use case scenarios and can vary according to individual environments. The runtime size was measured with Red Hat Enterprise Linux 7, 8, and 9 repositories synchronized. Table 1.1. Storage requirements for Capsule Server installation Directory Installation Size Runtime Size /var/lib/pulp 1 MB 300 GB /var/lib/pgsql 100 MB 20 GB /usr 3 GB Not Applicable /opt/puppetlabs 500 MB Not Applicable The size of the PostgreSQL database on your Capsule Server can grow significantly with an increasing number of lifecycle environments, content views, or repositories that are synchronized from your Satellite Server. In the largest Satellite environments, the size of /var/lib/pgsql on Capsule Server can grow to double or triple the size of /var/lib/pgsql on your Satellite Server. 1.3. Storage guidelines Consider the following guidelines when installing Capsule Server to increase efficiency. If you mount the /tmp directory as a separate file system, you must use the exec mount option in the /etc/fstab file. If /tmp is already mounted with the noexec option, you must change the option to exec and re-mount the file system. This is a requirement for the puppetserver service to work. Because most Capsule Server data is stored in the /var directory, mounting /var on LVM storage can help the system to scale. Use high-bandwidth, low-latency storage for the /var/lib/pulp/ and PostgreSQL /var/lib/pgsql directories. As Red Hat Satellite has many operations that are I/O intensive, using high latency, low-bandwidth storage causes performance degradation. You can use the storage-benchmark script to get this data. For more information on using the storage-benchmark script, see Impact of Disk Speed on Satellite Operations . File system guidelines Do not use the GFS2 file system as the input-output latency is too high. Log file storage Log files are written to /var/log/messages/, /var/log/httpd/ , and /var/lib/foreman-proxy/openscap/content/ . You can manage the size of these files using logrotate . For more information, see How to use logrotate utility to rotate log files . The exact amount of storage you require for log messages depends on your installation and setup. SELinux considerations for NFS mount When the /var/lib/pulp directory is mounted using an NFS share, SELinux blocks the synchronization process. 
To avoid this, specify the SELinux context of the /var/lib/pulp directory in the file system table by adding the following lines to /etc/fstab : If NFS share is already mounted, remount it using the above configuration and enter the following command: Duplicated packages Packages that are duplicated in different repositories are only stored once on the disk. Additional repositories containing duplicate packages require less additional storage. The bulk of storage resides in the /var/lib/pulp/ directory. These end points are not manually configurable. Ensure that storage is available on the /var file system to prevent storage problems. Symbolic links You cannot use symbolic links for /var/lib/pulp/ . Synchronized RHEL ISO If you plan to synchronize RHEL content ISOs to Satellite, note that all minor versions of Red Hat Enterprise Linux also synchronize. You must plan to have adequate storage on your Satellite to manage this. 1.4. Supported operating systems You can install the operating system from a disc, local ISO image, Kickstart, or any other method that Red Hat supports. Red Hat Capsule Server is supported on the latest versions of Red Hat Enterprise Linux 9 and Red Hat Enterprise Linux 8 that are available at the time when Capsule Server is installed. versions of Red Hat Enterprise Linux including EUS or z-stream are not supported. The following operating systems are supported by the installer, have packages, and are tested for deploying Satellite: Table 1.2. Operating systems supported by satellite-installer Operating System Architecture Notes Red Hat Enterprise Linux 9 x86_64 only Red Hat Enterprise Linux 8 x86_64 only Red Hat advises against using an existing system because the Satellite installer will affect the configuration of several components. Red Hat Capsule Server requires a Red Hat Enterprise Linux installation with the @Base package group with no other package-set modifications, and without third-party configurations or software not directly necessary for the direct operation of the server. This restriction includes hardening and other non-Red Hat security software. If you require such software in your infrastructure, install and verify a complete working Capsule Server first, then create a backup of the system before adding any non-Red Hat software. Do not register Capsule Server to the Red Hat Content Delivery Network (CDN). Red Hat does not support using the system for anything other than running Capsule Server. 1.5. Port and firewall requirements For the components of Satellite architecture to communicate, ensure that the required network ports are open and free on the base operating system. You must also ensure that the required network ports are open on any network-based firewalls. The installation of a Capsule Server fails if the ports between Satellite Server and Capsule Server are not open before installation starts. Use this information to configure any network-based firewalls. Note that some cloud solutions must be specifically configured to allow communications between machines because they isolate machines similarly to network-based firewalls. If you use an application-based firewall, ensure that the application-based firewall permits all applications that are listed in the tables and known to your firewall. If possible, disable the application checking and allow open port communication based on the protocol. 
Integrated Capsule Satellite Server has an integrated Capsule and any host that is directly connected to Satellite Server is a Client of Satellite in the context of this section. This includes the base operating system on which Capsule Server is running. Clients of Capsule Hosts which are clients of Capsules, other than Satellite's integrated Capsule, do not need access to Satellite Server. For more information on Satellite Topology, see Capsule networking in Overview, concepts, and deployment considerations . Required ports can change based on your configuration. The following tables indicate the destination port and the direction of network traffic: Table 1.3. Capsule incoming traffic Destination Port Protocol Service Source Required For Description 53 TCP and UDP DNS DNS Servers and clients Name resolution DNS (optional) 67 UDP DHCP Client Dynamic IP DHCP (optional) 69 UDP TFTP Client TFTP Server (optional) 443, 80 TCP HTTPS, HTTP Client Content Retrieval Content 443, 80 TCP HTTPS, HTTP Client Content Host Registration Capsule CA RPM installation 443 TCP HTTPS Red Hat Satellite Content Mirroring Management 443 TCP HTTPS Red Hat Satellite Capsule API Smart Proxy functionality 443 TCP HTTPS Client Content Host registration Initiation Uploading facts Sending installed packages and traces 1883 TCP MQTT Client Pull based REX (optional) Content hosts for REX job notification (optional) 8000 TCP HTTP Client Provisioning templates Template retrieval for client installers, iPXE or UEFI HTTP Boot 8000 TCP HTTP Client PXE Boot Installation 8140 TCP HTTPS Client Puppet agent Client updates (optional) 8443 TCP HTTPS Client Content Host registration Deprecated and only needed for Client hosts deployed before upgrades 9090 TCP HTTPS Red Hat Satellite Capsule API Capsule functionality 9090 TCP HTTPS Client Register Endpoint Client registration with an external Capsule Server 9090 TCP HTTPS Client OpenSCAP Configure Client (if the OpenSCAP plugin is installed) 9090 TCP HTTPS Discovered Node Discovery Host discovery and provisioning (if the discovery plugin is installed) Any host that is directly connected to Satellite Server is a client in this context because it is a client of the integrated Capsule. This includes the base operating system on which a Capsule Server is running. A DHCP Capsule performs ICMP ping and TCP echo connection attempts to hosts in subnets with DHCP IPAM set to find out if an IP address considered for use is free. This behavior can be turned off using satellite-installer --foreman-proxy-dhcp-ping-free-ip=false . Table 1.4. 
Capsule outgoing traffic Destination Port Protocol Service Destination Required For Description ICMP ping Client DHCP Free IP checking (optional) 7 TCP echo Client DHCP Free IP checking (optional) 22 TCP SSH Target host Remote execution Run jobs 53 TCP and UDP DNS DNS Servers on the Internet DNS Server Resolve DNS records (optional) 53 TCP and UDP DNS DNS Server Capsule DNS Validation of DNS conflicts (optional) 68 UDP DHCP Client Dynamic IP DHCP (optional) 443 TCP HTTPS Satellite Capsule Capsule Configuration management Template retrieval OpenSCAP Remote Execution result upload 443 TCP HTTPS Red Hat Portal SOS report Assisting support cases (optional) 443 TCP HTTPS Satellite Content Sync 443 TCP HTTPS Satellite Client communication Forward requests from Client to Satellite 443 TCP HTTPS Infoblox DHCP Server DHCP management When using Infoblox for DHCP, management of the DHCP leases (optional) 623 Client Power management BMC On/Off/Cycle/Status 7911 TCP DHCP, OMAPI DHCP Server DHCP The DHCP target is configured using --foreman-proxy-dhcp-server and defaults to localhost ISC and remote_isc use a configurable port that defaults to 7911 and uses OMAPI 8443 TCP HTTPS Client Discovery Capsule sends reboot command to the discovered host (optional) Note ICMP to Port 7 UDP and TCP must not be rejected, but can be dropped. The DHCP Capsule sends an ECHO REQUEST to the Client network to verify that an IP address is free. A response prevents IP addresses from being allocated. 1.6. Enabling connections from Satellite Server and clients to a Capsule Server On the base operating system on which you want to install Capsule, you must enable incoming connections from Satellite Server and clients to Capsule Server and make these rules persistent across reboots. Procedure Open the ports for clients on Capsule Server: Allow access to services on Capsule Server: Make the changes persistent: Verification Enter the following command: For more information, see Using and configuring firewalld in Red Hat Enterprise Linux 9 Configuring firewalls and packet filters or Using and configuring firewalld in Red Hat Enterprise Linux 8 Configuring and managing networking .
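Before running the Capsule installation, a few quick checks against the requirements in Section 1.1 can catch problems early. This is an illustrative sketch only, not part of the official procedure:

hostname -f                    # must return a fully qualified domain name, not a shortname
grep MemTotal /proc/meminfo    # at least 12 GB RAM is required
getenforce                     # must report Enforcing or Permissive
chronyc tracking               # confirm the system clock is synchronized
localectl status               # confirm a UTF-8 locale such as en_US.utf-8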
|
[
"nfs.example.com:/nfsshare /var/lib/pulp nfs context=\"system_u:object_r:var_lib_t:s0\" 1 2",
"restorecon -R /var/lib/pulp",
"firewall-cmd --add-port=\"8000/tcp\" --add-port=\"9090/tcp\"",
"firewall-cmd --add-service=dns --add-service=dhcp --add-service=tftp --add-service=http --add-service=https --add-service=puppetmaster",
"firewall-cmd --runtime-to-permanent",
"firewall-cmd --list-all"
] |
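To confirm that the firewall changes above took effect, you can probe a few of the key ports from Satellite Server. The Capsule host name below is a placeholder for your own Capsule FQDN:

for port in 443 8000 9090; do
  nc -zv capsule.example.com "$port"
done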
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_capsule_server/preparing-environment-for-capsule-installation
|
22.8. JBoss Operations Network Plug-in Quickstart
|
22.8. JBoss Operations Network Plug-in Quickstart For testing or demonstrative purposes with a single JBoss Operations Network agent, upload the plug-in to the server then type "plugins update" at the agent command line to force a retrieval of the latest plugins from the server.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/jboss_operations_network_plug-in_quickstart1
|
Chapter 21. Avro DataFormat
|
Chapter 21. Avro DataFormat Available as of Camel version 2.14 This component provides a dataformat for avro, which allows serialization and deserialization of messages using Apache Avro's binary dataformat. Moreover, it provides support for Apache Avro's rpc, by providing producer and consumer endpoints for using avro over netty or http. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-avro</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 21.1. Apache Avro Overview Avro allows you to define message types and a protocol using a JSON-like format and then generate Java code for the specified types and messages. An example of what a schema looks like is shown below. {"namespace": "org.apache.camel.avro.generated", "protocol": "KeyValueProtocol", "types": [ {"name": "Key", "type": "record", "fields": [ {"name": "key", "type": "string"} ] }, {"name": "Value", "type": "record", "fields": [ {"name": "value", "type": "string"} ] } ], "messages": { "put": { "request": [{"name": "key", "type": "Key"}, {"name": "value", "type": "Value"} ], "response": "null" }, "get": { "request": [{"name": "key", "type": "Key"}], "response": "Value" } } } You can easily generate classes from a schema, using Maven, Ant, and so on. More details can be found in the Apache Avro documentation . However, Avro doesn't enforce a schema-first approach, and you can create schemas for your existing classes. Since 2.12 you can use existing protocol interfaces to make RPC calls. You should use an interface for the protocol itself and POJO beans or primitive/String classes for parameter and result types. Here is an example of the class that corresponds to the schema above: package org.apache.camel.avro.reflection; public interface KeyValueProtocol { void put(String key, Value value); Value get(String key); } class Value { private String value; public String getValue() { return value; } public void setValue(String value) { this.value = value; } } Note: Existing classes can be used only for RPC (see below), not in the data format. 21.2. Using the Avro data format Using the avro data format is as easy as specifying the class that you want to marshal or unmarshal in your route. <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:in"/> <marshal> <avro instanceClass="org.apache.camel.dataformat.avro.Message"/> </marshal> <to uri="log:out"/> </route> </camelContext> An alternative can be to specify the dataformat inside the context and reference it from your route. <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <dataFormats> <avro id="avro" instanceClass="org.apache.camel.dataformat.avro.Message"/> </dataFormats> <route> <from uri="direct:in"/> <marshal ref="avro"/> <to uri="log:out"/> </route> </camelContext> In the same manner you can unmarshal using the avro data format. 21.3. Avro Dataformat Options The Avro dataformat supports 2 options, which are listed below. Name Default Java Type Description instanceClassName String Class name to use for marshal and unmarshalling contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 21.4. 
Spring Boot Auto-Configuration The component supports 15 options, which are listed below. Name Description Default Type camel.component.avro.configuration.host Hostname to use String camel.component.avro.configuration.message-name The name of the message to send. String camel.component.avro.configuration.port Port number to use Integer camel.component.avro.configuration.protocol Avro protocol to use Protocol camel.component.avro.configuration.protocol-class-name Avro protocol to use defined by the FQN class name String camel.component.avro.configuration.protocol-location Avro protocol location String camel.component.avro.configuration.reflection-protocol If protocol object provided is reflection protocol. Should be used only with protocol parameter because for protocolClassName protocol type will be auto detected false Boolean camel.component.avro.configuration.single-parameter If true, consumer parameter won't be wrapped into array. Will fail if protocol specifies more then 1 parameter for the message false Boolean camel.component.avro.configuration.transport Transport to use, can be either http or netty AvroTransport camel.component.avro.configuration.uri-authority Authority to use (username and password) String camel.component.avro.enabled Enable avro component true Boolean camel.component.avro.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.dataformat.avro.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.avro.enabled Enable avro dataformat true Boolean camel.dataformat.avro.instance-class-name Class name to use for marshal and unmarshalling String ND
|
[
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-avro</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"{\"namespace\": \"org.apache.camel.avro.generated\", \"protocol\": \"KeyValueProtocol\", \"types\": [ {\"name\": \"Key\", \"type\": \"record\", \"fields\": [ {\"name\": \"key\", \"type\": \"string\"} ] }, {\"name\": \"Value\", \"type\": \"record\", \"fields\": [ {\"name\": \"value\", \"type\": \"string\"} ] } ], \"messages\": { \"put\": { \"request\": [{\"name\": \"key\", \"type\": \"Key\"}, {\"name\": \"value\", \"type\": \"Value\"} ], \"response\": \"null\" }, \"get\": { \"request\": [{\"name\": \"key\", \"type\": \"Key\"}], \"response\": \"Value\" } } }",
"package org.apache.camel.avro.reflection; public interface KeyValueProtocol { void put(String key, Value value); Value get(String key); } class Value { private String value; public String getValue() { return value; } public void setValue(String value) { this.value = value; } }",
"<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:in\"/> <marshal> <avro instanceClass=\"org.apache.camel.dataformat.avro.Message\"/> </marshal> <to uri=\"log:out\"/> </route> </camelContext>",
"<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <dataFormats> <avro id=\"avro\" instanceClass=\"org.apache.camel.dataformat.avro.Message\"/> </dataFormats> <route> <from uri=\"direct:in\"/> <marshal ref=\"avro\"/> <to uri=\"log:out\"/> </route> </camelContext>"
] |
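If you prefer not to wire class generation into Maven or Ant, the KeyValueProtocol definition shown above can also be compiled directly with the avro-tools jar. The tool version and file names here are examples only:

# Save the protocol definition above as keyvalue.avpr, then generate the Java sources.
java -jar avro-tools-1.8.2.jar compile protocol keyvalue.avpr src/main/java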
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/avro-dataformat
|
Chapter 5. Migrating from synchronization to trust automatically by using ipa-winsync-migrate
|
Chapter 5. Migrating from synchronization to trust automatically by using ipa-winsync-migrate In RHEL 8, the synchronization approach to integrating RHEL systems into Active Directory (AD) indirectly is deprecated. Red Hat recommends migrating to the approach based on a trust between Identity Management (IdM) and AD instead. This chapter describes how to migrate from synchronization to trust automatically, by using the ipa-winsync-migrate utility. 5.1. Automatic migration from synchronization to trust by using ipa-winsync-migrate The ipa-winsync-migrate utility migrates all synchronized users from an AD forest, while preserving the existing configuration in the Winsync environment and transferring it into the AD trust. For each AD user created by the Winsync agreement, ipa-winsync-migrate creates an ID override in the Default Trust View. After the migration completes: The ID overrides for the AD users have the following attributes copied from the original entry in Winsync: Login name ( uid ) UID number ( uidnumber ) GID number ( gidnumber ) Home directory ( homedirectory ) GECOS entry ( gecos ) The user accounts in the AD trust keep their original configuration in IdM, which includes: POSIX attributes User groups Role-based access control rules Host-based access control rules SELinux membership sudo rules The new AD users are added as members of an external IdM group. The original Winsync replication agreement, the original synchronized user accounts, and all local copies of the user accounts are removed. Additional resources How the Default Trust View works 5.2. Migrating from synchronization to trust by using ipa-winsync-migrate Prerequisites On RHEL 7, you configured synchronization between RHEL Identity Management (IdM) and AD. Procedure Back up your IdM setup using the ipa-backup utility. See Backing up and restoring IdM . NOTE The migration affects a significant part of the IdM configuration and many user accounts. Creating a backup enables you to restore your original setup if necessary. Create a trust with the synchronized domain. For details, see Installing trust between IdM and AD . Run ipa-winsync-migrate and specify the AD realm and the host name of the AD domain controller: If a conflict occurs in the overrides created by ipa-winsync-migrate , information about the conflict is displayed, but the migration continues. Uninstall the Password Sync service from the AD server. This removes the synchronization agreement from the AD domain controllers. Additional resources ipa-winsync-migrate (1) Backing up and restoring IdM servers using Ansible playbooks
|
[
"ipa-winsync-migrate --realm example.com --server ad.example.com"
] |
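After the migration, two quick checks can confirm the result. These commands are illustrative rather than part of the documented procedure, and the realm name must match the --realm value used above:

ipa trust-show example.com                     # the trust created before the migration
ipa idoverrideuser-find 'Default Trust View'   # ID overrides created by ipa-winsync-migrate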
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/migrating_to_identity_management_on_rhel_8/migrating-from-synchronization-to-trust-automatically-by-using-ipa-winsync-migrate_migrating-an-existing-environment-from-synchronization-to-trust-in-the-context-of-integrating-a-linux-domain-with-an-active-directory-domain
|
7.166. openscap
|
7.166. openscap 7.166.1. RHBA-2013:0362 - openscap bug fix and enhancement update Updated openscap packages that fix various bugs and add several enhancements are now available for Red Hat Enterprise Linux 6. The openscap packages provide OpenSCAP, which is a set of open source libraries for the integration of the Security Content Automation Protocol (SCAP). SCAP is a line of standards that provide a standard language for the expression of Computer Network Defense (CND) related information. Note The openscap packages have been upgraded to upstream version 0.9.2, which provides a number of bug fixes and enhancements over the previous version. (BZ# 829349 ) All users of openscap are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/openscap
|
Chapter 30. Introduction to NetworkManager Debugging
|
Chapter 30. Introduction to NetworkManager Debugging Increasing the log levels for all or certain domains helps to log more details of the operations that NetworkManager performs. You can use this information to troubleshoot problems. NetworkManager provides different levels and domains to produce logging information. The /etc/NetworkManager/NetworkManager.conf file is the main configuration file for NetworkManager. The logs are stored in the journal. 30.1. Introduction to NetworkManager reapply method The NetworkManager service uses a profile to manage the connection settings of a device. Desktop Bus (D-Bus) API can create, modify, and delete these connection settings. For any changes in a profile, D-Bus API clones the existing settings to the modified settings of a connection. Despite cloning, changes do not apply to the modified settings. To make it effective, reactivate the existing settings of a connection or use the reapply() method. The reapply() method has the following features: Updating modified connection settings without deactivation or restart of a network interface. Removing pending changes from the modified connection settings. As NetworkManager does not revert the manual changes, you can reconfigure the device and revert external or manual parameters. Creating different modified connection settings than that of the existing connection settings. Also, reapply() method supports the following attributes: bridge.ageing-time bridge.forward-delay bridge.group-address bridge.group-forward-mask bridge.hello-time bridge.max-age bridge.multicast-hash-max bridge.multicast-last-member-count bridge.multicast-last-member-interval bridge.multicast-membership-interval bridge.multicast-querier bridge.multicast-querier-interval bridge.multicast-query-interval bridge.multicast-query-response-interval bridge.multicast-query-use-ifaddr bridge.multicast-router bridge.multicast-snooping bridge.multicast-startup-query-count bridge.multicast-startup-query-interval bridge.priority bridge.stp bridge.VLAN-filtering bridge.VLAN-protocol bridge.VLANs 802-3-ethernet.accept-all-mac-addresses 802-3-ethernet.cloned-mac-address IPv4.addresses IPv4.dhcp-client-id IPv4.dhcp-iaid IPv4.dhcp-timeout IPv4.DNS IPv4.DNS-priority IPv4.DNS-search IPv4.gateway IPv4.ignore-auto-DNS IPv4.ignore-auto-routes IPv4.may-fail IPv4.method IPv4.never-default IPv4.route-table IPv4.routes IPv4.routing-rules IPv6.addr-gen-mode IPv6.addresses IPv6.dhcp-duid IPv6.dhcp-iaid IPv6.dhcp-timeout IPv6.DNS IPv6.DNS-priority IPv6.DNS-search IPv6.gateway IPv6.ignore-auto-DNS IPv6.may-fail IPv6.method IPv6.never-default IPv6.ra-timeout IPv6.route-metric IPv6.route-table IPv6.routes IPv6.routing-rules Additional resources nm-settings-nmcli(5) man page on your system 30.2. Setting the NetworkManager log level By default, all the log domains are set to record the INFO log level. Disable rate-limiting before collecting debug logs. With rate-limiting, systemd-journald drops messages if there are too many of them in a short time. This can occur when the log level is TRACE . This procedure disables rate-limiting and enables recording debug logs for the all (ALL) domains. Procedure To disable rate-limiting, edit the /etc/systemd/journald.conf file, uncomment the RateLimitBurst parameter in the [Journal] section, and set its value as 0 : Restart the systemd-journald service. Create the /etc/NetworkManager/conf.d/95-nm-debug.conf file with the following content: The domains parameter can contain multiple comma-separated domain:level pairs. 
Restart the NetworkManager service. Verification Query the systemd journal to display the journal entries of the NetworkManager unit: 30.3. Temporarily setting log levels at run time using nmcli You can change the log level at run time using nmcli . Procedure Optional: Display the current logging settings: To modify the logging level and domains, use the following options: To set the log level for all domains to the same LEVEL , enter: To change the level for specific domains, enter: Note that updating the logging level using this command disables logging for all the other domains. To change the level of specific domains and preserve the level of all other domains, enter: 30.4. Viewing NetworkManager logs You can view the NetworkManager logs for troubleshooting. Procedure To view the logs, enter: Additional resources NetworkManager.conf(5) and journalctl(1) man pages on your system 30.5. Debugging levels and domains You can use the levels and domains parameters to manage the debugging for NetworkManager. The level defines the verbosity level, whereas the domains define the category of the messages to record the logs with given severity ( level ). Log levels Description OFF Does not log any messages about NetworkManager ERR Logs only critical errors WARN Logs warnings that can reflect the operation INFO Logs various informational messages that are useful for tracking state and operations DEBUG Enables verbose logging for debugging purposes TRACE Enables more verbose logging than the DEBUG level Note that subsequent levels log all messages from earlier levels. For example, setting the log level to INFO also logs messages contained in the ERR and WARN log level. Additional resources NetworkManager.conf(5) man page on your system
|
[
"RateLimitBurst=0",
"systemctl restart systemd-journald",
"[logging] domains=ALL:TRACE",
"systemctl restart NetworkManager",
"journalctl -u NetworkManager Jun 30 15:24:32 server NetworkManager[164187]: <debug> [1656595472.4939] active-connection[0x5565143c80a0]: update activation type from assume to managed Jun 30 15:24:32 server NetworkManager[164187]: <trace> [1656595472.4939] device[55b33c3bdb72840c] (enp1s0): sys-iface-state: assume -> managed Jun 30 15:24:32 server NetworkManager[164187]: <trace> [1656595472.4939] l3cfg[4281fdf43e356454,ifindex=3]: commit type register (type \"update\", source \"device\", existing a369f23014b9ede3) -> a369f23014b9ede3 Jun 30 15:24:32 server NetworkManager[164187]: <info> [1656595472.4940] manager: NetworkManager state is now CONNECTED_SITE",
"nmcli general logging LEVEL DOMAINS INFO PLATFORM,RFKILL,ETHER,WIFI,BT,MB,DHCP4,DHCP6,PPP,WIFI_SCAN,IP4,IP6,A UTOIP4,DNS,VPN,SHARING,SUPPLICANT,AGENTS,SETTINGS,SUSPEND,CORE,DEVICE,OLPC, WIMAX,INFINIBAND,FIREWALL,ADSL,BOND,VLAN,BRIDGE,DBUS_PROPS,TEAM,CONCHECK,DC B,DISPATCH",
"nmcli general logging level LEVEL domains ALL",
"nmcli general logging level LEVEL domains DOMAINS",
"nmcli general logging level KEEP domains DOMAIN:LEVEL , DOMAIN:LEVEL",
"journalctl -u NetworkManager -b"
] |
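As a concrete instance of the placeholder commands above, the following raises only the DHCP and Wi-Fi domains to TRACE, keeps every other domain at its current level, and then follows the NetworkManager journal:

nmcli general logging level KEEP domains DHCP4:TRACE,DHCP6:TRACE,WIFI:TRACE
journalctl -fu NetworkManager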
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/introduction-to-networkmanager-debugging_configuring-and-managing-networking
|
Chapter 1. Server Variant
|
Chapter 1. Server Variant The following table lists all the packages in the Server variant. For more information about core packages, see the Scope of Coverage Details document. Package Core Package? License 389-ds-base No GPLv2 with exceptions 389-ds-base-libs No GPLv2 with exceptions ConsoleKit No GPLv2+ ConsoleKit-libs No MIT ConsoleKit-x11 No GPLv2+ DeviceKit-power No GPLv2+ ElectricFence Yes GPLv2 GConf2 No LGPLv2+ GConf2-devel Yes LGPLv2+ GConf2-gtk No LGPLv2+ ImageMagick Yes ImageMagick ImageMagick-c++ No ImageMagick MAKEDEV No GPLv2 ModemManager No GPLv2+ MySQL-python Yes GPLv2+ NetworkManager Yes GPLv2+ NetworkManager-glib No GPLv2+ NetworkManager-gnome Yes GPLv2+ NetworkManager-openswan Yes GPLv2+ ORBit2 No LGPLv2+ and GPLv2+ ORBit2-devel No LGPLv2+ and GPLv2+ OpenEXR-libs No BSD OpenIPMI Yes LGPLv2+ and GPLv2+ or BSD OpenIPMI-libs No LGPLv2+ and GPLv2+ or BSD PackageKit No GPLv2+ PackageKit-device-rebind No GPLv2+ PackageKit-glib No GPLv2+ PackageKit-gstreamer-plugin Yes GPLv2+ PackageKit-gtk-module No GPLv2+ PackageKit-yum No GPLv2+ PackageKit-yum-plugin No GPLv2+ PyGreSQL Yes MIT or Python PyKDE4 No LGPLv2+ PyPAM Yes LGPLv2 PyQt4 No GPLv3 or GPLv2 with exceptions PyQt4-devel Yes GPLv3 or GPLv2 with exceptions PyXML No MIT and Python and ZPLv1.0 and BSD PyYAML Yes MIT SDL No LGPLv2+ SDL-devel Yes LGPLv2+ SOAPpy No BSD and ZPLv2.0 TurboGears2 Yes MIT Xaw3d No MIT abrt No GPLv2+ abrt-addon-ccpp Yes GPLv2+ abrt-addon-kerneloops Yes GPLv2+ abrt-addon-python Yes GPLv2+ abrt-cli Yes GPLv2+ abrt-desktop Yes GPLv2+ abrt-gui Yes GPLv2+ Package Core Package? License abrt-libs No GPLv2+ abrt-python No GPLv2+ abrt-tui No GPLv2+ abyssinica-fonts Yes OFL acl Yes GPLv2+ acpid Yes GPLv2+ adcli Yes LGPLv2+ aic94xx-firmware Yes Redistributable, no modification permitted aide Yes GPLv2+ akonadi No LGPLv2+ alacarte Yes LGPLv2+ alsa-lib No LGPLv2+ alsa-lib-devel Yes LGPLv2+ alsa-plugins-pulseaudio Yes LGPLv2+ alsa-utils Yes GPLv2+ amanda No BSD and LGPLv2 and GPLv3+ and GPLv2 amanda-client Yes BSD and LGPLv2 and GPLv3+ and GPLv2 amanda-server Yes BSD and LGPLv2 and GPLv3+ and GPLv2 amtu Yes CPL anaconda No GPLv2+ anaconda-yum-plugins No GPLv2+ ant Yes ASL 2.0 and W3C ant-antlr No ASL 2.0 and W3C ant-apache-bcel No ASL 2.0 and W3C ant-apache-bsf No ASL 2.0 and W3C ant-apache-log4j No ASL 2.0 and W3C ant-apache-oro No ASL 2.0 and W3C ant-apache-regexp No ASL 2.0 and W3C ant-apache-resolver No ASL 2.0 and W3C ant-commons-logging No ASL 2.0 and W3C ant-commons-net No ASL 2.0 and W3C ant-javamail No ASL 2.0 and W3C ant-jdepend No ASL 2.0 and W3C ant-jsch No ASL 2.0 and W3C ant-junit No ASL 2.0 and W3C ant-nodeps No ASL 2.0 and W3C ant-swing No ASL 2.0 and W3C ant-trax No ASL 2.0 and W3C anthy No LGPLv2+ and GPLv2 antlr No Public Domain apache-jasper No ASL 2.0 apache-tomcat-apis No ASL 2.0 apr No ASL 2.0 apr-devel No ASL 2.0 apr-util No ASL 2.0 apr-util-devel No ASL 2.0 apr-util-ldap No ASL 2.0 arptables_jf Yes GPLv2+ arpwatch Yes BSD with advertising arts No LGPLv2+ Package Core Package? 
License arts-devel No LGPLv2+ aspell No LGPLv2 and LGPLv2+ and GPLv2+ and MIT at Yes GPLv2+ at-spi Yes LGPLv2+ at-spi-python No LGPLv2+ atk Yes LGPLv2+ atk-devel Yes LGPLv2+ atlas Yes BSD atlas-3dnow No BSD atlas-sse No BSD atlas-sse2 No BSD atlas-sse3 No BSD atlas-z10 No BSD atlas-z196 No BSD atmel-firmware Yes Redistributable, no modification permitted attr Yes GPLv2+ audiofile No LGPLv2+ audispd-plugins Yes GPLv2+ audit Yes GPLv2+ audit-libs No LGPLv2+ audit-libs-devel Yes LGPLv2+ audit-libs-python No LGPLv2+ audit-viewer Yes GPLv2 augeas-libs No LGPLv2+ authconfig Yes GPLv2+ authconfig-gtk Yes GPLv2+ authd Yes GPLv2+ autoconf Yes GPLv3+ and GFDL autofs Yes GPLv2+ automake Yes GPLv2+ and GFDL automoc No BSD avahi No LGPLv2 avahi-autoipd No LGPLv2 avahi-glib No LGPLv2 avahi-gobject No LGPLv2 avahi-libs No LGPLv2 avahi-tools No LGPLv2 avahi-ui No LGPLv2 avalon-framework No ASL 1.1 avalon-logkit No ASL 1.1 axis No ASL 2.0 b43-fwcutter Yes BSD b43-openfwwf Yes GPLv2 babel Yes BSD babl No LGPLv3+ and GPLv3+ bacula-client Yes GPLv2 with exceptions bacula-common No GPLv2 with exceptions basesystem Yes Public Domain bash Yes GPLv3+ batik No ASL 2.0 Package Core Package? License bc Yes GPLv2+ bcel No ASL 2.0 bfa-firmware Yes Redistributable, no modification permitted bind Yes ISC bind-chroot Yes ISC bind-dyndb-ldap Yes GPLv2+ bind-libs No ISC bind-utils Yes ISC binutils Yes GPLv3+ binutils-devel Yes GPLv3+ biosdevname Yes GPLv2 bison Yes GPLv3+ bitmap-fixed-fonts Yes GPLv2 bitmap-lucida-typewriter-fonts Yes Lucida blas No BSD blktrace Yes GPLv2+ bltk Yes BSD bluez No GPLv2+ bluez-libs No GPLv2+ boost No Boost boost-date-time No Boost boost-devel Yes Boost boost-filesystem No Boost boost-graph No Boost boost-iostreams No Boost boost-math No Boost boost-program-options No Boost boost-python No Boost boost-regex No Boost boost-serialization No Boost boost-signals No Boost boost-system No Boost boost-test No Boost boost-thread No Boost boost-wave No Boost bpg-chveulebrivi-fonts Yes GPL+ with exceptions bpg-courier-fonts Yes GPL+ with exceptions bpg-fonts-common No GPL+ with exceptions bpg-glaho-fonts Yes GPL+ with exceptions brasero No GPLv2+ brasero-libs No GPLv2+ brasero-nautilus Yes GPLv2+ bridge-utils Yes GPLv2+ brltty Yes GPLv2+ bsf No ASL 2.0 btparser No GPLv2+ btrfs-progs No GPLv2 busybox No GPLv2 byacc Yes Public Domain byzanz Yes GPLv3+ Package Core Package? 
License bzip2 Yes BSD bzip2-devel Yes BSD bzip2-libs No BSD bzr Yes GPLv2+ c-ares No MIT c2050 No GPL+ c2070 No GPL+ ca-certificates No Public Domain cachefilesd Yes GPL2+ cairo Yes LGPLv2 or MPLv1.1 cairo-devel Yes LGPLv2 or MPLv1.1 cairomm No LGPLv2+ cas Yes GPLv3+ ccid Yes LGPLv2+ cdparanoia No GPLv2 and LGPLv2 cdparanoia-libs No LGPLv2 cdrdao No GPLv2+ celt051 No BSD certmonger Yes GPLv3+ cgdcbxd Yes GPLv2 check No LGPLv2+ check-devel No LGPLv2+ checkpolicy No GPLv2 cheese Yes GPLv2+ chkconfig No GPLv2 chrony Yes GPLv2 chrpath Yes GPL+ cifs-utils Yes GPLv3 cim-schema No DMTF cjet No GPLv2+ cjkuni-fonts-common No Arphic cjkuni-fonts-ghostscript Yes Arphic cjkuni-ukai-fonts Yes Arphic cjkuni-uming-fonts Yes Arphic classpathx-jaf No GPLv2+ classpathx-mail No GPLv2+ with exceptions cloog-ppl No GPLv2+ cloud-init Yes GPLv3 clucene-core No LGPLv2+ or ASL 2.0 clutter No LGPLv2+ cmake Yes BSD and MIT and zlib compat-dapl Yes GPLv2 or BSD or CPL compat-db Yes BSD compat-db42 No BSD compat-db43 No BSD compat-expat1 Yes MIT compat-gcc-34 Yes GPLv2+ and GPLv2+ with exceptions compat-gcc-34-c++ Yes GPLv2+ and GPLv2+ with exceptions compat-gcc-34-g77 Yes GPLv2+ and GPLv2+ with exceptions compat-glibc Yes LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ Package Core Package? License compat-glibc-headers No LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ compat-libcap1 Yes BSD-like and LGPL compat-libf2c-34 Yes GPLv2+ and GPLv2+ with exceptions compat-libgcc-296 Yes GPLv2+ compat-libgfortran-41 Yes GPLv2+ with exceptions compat-libstdc++-295 Yes GPL compat-libstdc++-296 Yes GPLv2+ compat-libstdc++-33 Yes GPLv2+ with exceptions compat-libtermcap Yes GPLv2+ compat-libxcb No MIT compat-openldap Yes OpenLDAP compat-opensm-libs Yes GPLv2 or BSD compat-readline5 Yes GPLv2+ compat-xcb-util Yes MIT compiz No GPLv2+ and LGPLv2+ and MIT compiz-gnome Yes GPLv2+ and LGPLv2+ and MIT comps-extras No GPL+ and LGPL+ conman Yes GPLv3+ control-center Yes GPLv2+ and GFDL control-center-extra Yes GPLv2+ and GFDL control-center-filesystem No GPLv2+ and GFDL convmv Yes GPLv2 or GPLv3 coolkey Yes LGPLv2 copy-jdk-configs No BSD coreutils Yes GPLv3+ coreutils-libs No GPLv3+ cpio Yes GPLv3+ cpp No GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions cpufrequtils No GPLv2 cpuid Yes GPLv2+ cpupowerutils No GPLv2 cpuspeed Yes GPLv2+ cracklib No LGPLv2+ cracklib-dicts No LGPLv2+ cracklib-python No LGPLv2+ crash Yes GPLv3 crash-gcore-command Yes GPLv2 crash-trace-command Yes GPLv2 crda No ISC createrepo No GPLv2 cronie Yes MIT and BSD and ISC and GPLv2 cronie-anacron No MIT and BSD and ISC and GPLv2 cronie-noanacron No MIT and BSD and ISC and GPLv2 crontabs Yes Public Domain and GPLv2 crypto-utils Yes MIT and GPLv2+ cryptsetup-luks Yes GPLv2 cryptsetup-luks-libs No GPLv2 cryptsetup-reencrypt Yes GPLv2+ and LGPLv2+ cryptsetup-reencrypt-libs No GPLv2+ and LGPLv2+ cscope Yes BSD Package Core Package? 
License ctags Yes GPLv2+ or Public Domain ctags-etags Yes GPLv2+ or Public Domain ctan-cm-lgc-fonts-common No GPLv2+ with exceptions ctan-cm-lgc-roman-fonts No GPLv2+ with exceptions ctan-cm-lgc-sans-fonts No GPLv2+ with exceptions ctan-cm-lgc-typewriter-fonts No GPLv2+ with exceptions ctan-kerkis-fonts-common No LPPL ctan-kerkis-sans-fonts No LPPL ctan-kerkis-serif-fonts No LPPL ctapi-common No MIT culmus-aharoni-clm-fonts Yes GPLv2 culmus-caladings-clm-fonts Yes GPLv2 culmus-david-clm-fonts Yes GPLv2 culmus-drugulin-clm-fonts Yes GPLv2 culmus-ellinia-clm-fonts Yes GPLv2 culmus-fonts-common No GPLv2 culmus-frank-ruehl-clm-fonts Yes GPLv2 culmus-miriam-clm-fonts Yes GPLv2 culmus-miriam-mono-clm-fonts Yes GPLv2 culmus-nachlieli-clm-fonts Yes GPLv2 culmus-yehuda-clm-fonts Yes GPLv2 cups Yes GPLv2 cups-devel Yes LGPLv2 cups-libs No LGPLv2 cups-lpd Yes GPLv2 cups-pk-helper Yes GPLv2+ curl No MIT cvs Yes GPL+ and GPLv2+ and LGPL+ cvs-inetd Yes GPL+ cyrus-imapd Yes BSD cyrus-imapd-utils No BSD cyrus-sasl No BSD cyrus-sasl-devel Yes BSD cyrus-sasl-gssapi No BSD cyrus-sasl-lib No BSD cyrus-sasl-md5 No BSD cyrus-sasl-plain Yes BSD dapl Yes GPLv2 or BSD or CPL dash No BSD db4 Yes Sleepycat and BSD db4-cxx No Sleepycat and BSD db4-devel Yes Sleepycat and BSD db4-utils No Sleepycat and BSD dbus Yes GPLv2+ or AFL dbus-c++ No LGPLv2+ dbus-devel Yes GPLv2+ or AFL dbus-glib No AFL and GPLv2+ dbus-glib-devel Yes AFL and GPLv2+ dbus-libs Yes GPLv2+ or AFL dbus-python No MIT Package Core Package? License dbus-qt No AFL or GPLv2+ dbus-x11 No GPLv2+ or AFL dcraw Yes GPLv2+ dejagnu Yes GPLv2+ dejavu-fonts-common No Bitstream Vera and Public Domain dejavu-lgc-sans-mono-fonts No Bitstream Vera and Public Domain dejavu-sans-fonts Yes Bitstream Vera and Public Domain dejavu-sans-mono-fonts Yes Bitstream Vera and Public Domain dejavu-serif-fonts Yes Bitstream Vera and Public Domain deltarpm No BSD desktop-effects No GPLv2+ desktop-file-utils Yes GPLv2+ devhelp No GPLv2+ device-mapper No GPLv2 device-mapper-event No GPLv2 device-mapper-event-libs No LGPLv2 device-mapper-libs No LGPLv2 device-mapper-multipath Yes GPL+ device-mapper-multipath-libs No GPL+ device-mapper-persistent-data Yes GPLv3+ dhclient Yes ISC dhcp Yes ISC dhcp-common No ISC dialog No LGPLv2 diffstat Yes MIT diffutils No GPLv2+ dmidecode No GPLv2+ dmraid Yes GPLv2+ dmraid-events No GPLv2+ dmz-cursor-themes No CC-BY-SA dnsmasq Yes GPLv2 or GPLv3 docbook-dtds No
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/package_manifest/chap-base-server-variant
|
Chapter 4. Setting up an authentication method for RHOSP
|
Chapter 4. Setting up an authentication method for RHOSP The high availability fence agents and resource agents support three authentication methods for communicating with RHOSP: Authentication with a clouds.yaml configuration file Authentication with an OpenRC environment script Authentication with a username and password through Pacemaker After determining the authentication method to use for the cluster, specify the appropriate authentication parameters when creating a fencing or cluster resource. 4.1. Authenticating with RHOSP by using a clouds.yaml file The procedures in this document that use a clouds.yaml file for authentication use the clouds.yaml file shown in this procedure. Those procedures specify ha-example for the cloud= parameter, as defined in this file. Procedure On each node that will be part of your cluster, create a clouds.yaml file, as in the following example. For information about creating a clouds.yaml file, see Users and Identity Management Guide . Test whether authentication is successful and you have access to the RHOSP API with the following basic RHOSP command, substituting the name of the cloud you specified in the clouds.yaml file you created for ha-example . If this command does not display a server list, contact your RHOSP administrator. Specify the cloud parameter when creating a cluster resource or a fencing resource. 4.2. Authenticating with RHOSP by using an OpenRC environment script To use an OpenRC environment script to authenticate with RHOSP, perform the following steps. Procedure On each node that will be part of your cluster, configure an OpenRC environment script. For information about creating an OpenRC environment script, see Set environment variables using the OpenStack RC file . Test whether authentication is successful and you have access to the RHOSP API with the following basic RHOSP command. If this command does not display a server list, contact your RHOSP administrator. Specify the openrc parameter when creating a cluster resource or a fencing resource. 4.3. Authenticating with RHOSP by means of a username and password To authenticate with RHOSP by means of a username and password, specify the username , password , and auth_url parameters for a cluster resource or a fencing resource when you create the resource. Additional authentication parameters may be required, depending on the RHOSP configuration. The RHOSP administrator provides the authentication parameters to use.
|
[
"cat .config/openstack/clouds.yaml clouds: ha-example: auth: auth_url: https://<ip_address>:13000/ project_name: rainbow username: unicorns password: <password> user_domain_name: Default project_domain_name: Default <. . . additional options . . .> region_name: regionOne verify: False",
"openstack --os-cloud=ha-example server list",
"openstack server list"
] |
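As an illustration of the last step, a fence device created with pcs might pass the cloud name from clouds.yaml as shown below. This is only a sketch: the agent options other than cloud, and the host-to-instance mapping, are placeholders that depend on your deployment.

pcs stonith create fence-rhosp fence_openstack cloud="ha-example" pcmk_host_map="node01:node01-instance-uuid"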
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/authentication-methods-for-rhosp_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform
|
7.4. Configuring Transparent Huge Pages
|
7.4. Configuring Transparent Huge Pages Transparent Huge Pages (THP) is an alternative solution to HugeTLB. With THP, the kernel automatically assigns huge pages to processes, so huge pages do not need to be reserved manually. The THP feature has two modes of operation: system-wide and per-process. When THP is enabled system-wide, the kernel tries to assign huge pages to any process when it is possible to allocate huge pages and the process is using a large contiguous virtual memory area. If THP is enabled per-process, the kernel only assigns huge pages to individual processes' memory areas specified with the madvise() system call. Note that the THP feature only supports 2-MB pages. Transparent huge pages are enabled by default. To check the current status, run: To enable transparent huge pages, run: To prevent applications from allocating more memory resources than necessary, you can disable huge pages system-wide and only enable them inside MADV_HUGEPAGE madvise regions by running: To disable transparent huge pages, run: Sometimes, providing low latency to short-lived allocations has higher priority than immediately achieving the best performance with long-lived allocations. In such cases, direct compaction can be disabled while leaving THP enabled. Direct compaction is a synchronous memory compaction during the huge page allocation. Disabling direct compaction provides no guarantee of saving memory, but can decrease the risk of higher latencies during frequent page faults. Note that if the workload benefits significantly from THP, disabling direct compaction decreases performance. To disable direct compaction, run: For comprehensive information on transparent huge pages, see the /usr/share/doc/kernel-doc- kernel_version /Documentation/vm/transhuge.txt file, which is available after installing the kernel-doc package.
|
[
"cat /sys/kernel/mm/transparent_hugepage/enabled",
"echo always > /sys/kernel/mm/transparent_hugepage/enabled",
"echo madvise > /sys/kernel/mm/transparent_hugepage/enabled",
"echo never > /sys/kernel/mm/transparent_hugepage/enabled",
"echo madvise > /sys/kernel/mm/transparent_hugepage/defrag"
] |
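The echo commands above only change the setting for the running kernel. One common way to make the choice persistent across reboots, shown here for the madvise mode as an assumed preference, is the transparent_hugepage= kernel parameter:

grubby --update-kernel=ALL --args="transparent_hugepage=madvise"
# After the next reboot, verify the active mode (the selected value appears in brackets):
cat /sys/kernel/mm/transparent_hugepage/enabled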
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-configuring_transparent_huge_pages
|
Appendix B. Using Red Hat Maven repositories
|
Appendix B. Using Red Hat Maven repositories This section describes how to use Red Hat-provided Maven repositories in your software. B.1. Using the online repository Red Hat maintains a central Maven repository for use with your Maven-based projects. For more information, see the repository welcome page . If you add the repository to your Maven settings, that configuration applies to all Maven projects owned by your user, as long as POM file does not override the repository configuration and the included profile is enabled. Procedure Locate the Maven settings.xml file. It is usually inside the .m2 directory in the user home directory. If the file does not exist, use a text editor to create it. On Linux or UNIX: /home/ <username> /.m2/settings.xml On Windows: C:\Users\<username>\.m2\settings.xml Add a new profile containing the Red Hat repository to the profiles element of the settings.xml file, as in the following example: Example: A Maven settings.xml file containing the Red Hat repository <settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings> For more information about Maven configuration, see the Maven settings reference . B.2. Using a local repository Red Hat provides file-based Maven repositories for some of its components. These are delivered as downloadable archives that you can extract to your local filesystem. To configure Maven to use a locally extracted repository, apply the following XML in your Maven settings or POM file: <repository> <id>red-hat-local</id> <url> USD{repository-url} </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <pluginRepository> <id>red-hat-local</id> <url> USD{repository-url} </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> USD{repository-url} must be a file URL containing the local filesystem path of the extracted repository. Table B.1. Example URLs for local Maven repositories Operating system Filesystem path URL Linux or UNIX /home/alice/maven-repository file:/home/alice/maven-repository Windows C:\repos\red-hat file:C:\repos\red-hat Revised on 2024-11-07 15:46:11 UTC
|
[
"/home/ <username> /.m2/settings.xml",
"C:\\Users\\<username>\\.m2\\settings.xml",
"<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>",
"<repository> <id>red-hat-local</id> <url> USD{repository-url} </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <pluginRepository> <id>red-hat-local</id> <url> USD{repository-url} </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository>"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/getting_started_with_amq_broker/using_red_hat_maven_repositories
|
Chapter 3. User tasks
|
Chapter 3. User tasks 3.1. Creating applications from installed Operators This guide walks developers through an example of creating applications from an installed Operator using the Red Hat OpenShift Service on AWS web console. 3.1.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to a Red Hat OpenShift Service on AWS cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the Red Hat OpenShift Service on AWS web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators → Installed Operators page. The Operators that have been installed to the cluster by the dedicated-admin and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similarly to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, dedicated-admins or developers with proper access can now easily use the database with their applications.
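For reference, the minimal starting template that the console presents corresponds to a small EtcdCluster manifest that can also be managed from the CLI. The following sketch assumes the etcd.database.coreos.com/v1beta2 API group used by the community etcd Operator and illustrative values for the name, size, and version; adjust them to match the template shown in your console.

apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example
  namespace: my-etcd
spec:
  size: 3
  version: 3.2.13

If you save this to a file such as etcd-cluster.yaml , you can create it with oc apply -f etcd-cluster.yaml and watch the member pods start with oc get pods -n my-etcd .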
|
[
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/operators/user-tasks
|
Part II. Known Issues
|
Part II. Known Issues This part documents known problems in Red Hat Enterprise Linux 6.10.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_release_notes/part-red_hat_enterprise_linux-6.10_release_notes-known_issues
|
Chapter 8. Installing a cluster on GCP into a shared VPC
|
Chapter 8. Installing a cluster on GCP into a shared VPC In OpenShift Container Platform version 4.15, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). In this installation method, the cluster is configured to use a VPC from a different GCP project. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IP addresses from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation . The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. You have a GCP host project which contains a shared VPC network. You configured a GCP project to host the cluster. This project, known as the service project, must be attached to the host project. For more information, see Attaching service projects in the GCP documentation . You have a GCP service account that has the required GCP permissions in both the host and service projects. 8.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
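For example, after the cluster is installed and the key has been distributed through the Ignition config files, an interactive session to a node takes a form similar to the following. The node address is a placeholder; in practice you would look it up with oc get nodes or in your cloud console, and the key path must match the key you supply during installation.

ssh -i ~/.ssh/id_ed25519 core@<node_address>

Direct SSH access of this kind is intended for installation debugging and disaster recovery rather than routine administration.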
Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 8.5. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) into a shared VPC, you must generate the install-config.yaml file and modify it so that the cluster uses the correct VPC networks, DNS zones, and project names. 8.5.1. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for GCP 8.5.2. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 8.5.3. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. 
Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 8.5.4. Sample customized install-config.yaml file for shared VPC installation There are several configuration parameters which are required to install OpenShift Container Platform on GCP using a shared VPC. The following is a sample install-config.yaml file which demonstrates these fields. Important This sample YAML file is provided for reference only. You must modify this file with the correct values for your environment and cluster. apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 network: shared-vpc 4 networkProjectID: host-project-name 5 projectID: service-project-name 6 region: us-east1 defaultMachinePlatform: tags: 7 - global-tag1 controlPlane: name: master platform: gcp: tags: 8 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 9 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 10 1 credentialsMode must be set to Passthrough or Manual . See the "Prerequisites" section for the required GCP permissions that your service account must have. 2 The name of the subnet in the shared VPC for compute machines to use. 3 The name of the subnet in the shared VPC for control plane machines to use. 4 The name of the shared VPC. 5 The name of the host project where the shared VPC exists. 6 The name of the GCP project where you want to install the cluster. 7 8 9 Optional. One or more network tags to apply to compute machines, control plane machines, or all machines. 10 You can optionally provide the sshKey value that you use to access the machines in your cluster. 8.5.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. 
You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . 
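Once the cluster is running, you can confirm what the installation program actually configured by inspecting that Proxy object. This is a read-only check with the standard CLI and does not assume any fields beyond those documented above:

oc get proxy/cluster -o yaml

The httpProxy , httpsProxy , and noProxy values from the install-config.yaml file appear in the object's spec , and the status section shows the effective values in use by the cluster.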
Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 8.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 8.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. 
Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 8.1. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 8.7.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 8.7.2.1. 
Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 8.2. Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 8.7.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. 
This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 8.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. 
Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 8.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 8.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 8.11. steps Customize your cluster . If necessary, you can opt out of remote health reporting .
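As a quick sanity check after exporting the kubeconfig and logging in, you can confirm that the nodes have joined the cluster and that the cluster Operators have finished rolling out. These are standard read-only commands and assume nothing specific to the shared VPC configuration:

oc get nodes
oc get clusteroperators

All nodes should report a Ready status, and each cluster Operator should eventually report Available=True and Degraded=False before you continue with post-installation customization.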
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 network: shared-vpc 4 networkProjectID: host-project-name 5 projectID: service-project-name 6 region: us-east1 defaultMachinePlatform: tags: 7 - global-tag1 controlPlane: name: master platform: gcp: tags: 8 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 9 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA... 10",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_gcp/installing-gcp-shared-vpc
|
Chapter 20. Using the administration events page
|
Chapter 20. Using the administration events page You can view administration event information in a single interface with Red Hat Advanced Cluster Security for Kubernetes (RHACS). You can use this interface to help you understand and interpret important event details. 20.1. Accessing the event logs in different domains By viewing the administration events page, you can access various event logs in different domains. Procedure In the RHACS platform, go to Platform Configuration Administration Events . 20.2. Administration events page overview The administration events page organizes information in the following groups: Domain : Categorizes events by the specific area or domain within RHACS in which the event occurred. This classification helps organize and understand the context of events. The following domains are included: Authentication General Image Scanning Integrations Resource type : Classifies events based on the resource or component type involved. The following resource types are included: API Token Cluster Image Node Notifier Level : Indicates the severity or importance of an event. The following levels are included: Error Warning Success Info Unknown Event last occurred at : Provides information about the timestamp and date when an event occurred. It helps track the timing of events, which is essential for diagnosing issues and understanding the sequence of actions or incidents. Count : Indicates the number of times a particular event occurred. This number is useful in assessing the frequency of an issue. An event that has occurred multiple times indicates a persistent issue that you need to fix. Each event also gives you an indication of what you need to do to fix the error. 20.3. Getting information about the events in a particular domain By viewing the details of an administration event, you get more information about the events in that particular domain. This enables you to better understand the context and details of the events. Procedure In the Administration Events page, click the domain to view its details. 20.4. Administration event details overview The administration event provides log information that describes the error or event. The logs provide the following information: Context of the event Steps to take to fix the error The administration event page organizes information in the following groups: Resource type : Classifies events based on the resource or component type involved. The following resource types are included: API Token Cluster Image Node Notifier Resource name : Specifies the name of the resource or component to which the event refers. It identifies the specific instance within the domain where the event occurred. Event type : Specifies the source of the event. Central generates log events that correspond to administration events created from log statements. Event ID : A unique identifier composed of alphanumeric characters that is assigned to each event. Event IDs can be useful in identifying, tracking, and managing events over time. Created at : Indicates the timestamp and date when the event was originally created or recorded. Last occurred at : Specifies the timestamp and date when the event last occurred. This tracks the timing of the event, which can be critical for diagnosing and fixing recurring issues. Count : Indicates the number of times a particular event occurred. This number is useful in assessing the frequency of an issue. An event that has occurred multiple times indicates a persistent issue that you need to fix. 20.5. 
Setting the expiration of the administration events By specifying the number of days, you can control when the administration events expire. This is important for managing your events and ensuring that you retain the information for the desired duration. Note By default, administration events are retained for 4 days. The retention period for these events is determined by the time of the last occurrence and not by the time of creation. This means that an event expires and is deleted only if the time of the last occurrence exceeds the specified retention period. Procedure In the RHACS portal, go to Platform Configuration System Configuration . You can configure the following setting for administration events: Administration events retention days : The number of days to retain your administration events. To change this value, click Edit , make your changes, and then click Save .
| null |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/operating/using-the-administration-events-page
|
Release notes
|
Release notes OpenShift Container Platform 4.17 Highlights of what is new and what has changed with this OpenShift Container Platform release Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/release_notes/index
|
10.3.3. Trouble with Partition Tables
|
10.3.3. Trouble with Partition Tables If you receive an error after the Disk Partitioning Setup ( Section 9.13, "Disk Partitioning Setup" ) phase of the installation that says something similar to "The partition table on device hda was unreadable. To create new partitions it must be initialized, causing the loss of ALL DATA on this drive.", then either there is no partition table on that drive or the partition table on the drive is not recognizable by the partitioning software used in the installation program. Users who have used programs such as EZ-BIOS have experienced similar problems, resulting in the loss of data that could not be recovered (assuming the data was not backed up before the installation began). No matter what type of installation you are performing, always back up the existing data on your systems.
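If the existing system still boots, it is also worth saving a copy of the current partition layout before starting the installation so that the table can be restored if something goes wrong. The following sketch assumes the drive is /dev/sda; substitute the device name that your system reports, and store the dump on a different device or on removable media.

sfdisk -d /dev/sda > sda-partition-table.txt

The saved dump can later be written back with sfdisk /dev/sda < sda-partition-table.txt . Note that this restores only the partition table, not the data inside the partitions, so it complements rather than replaces a full backup.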
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-trouble-part-tables-x86
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.442/making-open-source-more-inclusive
|
1.2. Variable Name: EAP_HOME
|
1.2. Variable Name: EAP_HOME EAP_HOME refers to the root directory of the Red Hat JBoss Enterprise Application Platform installation on which JBoss Data Virtualization has been deployed.
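For example, if the platform is installed under /opt/redhat/jboss-eap (a hypothetical path used here only for illustration), then every occurrence of EAP_HOME in this guide stands for /opt/redhat/jboss-eap . If it is convenient, you can define it as a shell variable so that documented paths can be pasted directly into a terminal:

export EAP_HOME=/opt/redhat/jboss-eap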
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/variable_name_eap_home
|
Chapter 2. Red Hat Quay support
|
Chapter 2. Red Hat Quay support Red Hat Quay provides support for the following: Multiple authentication and access methods Multiple storage backends Custom certificates for Quay , Clair , and storage backend containers Application registries Different container image types 2.1. Architecture Red Hat Quay includes several core components, both internal and external. For a fuller architectural breakdown, see the Red Hat Quay architecture guide. 2.1.1. Internal components Red Hat Quay includes the following internal components: Quay (container registry) . Runs the Quay container as a service, consisting of several components in the pod. Clair . Scans container images for vulnerabilities and suggests fixes. 2.1.2. External components Red Hat Quay includes the following external components: Database . Used by Red Hat Quay as its primary metadata storage. Note that this is not for image storage. Redis (key-value store) . Stores live builder logs and the Red Hat Quay tutorial. Also includes the locking mechanism that is required for garbage collection. Cloud storage . For supported deployments, one of the following storage types must be used: Public cloud storage . In public cloud environments, you should use the cloud provider's object storage, such as Amazon Web Services's Amazon S3 or Google Cloud's Google Cloud Storage. Private cloud storage . In private clouds, an S3 or Swift compliant Object Store is needed, such as Ceph RADOS, or OpenStack Swift. Warning Do not use "Locally mounted directory" Storage Engine for any production configurations. Mounted NFS volumes are not supported. Local storage is meant for Red Hat Quay test-only installations.
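To make the relationship between these components concrete, the following is a minimal sketch of how they are typically referenced in the Quay config.yaml file. The hostnames, credentials, and bucket name are placeholders, and the exact field names for your chosen storage backend should be confirmed against the configuration guide for your Red Hat Quay version:

DB_URI: postgresql://quayuser:password@db.example.com:5432/quay
BUILDLOGS_REDIS:
  host: redis.example.com
  port: 6379
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - S3Storage
    - host: s3.us-east-1.amazonaws.com
      s3_bucket: quay-datastore
      s3_access_key: <access_key>
      s3_secret_key: <secret_key>
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_PREFERENCE:
  - default

This mirrors the split described above: the database holds Red Hat Quay metadata, Redis holds live build logs and the locking state used for garbage collection, and the object storage backend holds the image data itself.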
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploy_red_hat_quay_-_high_availability/poc-support
|
Chapter 6. Upgrading Hosts to Next Major Red Hat Enterprise Linux Release
|
Chapter 6. Upgrading Hosts to the Next Major Red Hat Enterprise Linux Release You can use a job template to upgrade your Red Hat Enterprise Linux hosts to the next major release. The following upgrade paths are possible: Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 Prerequisites Ensure that your Red Hat Enterprise Linux hosts meet the requirements for the upgrade. For Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 upgrade, see Planning an upgrade in Upgrading from RHEL 7 to RHEL 8 . For Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 upgrade, see Planning an upgrade to RHEL 9 in Upgrading from RHEL 8 to RHEL 9 . Prepare your Red Hat Enterprise Linux hosts for the upgrade. For Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 upgrade, see Preparing a RHEL 7 system for the upgrade in Upgrading from RHEL 7 to RHEL 8 . For Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 upgrade, see Preparing a RHEL 8 system for the upgrade in Upgrading from RHEL 8 to RHEL 9 . Enable the remote execution feature on Satellite. For more information, see Chapter 12, Configuring and Setting Up Remote Jobs . Distribute Satellite SSH keys to the hosts that you want to upgrade (see the example at the end of this chapter). For more information, see Section 12.8, "Distributing SSH Keys for Remote Execution" . Procedure On Satellite, enable the Leapp plugin: satellite-installer --enable-foreman-plugin-leapp In the Satellite web UI, navigate to Hosts > All Hosts . Select the hosts that you want to upgrade to the next major Red Hat Enterprise Linux version. In the upper right of the Hosts window, from the Select Action list, select Preupgrade check with Leapp . Click Submit to start the pre-upgrade check. When the check is finished, click the Leapp preupgrade report tab to see if Leapp has found any issues on your hosts. Issues that have the Inhibitor flag are considered crucial and are likely to break the upgrade procedure. Issues that have the Has Remediation flag contain remediation that can help you fix the issue. Click an issue that is flagged as Has Remediation to expand it. If the issue contains a remediation Command , you can fix it directly from Satellite using remote execution. Select the issue. If the issue contains only a remediation Hint , use the hint to fix the issue on the host manually. Repeat this step for other issues. After you have selected any issues with remediation commands, click Fix Selected and submit the job. After the issues are fixed, click the Rerun button, and then click Submit to run the pre-upgrade check again to verify that the hosts you are upgrading do not have any issues and are ready to be upgraded. If the pre-upgrade check verifies that the hosts do not have any issues, click the Run Upgrade button and click Submit to start the upgrade.
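For the SSH key prerequisite, distribution is typically a single command per host once remote execution is enabled. The following sketch assumes the default location of the remote execution public key on the Satellite or Capsule server and uses a hypothetical host name; verify the key path and the remote user against your own deployment before running it.

ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@host.example.com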
|
[
"satellite-installer --enable-foreman-plugin-leapp"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/upgrading_hosts_to_next_major_rhel_release_managing-hosts
|
Chapter 2. Workstation Variant
|
Chapter 2. Workstation Variant The following table lists all the packages in the Workstation variant. For more information about core packages, see the Scope of Coverage Details document. Package Core Package? License 389-ds-base No GPLv2 with exceptions 389-ds-base-libs No GPLv2 with exceptions ConsoleKit No GPLv2+ ConsoleKit-libs No MIT ConsoleKit-x11 No GPLv2+ DeviceKit-power No GPLv2+ ElectricFence Yes GPLv2 GConf2 No LGPLv2+ GConf2-devel Yes LGPLv2+ GConf2-gtk No LGPLv2+ ImageMagick Yes ImageMagick ImageMagick-c++ No ImageMagick MAKEDEV No GPLv2 ModemManager No GPLv2+ MySQL-python Yes GPLv2+ NetworkManager Yes GPLv2+ NetworkManager-glib No GPLv2+ NetworkManager-gnome Yes GPLv2+ NetworkManager-openswan Yes GPLv2+ ORBit2 No LGPLv2+ and GPLv2+ ORBit2-devel No LGPLv2+ and GPLv2+ OpenEXR-libs No BSD OpenIPMI Yes LGPLv2+ and GPLv2+ or BSD OpenIPMI-libs No LGPLv2+ and GPLv2+ or BSD PackageKit No GPLv2+ PackageKit-device-rebind No GPLv2+ PackageKit-glib No GPLv2+ PackageKit-gstreamer-plugin Yes GPLv2+ PackageKit-gtk-module No GPLv2+ PackageKit-yum No GPLv2+ PackageKit-yum-plugin No GPLv2+ PyGreSQL Yes MIT or Python PyKDE4 No LGPLv2+ PyPAM Yes LGPLv2 PyQt4 No GPLv3 or GPLv2 with exceptions PyQt4-devel Yes GPLv3 or GPLv2 with exceptions PyXML No MIT and Python and ZPLv1.0 and BSD PyYAML Yes MIT SDL No LGPLv2+ SDL-devel Yes LGPLv2+ SOAPpy No BSD and ZPLv2.0 TurboGears2 Yes MIT Xaw3d No MIT abrt No GPLv2+ abrt-addon-ccpp Yes GPLv2+ abrt-addon-kerneloops Yes GPLv2+ abrt-addon-python Yes GPLv2+ abrt-cli Yes GPLv2+ abrt-desktop Yes GPLv2+ abrt-gui Yes GPLv2+ Package Core Package? License abrt-libs No GPLv2+ abrt-python No GPLv2+ abrt-tui No GPLv2+ abyssinica-fonts Yes OFL acl Yes GPLv2+ acpid Yes GPLv2+ adcli Yes LGPLv2+ aic94xx-firmware Yes Redistributable, no modification permitted aide Yes GPLv2+ akonadi No LGPLv2+ alacarte Yes LGPLv2+ alsa-lib No LGPLv2+ alsa-lib-devel Yes LGPLv2+ alsa-plugins-pulseaudio Yes LGPLv2+ alsa-utils Yes GPLv2+ amanda No BSD and LGPLv2 and GPLv3+ and GPLv2 amanda-client Yes BSD and LGPLv2 and GPLv3+ and GPLv2 amtu Yes CPL anaconda No GPLv2+ anaconda-yum-plugins No GPLv2+ ant Yes ASL 2.0 and W3C ant-antlr No ASL 2.0 and W3C ant-apache-bcel No ASL 2.0 and W3C ant-apache-bsf No ASL 2.0 and W3C ant-apache-log4j No ASL 2.0 and W3C ant-apache-oro No ASL 2.0 and W3C ant-apache-regexp No ASL 2.0 and W3C ant-apache-resolver No ASL 2.0 and W3C ant-commons-logging No ASL 2.0 and W3C ant-commons-net No ASL 2.0 and W3C ant-javamail No ASL 2.0 and W3C ant-jdepend No ASL 2.0 and W3C ant-jsch No ASL 2.0 and W3C ant-junit No ASL 2.0 and W3C ant-nodeps No ASL 2.0 and W3C ant-swing No ASL 2.0 and W3C ant-trax No ASL 2.0 and W3C anthy No LGPLv2+ and GPLv2 antlr No Public Domain apache-jasper No ASL 2.0 apache-tomcat-apis No ASL 2.0 apr No ASL 2.0 apr-devel No ASL 2.0 apr-util No ASL 2.0 apr-util-devel No ASL 2.0 apr-util-ldap No ASL 2.0 arptables_jf Yes GPLv2+ arpwatch Yes BSD with advertising arts No LGPLv2+ arts-devel No LGPLv2+ Package Core Package? 
License aspell No LGPLv2 and LGPLv2+ and GPLv2+ and MIT at Yes GPLv2+ at-spi Yes LGPLv2+ at-spi-python No LGPLv2+ atk Yes LGPLv2+ atk-devel Yes LGPLv2+ atlas Yes BSD atlas-3dnow No BSD atlas-sse No BSD atlas-sse2 No BSD atlas-sse3 No BSD atmel-firmware Yes Redistributable, no modification permitted attr Yes GPLv2+ audiofile No LGPLv2+ audispd-plugins Yes GPLv2+ audit Yes GPLv2+ audit-libs No LGPLv2+ audit-libs-devel Yes LGPLv2+ audit-libs-python No LGPLv2+ audit-viewer Yes GPLv2 augeas-libs No LGPLv2+ authconfig Yes GPLv2+ authconfig-gtk Yes GPLv2+ authd Yes GPLv2+ autoconf Yes GPLv3+ and GFDL autocorr-af Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-bg Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-ca No (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-cs Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-da Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-de Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-en No (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-es Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-fa Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-fi Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-fr Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-ga No (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-hr No (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-hu Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-it Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-ja Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-ko Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 
2.0 and Artistic and MPLv2.0 and CC0 autocorr-lb Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-lt No (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-mn Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-nl Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-pl Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-pt Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-ro No (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-ru Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 Package Core Package? License autocorr-sk Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-sl Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-sr No (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-sv Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-tr Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-vi Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autocorr-zh Yes (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and Artistic and MPLv2.0 and CC0 autofs Yes GPLv2+ automake Yes GPLv2+ and GFDL automoc No BSD avahi No LGPLv2 avahi-autoipd No LGPLv2 avahi-glib No LGPLv2 avahi-gobject No LGPLv2 avahi-libs No LGPLv2 avahi-tools No LGPLv2 avahi-ui No LGPLv2 avalon-framework No ASL 1.1 avalon-logkit No ASL 1.1 axis No ASL 2.0 b43-fwcutter Yes BSD b43-openfwwf Yes GPLv2 babel Yes BSD babl No LGPLv3+ and GPLv3+ bacula-client Yes GPLv2 with exceptions bacula-common No GPLv2 with exceptions baekmuk-ttf-batang-fonts No Baekmuk baekmuk-ttf-dotum-fonts No Baekmuk baekmuk-ttf-fonts-common No Baekmuk baekmuk-ttf-gulim-fonts No Baekmuk baekmuk-ttf-hline-fonts No Baekmuk basesystem Yes Public Domain bash Yes GPLv3+ batik No ASL 2.0 bc Yes GPLv2+ bcel No ASL 2.0 bfa-firmware Yes Redistributable, no modification permitted bind Yes ISC bind-chroot Yes ISC bind-dyndb-ldap Yes GPLv2+ bind-libs No ISC 
bind-utils Yes ISC binutils Yes GPLv3+ binutils-devel Yes GPLv3+ biosdevname Yes GPLv2 bison Yes GPLv3+ bitmap-console-fonts No GPLv2 bitmap-fangsongti-fonts No MIT bitmap-fixed-fonts Yes GPLv2 bitmap-lucida-typewriter-fonts Yes Lucida Package Core Package? License bitmap-miscfixed-fonts No Public Domain blas No BSD blktrace Yes GPLv2+ bltk Yes BSD bluez No GPLv2+ bluez-libs No GPLv2+ boost No Boost boost-date-time No Boost boost-devel Yes Boost boost-filesystem No Boost boost-graph No Boost boost-iostreams No Boost boost-math No Boost boost-program-options No Boost boost-python No Boost boost-regex No Boost boost-serialization No Boost boost-signals No Boost boost-system No Boost boost-test No Boost boost-thread No Boost boost-wave No Boost bpg-algeti-fonts No GPL+ with exceptions bpg-chveulebrivi-fonts Yes GPL+ with exceptions bpg-courier-fonts Yes GPL+ with exceptions bpg-courier-s-fonts No GPL+ with exceptions bpg-elite-fonts No GPL+ with exceptions bpg-fonts-common No GPL+ with exceptions bpg-glaho-fonts Yes GPL+ with exceptions bpg-ingiri-fonts No GPL+ with exceptions bpg-nino-medium-cond-fonts No GPL+ with exceptions bpg-nino-medium-fonts No GPL+ with exceptions bpg-sans-fonts No GPL+ with exceptions bpg-sans-medium-fonts No GPL+ with exceptions bpg-sans-modern-fonts No Bitstream Vera bpg-sans-regular-fonts No GPL+ with exceptions bpg-serif-fonts No GPL+ with exceptions bpg-serif-modern-fonts No Bitstream Vera brasero No GPLv2+ brasero-libs No GPLv2+ brasero-nautilus Yes GPLv2+ bridge-utils Yes GPLv2+ brltty Yes GPLv2+ bsf No ASL 2.0 btparser No GPLv2+ btrfs-progs No GPLv2 busybox No GPLv2 byacc Yes Public Domain byzanz Yes GPLv3+ bzip2 Yes BSD Package Core Package? License bzip2-devel Yes BSD bzip2-libs No BSD bzr Yes GPLv2+ c-ares No MIT c2050 No GPL+ c2070 No GPL+ ca-certificates No Public Domain cachefilesd Yes GPL2+ cairo Yes LGPLv2 or MPLv1.1 cairo-devel Yes LGPLv2 or MPLv1.1 cairomm No LGPLv2+ cas Yes GPLv3+ ccid Yes LGPLv2+ cdparanoia No GPLv2 and LGPLv2 cdparanoia-libs No LGPLv2 cdrdao No GPLv2+ celt051 No BSD certmonger Yes GPLv3+ cgdcbxd Yes GPLv2 check No LGPLv2+ check-devel No LGPLv2+ checkpolicy No GPLv2 cheese Yes GPLv2+ chkconfig No GPLv2 chrony Yes GPLv2 chrpath Yes GPL+ cifs-utils Yes GPLv3 cim-schema No DMTF cjet No GPLv2+ cjkuni-fonts-common No Arphic cjkuni-fonts-ghostscript Yes Arphic cjkuni-ukai-fonts Yes Arphic cjkuni-uming-fonts Yes Arphic classpathx-jaf No GPLv2+ classpathx-mail No GPLv2+ with exceptions cloog-ppl No GPLv2+ cloud-init Yes GPLv3 clucene-core No LGPLv2+ or ASL 2.0 clutter No LGPLv2+ cmake Yes BSD and MIT and zlib compat-dapl Yes GPLv2 or BSD or CPL compat-db Yes BSD compat-db42 No BSD compat-db43 No BSD compat-expat1 Yes MIT compat-gcc-34 Yes GPLv2+ and GPLv2+ with exceptions compat-gcc-34-c++ Yes GPLv2+ and GPLv2+ with exceptions compat-gcc-34-g77 Yes GPLv2+ and GPLv2+ with exceptions compat-glibc Yes LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ compat-glibc-headers No LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ Package Core Package? 
License compat-libcap1 Yes BSD-like and LGPL compat-libf2c-34 Yes GPLv2+ and GPLv2+ with exceptions compat-libgcc-296 Yes GPLv2+ compat-libgfortran-41 Yes GPLv2+ with exceptions compat-libstdc++-296 Yes GPLv2+ compat-libstdc++-33 Yes GPLv2+ with exceptions compat-libtermcap Yes GPLv2+ compat-libxcb No MIT compat-openldap Yes OpenLDAP compat-opensm-libs Yes GPLv2 or BSD compat-readline5 Yes GPLv2+ compat-xcb-util Yes MIT compiz No GPLv2+ and LGPLv2+ and MIT compiz-gnome Yes GPLv2+ and LGPLv2+ and MIT comps-extras No GPL+ and LGPL+ conman Yes GPLv3+ control-center Yes GPLv2+ and GFDL control-center-extra Yes GPLv2+ and GFDL control-center-filesystem No GPLv2+ and GFDL convmv Yes GPLv2 or GPLv3 coolkey Yes LGPLv2 copy-jdk-configs No BSD coreutils Yes GPLv3+ coreutils-libs No GPLv3+ cpio Yes GPLv3+ cpp No GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions cpufrequtils No GPLv2 cpuid Yes GPLv2+ cpupowerutils No GPLv2 cpuspeed Yes GPLv2+ cracklib No LGPLv2+ cracklib-dicts No LGPLv2+ cracklib-python No LGPLv2+ crash Yes GPLv3 crash-gcore-command Yes GPLv2 crash-trace-command Yes GPLv2 crda No ISC createrepo No GPLv2 cronie Yes MIT and BSD and ISC and GPLv2 cronie-anacron No MIT and BSD and ISC and GPLv2 cronie-noanacron No MIT and BSD and ISC and GPLv2 crontabs Yes Public Domain and GPLv2 crypto-utils Yes MIT and GPLv2+ cryptsetup-luks Yes GPLv2 cryptsetup-luks-libs No GPLv2 cryptsetup-reencrypt Yes GPLv2+ and LGPLv2+ cryptsetup-reencrypt-libs No GPLv2+ and LGPLv2+ cscope Yes BSD ctags Yes GPLv2+ or Public Domain ctags-etags Yes GPLv2+ or Public Domain Package Core Package? License ctan-cm-lgc-fonts-common No GPLv2+ with exceptions ctan-cm-lgc-roman-fonts No GPLv2+ with exceptions ctan-cm-lgc-sans-fonts No GPLv2+ with exceptions ctan-cm-lgc-typewriter-fonts No GPLv2+ with exceptions ctan-kerkis-calligraphic-fonts No LPPL ctan-kerkis-fonts-common No LPPL ctan-kerkis-sans-fonts No LPPL ctan-kerkis-serif-fonts No LPPL ctapi-common No MIT culmus-aharoni-clm-fonts Yes GPLv2 culmus-caladings-clm-fonts Yes GPLv2 culmus-david-clm-fonts Yes GPLv2 culmus-drugulin-clm-fonts Yes GPLv2 culmus-ellinia-clm-fonts Yes GPLv2 culmus-fonts-common No GPLv2 culmus-frank-ruehl-clm-fonts Yes GPLv2 culmus-miriam-clm-fonts Yes GPLv2 culmus-miriam-mono-clm-fonts Yes GPLv2 culmus-nachlieli-clm-fonts Yes GPLv2 culmus-yehuda-clm-fonts Yes GPLv2 cups Yes GPLv2 cups-devel Yes LGPLv2 cups-libs No LGPLv2 cups-lpd Yes GPLv2 cups-pk-helper Yes GPLv2+ curl No MIT cvs Yes GPL+ and GPLv2+ and LGPL+ cvs-inetd Yes GPL+ cyrus-imapd Yes BSD cyrus-imapd-utils No BSD cyrus-sasl No BSD cyrus-sasl-devel Yes BSD cyrus-sasl-gssapi No BSD cyrus-sasl-lib No BSD cyrus-sasl-md5 No BSD cyrus-sasl-plain Yes BSD dapl Yes GPLv2 or BSD or CPL dash No BSD db4 Yes Sleepycat and BSD db4-cxx No Sleepycat and BSD db4-devel Yes Sleepycat and BSD db4-utils No Sleepycat and BSD dbus Yes GPLv2+ or AFL dbus-c++ No LGPLv2+ dbus-devel Yes GPLv2+ or AFL dbus-glib No AFL and GPLv2+ dbus-glib-devel Yes AFL and GPLv2+ dbus-libs Yes GPLv2+ or AFL dbus-python No MIT dbus-qt No AFL or GPLv2+ Package Core Package? 
License dbus-x11 No GPLv2+ or AFL dcraw Yes GPLv2+ dejagnu Yes GPLv2+ dejavu-fonts-common No Bitstream Vera and Public Domain dejavu-lgc-sans-fonts No Bitstream Vera and Public Domain dejavu-lgc-sans-mono-fonts No Bitstream Vera and Public Domain dejavu-lgc-serif-fonts No Bitstream Vera and Public Domain dejavu-sans-fonts Yes Bitstream Vera and Public Domain dejavu-sans-mono-fonts Yes Bitstream Vera and Public Domain dejavu-serif-fonts Yes Bitstream Vera and Public Domain deltarpm No BSD desktop-effects No GPLv2+ desktop-file-utils Yes GPLv2+ devhelp No GPLv2+ device-mapper No GPLv2 device-mapper-event No GPLv2 device-mapper-event-libs No LGPLv2 device-mapper-libs No LGPLv2 device-mapper-multipath Yes GPL+ device-mapper-multipath-libs No GPL+ device-mapper-persistent-data Yes GPLv3+ dhclient Yes ISC dhcp Yes ISC dhcp-common No ISC dialog No LGPLv2 diffstat Yes MIT diffutils No GPLv2+ dmidecode No GPLv2+ dmraid Yes GPLv2+ dmraid-events No GPLv2+ dmz-cursor-themes No CC-BY-SA dnsmasq Yes GPLv2 or GPLv3 docbook-dtds No
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/package_manifest/chap-base-workstation-variant
|
Chapter 1. Customizing the devfile and plug-in registries
|
Chapter 1. Customizing the devfile and plug-in registries CodeReady Workspaces 2.1 introduces two registries: the plug-in registry and the devfile registry. They are static websites where the metadata of CodeReady Workspaces plug-ins and CodeReady Workspaces devfiles is published. The plug-in registry makes it possible to share a plug-in definition across all the users of the same instance of CodeReady Workspaces. Only plug-ins that are published in a registry can be used in a devfile. The devfile registry holds the definitions of the CodeReady Workspaces stacks. These are available on the CodeReady Workspaces user dashboard when you select Create Workspace. It contains the list of CodeReady Workspaces technological stack samples with example projects. The devfile and plug-in registries run in two separate pods and are deployed when the CodeReady Workspaces server is deployed (that is the default behavior of the CodeReady Workspaces Operator). The metadata of the plug-ins and devfiles is versioned on GitHub and follows the CodeReady Workspaces server life cycle. This document describes the following two ways to customize the default registries that are deployed with CodeReady Workspaces (to modify the plug-in or devfile metadata): Building a custom image of the registries Running the default images but modifying them at runtime Building and running a custom registry image Including the plug-in binaries in the registry image Editing a devfile and plug-in at runtime 1.1. Building and running a custom registry image This section describes how to build the registries and update a running CodeReady Workspaces server to point to them. 1.1.1. Building a custom devfile registry This section describes how to build a custom devfile registry. The following operations are covered: Getting a copy of the source code necessary to build a devfile registry. Adding a new devfile. Building the devfile registry. Procedure Clone the devfile registry repository: In the ./che-devfile-registry/devfiles/ directory, create a subdirectory <devfile-name>/ and add the devfile.yaml and meta.yaml files. File organization for a devfile Add valid content to the devfile.yaml file. For a detailed description of the devfile format, see the Making a workspace portable using a devfile section. Ensure that the meta.yaml file conforms to the following structure: Table 1.1. Parameters for a devfile meta.yaml Attribute Description description Description as it appears on the user dashboard. displayName Name as it appears on the user dashboard. globalMemoryLimit The sum of the expected memory consumed by all the components launched by the devfile. This number will be visible on the user dashboard. It is informative and is not taken into account by the CodeReady Workspaces server. icon Link to an .svg file that is displayed on the user dashboard. tags List of tags. Tags usually include the tools included in the stack. Example devfile meta.yaml displayName: Rust description: Rust Stack with Rust 1.39 tags: ["Rust"] icon: https://www.eclipse.org/che/images/logo-eclipseche.svg globalMemoryLimit: 1686Mi Build the containers for the custom devfile registry: 1.1.2. Building a custom plug-in registry This section describes how to build a custom plug-in registry. The following operations are covered: Getting a copy of the source code necessary to build a custom plug-in registry. Adding a new plug-in. Building the custom plug-in registry.
Procedure Clone the plug-in registry repository: In the ./che-plugin-registry/v3/plugins/ directory, create the new directories <publisher>/<plugin-name>/<plugin-version>/ and a meta.yaml file in the last directory. File organization for a plug-in Add valid content to the meta.yaml file. See the "Using a Visual Studio Code extension in CodeReady Workspaces" section or the README.md file in the eclipse/che-plugin-registry repository for a detailed description of the meta.yaml file format. Create a file named latest.txt whose content is the name of the latest <plugin-version> directory. Example Build the containers for the custom plug-in registry: 1.1.3. Deploying the registries Prerequisites The my-plug-in-registry and my-devfile-registry images used in this section are built using the docker command. This section assumes that these images are available on the OpenShift cluster where CodeReady Workspaces is deployed. This is true on Minikube, for example, if, before running the docker build commands, the user executed the eval $(minikube docker-env) command (or the eval $(minishift docker-env) command for Minishift). Otherwise, these images can be pushed to a container registry (a public registry, such as quay.io or Docker Hub, or a private registry). 1.1.3.1. Deploying registries in OpenShift Procedure An OpenShift template to deploy the plug-in registry is available in the openshift/ directory of the GitHub repository. To deploy the plug-in registry using the OpenShift template, run the following command: 1 If installed using crwctl, the default CodeReady Workspaces namespace is workspaces. The OperatorHub installation method deploys CodeReady Workspaces to the user's current namespace. The devfile registry has an OpenShift template in the deploy/openshift/ directory of the GitHub repository. To deploy it, run the following command: 1 If installed using crwctl, the default CodeReady Workspaces namespace is workspaces. The OperatorHub installation method deploys CodeReady Workspaces to the user's current namespace. Check that the registries are deployed successfully on OpenShift. To verify that the new plug-in is correctly published to the plug-in registry, make a request to the registry path /v3/plugins/index.json (or /devfiles/index.json for the devfile registry). Verify that the CodeReady Workspaces server points to the URL of the registry. To do this, compare the value of the CHE_WORKSPACE_PLUGIN__REGISTRY__URL parameter in the codeready ConfigMap (or CHE_WORKSPACE_DEVFILE__REGISTRY__URL for the devfile registry): with the URL of the route: If they do not match, update the ConfigMap and restart the CodeReady Workspaces server. When the new registries are deployed and the CodeReady Workspaces server is configured to use them, the new plug-ins are available in the Plugin view of a workspace and the new stacks are displayed in the New Workspace tab of the user dashboard. 1.2. Including the plug-in binaries in the registry image The plug-in registry only hosts CodeReady Workspaces plug-in metadata. The binaries are usually referenced through a link in the meta.yaml file. Sometimes, such as in offline environments, it may be necessary to make the binaries available inside the registry image. This section describes how to modify a plug-in meta.yaml file to point to a local file inside the container and rebuild a new registry that contains the modified plug-in meta.yaml file and the binary file. The following example considers the Java plug-in, which refers to two remote VS Code extension binaries.
Prerequisites CodeReady Workspaces is installed. The OpenShift command-line tool, oc, is installed. Procedure Download the binaries locally: Get the plug-in registry URL: For an Operator installation: Note that the obtained URL does not include the http or https prefix. Update the URLs in the meta.yaml file so that they point to the VS Code extension binaries that are saved in the registry container: Build and deploy the plug-in registry using the instructions in the Building and running a custom registry image section. 1.3. Editing a devfile and plug-in at runtime An alternative to building a custom registry image is to: Start a registry Modify its content at runtime This approach is simpler and faster, but the modifications are lost as soon as the container is deleted. 1.3.1. Adding a plug-in at runtime Procedure To add a plug-in: Check out the plug-in registry sources. Create a meta.yaml file in a local folder. This can be done from scratch or by copying from an existing plug-in's meta.yaml file. If copying from an existing plug-in, make changes to the meta.yaml file to suit your needs. Make sure your new plug-in has a unique title, displayName, and description. Update the firstPublicationDate to today's date. These fields in meta.yaml must match the path defined in PLUGIN above. Get the name of the Pod that hosts the plug-in registry container. To do this, filter the component=plugin-registry label: Regenerate the registry's index.json file to include your new plug-in. Copy the new index.json and meta.yaml files from your new local plug-in folder to the container. The new plug-in can now be used from the existing CodeReady Workspaces instance's plug-in registry. To discover it, go to the CodeReady Workspaces dashboard, then click the Workspaces link. From there, click the gear icon to configure one of your workspaces. Select the Plugins tab to see the updated list of available plug-ins. 1.3.2. Adding a devfile at runtime Procedure To add a devfile: Check out the devfile registry sources. Create a devfile.yaml and a meta.yaml file in a local folder. This can be done from scratch or by copying from an existing devfile. If copying from an existing devfile, make changes to the devfile to suit your needs. Make sure your new devfile has a unique displayName and description. Get the name of the Pod that hosts the devfile registry container. To do this, filter the component=devfile-registry label: Regenerate the registry's index.json file to include your new devfile. Copy the new index.json, devfile.yaml, and meta.yaml files from your new local devfile folder to the container. The new devfile can now be used from the existing CodeReady Workspaces instance's devfile registry. To discover it, go to the CodeReady Workspaces dashboard, then click the Workspaces link. From there, click Add Workspace to see the updated list of available devfiles.
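To confirm that a registry, whether rebuilt from a custom image or edited at runtime, actually serves a new entry, a quick check such as the following sketch can help. The devfile registry route label and the grep patterns (my-plug-in and new-stack, taken from the examples in this chapter) are assumptions; adjust them to your own names.
# Query the plug-in registry route for the new plug-in entry.
PLUGIN_URL=$(oc get route -l app=che-plugin-registry -o jsonpath='{.items[0].spec.host}')
curl -sSL "http://${PLUGIN_URL}/v3/plugins/index.json" | grep "my-plug-in"

# Query the devfile registry route for the new stack entry.
DEVFILE_URL=$(oc get route -l app=che-devfile-registry -o jsonpath='{.items[0].spec.host}')
curl -sSL "http://${DEVFILE_URL}/devfiles/index.json" | grep "new-stack"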
|
[
"git clone [email protected]:redhat-developer/codeready-workspaces.git cd codeready-workspaces/dependencies/che-devfile-registry",
"./che-devfile-registry/devfiles/ └── <devfile-name> ├── devfile.yaml └── meta.yaml",
"displayName: Rust description: Rust Stack with Rust 1.39 tags: [\"Rust\"] icon: https://www.eclipse.org/che/images/logo-eclipseche.svg globalMemoryLimit: 1686Mi",
"docker build -t my-devfile-registry .",
"git clone [email protected]:redhat-developer/codeready-workspaces.git cd codeready-workspaces/dependencies/che-plugin-registry",
"./che-plugin-registry/v3/plugins/ ├── <publisher> │ └── <plugin-name> │ ├── <plugin-version> │ │ └── meta.yaml │ └── latest.txt",
"tree che-plugin-registry/v3/plugins/redhat/java/ che-plugin-registry/v3/plugins/redhat/java/ ├── 0.38.0 │ └── meta.yaml ├── 0.43.0 │ └── meta.yaml ├── 0.45.0 │ └── meta.yaml ├── 0.46.0 │ └── meta.yaml ├── 0.50.0 │ └── meta.yaml └── latest.txt cat che-plugin-registry/v3/plugins/redhat/java/latest.txt 0.50.0",
"docker build -t my-devfile-registry .",
"NAMESPACE= <namespace-name> 1 IMAGE_NAME=\"my-plug-in-registry\" IMAGE_TAG=\"latest\" new-app -f openshift/che-plugin-registry.yml -n \"USD\\{NAMESPACE}\" -p IMAGE=\"USD\\{IMAGE_NAME}\" -p IMAGE_TAG=\"USD\\{IMAGE_TAG}\" -p PULL_POLICY=\"IfNotPresent\"",
"NAMESPACE= <namespace-name> 1 IMAGE_NAME=\"my-devfile-registry\" IMAGE_TAG=\"latest\" new-app -f openshift/che-devfile-registry.yml -n \"USD\\{NAMESPACE}\" -p IMAGE=\"USD\\{IMAGE_NAME}\" -p IMAGE_TAG=\"USD\\{IMAGE_TAG}\" -p PULL_POLICY=\"IfNotPresent\"",
"URL=USD(oc get -o 'custom-columns=URL:.spec.rules[0].host' -l app=che-plugin-registry route --no-headers) INDEX_JSON=USD(curl -sSL http://USD{URL}/v3/plugins/index.json) echo USD{INDEX_JSON} | grep -A 4 -B 5 \"\\\"name\\\":\\\"my-plug-in\\\"\" ,\\{ \"id\": \"my-org/my-plug-in/1.0.0\", \"displayName\":\"This is my first plug-in for CodeReady Workspaces\", \"version\":\"1.0.0\", \"type\":\"VS Code extension\", \"name\":\"my-plug-in\", \"description\":\"This plugin shows that we are able to add plugins to the registry\", \"publisher\":\"my-org\", \"links\": \\{\"self\":\"/v3/plugins/my-org/my-plug-in/1.0.0\" } } -- -- ,\\{ \"id\": \"my-org/my-plug-in/latest\", \"displayName\":\"This is my first plug-in for CodeReady Workspaces\", \"version\":\"latest\", \"type\":\"VS Code extension\", \"name\":\"my-plug-in\", \"description\":\"This plugin shows that we are able to add plugins to the registry\", \"publisher\":\"my-org\", \"links\": \\{\"self\":\"/v3/plugins/my-org/my-plug-in/latest\" } }",
"oc get -o \"custom-columns=URL:.data['CHE_WORKSPACE_PLUGIN REGISTRY URL']\" --no-headers cm/che URL http://che-plugin-registry-che.192.168.99.100.mycluster.mycompany.com/v3",
"oc get -o 'custom-columns=URL:.spec.rules[0].host' -l app=che-plugin-registry route --no-headers che-plugin-registry-che.192.168.99.100.mycluster.mycompany.com",
"oc edit cm/che (...) oc scale --replicas=0 deployment/che oc scale --replicas=1 deployment/che",
"ORG=redhat NAME=java11 VERSION=latest URL_VS_CODE_EXT1=\"https://github.com/microsoft/vscode-java-debug/releases/download/0.19.0/vscode-java-debug-0.19.0.vsix[_https://github.com/microsoft/vscode-java-debug/releases/download/0.19.0/vscode-java-debug-0.19.0.vsix_]\" URL_VS_CODE_EXT2=\"https://download.jboss.org/jbosstools/static/jdt.ls/stable/java-0.46.0-1549.vsix[_https://download.jboss.org/jbosstools/static/jdt.ls/stable/java-0.46.0-1549.vsix_]\" VS_CODE_EXT1=https://github.com/microsoft/vscode-java-debug/releases/download/0.19.0/vscode-java-debug-0.19.0.vsix[_vscode-java-debug-0.19.0.vsix_] VS_CODE_EXT2=https://download.jboss.org/jbosstools/static/jdt.ls/stable/java-0.46.0-1549.vsix[_java-0.46.0-1549.vsix_] curl -sSL -o ./che-plugin-registry/v3/plugins/USD\\{ORG}/USD\\{NAME}/USD\\{VERSION}/USD\\{VS_CODE_EXT1} USD\\{URL_VS_CODE_EXT1} curl -sSL -o ./che-plugin-registry/v3/plugins/USD\\{ORG}/USD\\{NAME}/USD\\{VERSION}/USD\\{VS_CODE_EXT2} USD\\{URL_VS_CODE_EXT2}",
"oc get checluster USD{CHECLUSTER_NAME} -o jsonpath='{.status.pluginRegistryURL}' -n USD{CODEREADY_NAMESPACE}",
"NEW_URL_VS_CODE_EXT1=http://USD\\{PLUGIN_REG_URL}/v3/plugins/USD\\{ORG}/USD\\{NAME}/USD\\{VERSION}/USD\\{VS_CODE_EXT1} NEW_URL_VS_CODE_EXT2=http://USD\\{PLUGIN_REG_URL}/v3/plugins/USD\\{ORG}/USD\\{NAME}/USD\\{VERSION}/USD\\{VS_CODE_EXT2} sed -i -e 's/USD\\{URL_PLUGIN1}/USD\\{NEW_URL_VS_CODE_EXT1}/g' ./che-plugin-registry/v3/plugins/USD\\{ORG}/USD\\{NAME}/USD\\{VERSION}/meta.yaml sed -i -e 's/USD\\{URL_PLUGIN2}/USD\\{NEW_URL_VS_CODE_EXT2}/g' ./che-plugin-registry/v3/plugins/USD\\{ORG}/USD\\{NAME}/USD\\{VERSION}/meta.yaml",
"git clone https://github.com/eclipse/che-plugin-registry; cd che-plugin-registry",
"PLUGIN=\"v3/plugins/new-org/new-plugin/0.0.1\"; mkdir -p USD{PLUGIN}; cp v3/plugins/che-incubator/cpptools/0.1/* USD{PLUGIN}/ echo \"USD{PLUGIN##*/}\" > USD{PLUGIN}/../latest.txt",
"publisher: new-org name: new-plugin version: 0.0.1",
"PLUGIN_REG_POD=USD(oc get -o custom-columns=NAME:.metadata.name --no-headers pod -l component=plugin-registry)",
"cd che-plugin-registry; \"USD(pwd)/build/scripts/generate_latest_metas.sh\" v3 && \"USD(pwd)/build/scripts/check_plugins_location.sh\" v3 && \"USD(pwd)/build/scripts/set_plugin_dates.sh\" v3 && \"USD(pwd)/build/scripts/check_plugins_viewer_mandatory_fields.sh\" v3 && \"USD(pwd)/build/scripts/index.sh\" v3 > v3/plugins/index.json",
"cd che-plugin-registry; LOCAL_FILES=\"USD(pwd)/USD{PLUGIN}/meta.yaml USD(pwd)/v3/plugins/index.json\"; oc exec USD{PLUGIN_REG_POD} -i -t -- mkdir -p /var/www/html/USD{PLUGIN}; for f in USDLOCAL_FILES; do e=USD{f/USD(pwd)\\//}; echo \"Upload USD{f} -> /var/www/html/USD{e}\"; oc cp \"USD{f}\" USD{PLUGIN_REG_POD}:/var/www/html/USD{e}; done",
"git clone https://github.com/eclipse/che-devfile-registry; cd che-devfile-registry",
"STACK=\"new-stack\"; mkdir -p devfiles/USD{STACK}; cp devfiles/nodejs/* devfiles/USD{STACK}/",
"DEVFILE_REG_POD=USD(oc get -o custom-columns=NAME:.metadata.name --no-headers pod -l component=devfile-registry)",
"cd che-devfile-registry; \"USD(pwd)/build/scripts/check_mandatory_fields.sh\" devfiles; \"USD(pwd)/build/scripts/index.sh\" > index.json",
"cd che-devfile-registry; LOCAL_FILES=\"USD(pwd)/USD{STACK}/meta.yaml USD(pwd)/USD{STACK}/devfile.yaml USD(pwd)/index.json\"; oc exec USD{DEVFILE_REG_POD} -i -t -- mkdir -p /var/www/html/devfiles/USD{STACK}; for f in USDLOCAL_FILES; do e=USD{f/USD(pwd)\\//}; echo \"Upload USD{f} -> /var/www/html/devfiles/USD{e}\" oc cp \"USD{f}\" USD{DEVFILE_REG_POD}:/var/www/html/devfiles/USD{e}; done"
] |
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/administration_guide/customizing-the-devfile-and-plug-in-registries_crw
|
Chapter 9. Ceph File System snapshots
|
Chapter 9. Ceph File System snapshots As a storage administrator, you can take a point-in-time snapshot of a Ceph File System (CephFS) directory. CephFS snapshots are asynchronous, and you can choose in which directories snapshots are created. 9.1. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. 9.2. Ceph File System snapshots The Ceph File System (CephFS) snapshotting feature is enabled by default on new Ceph File Systems, but must be manually enabled on existing Ceph File Systems. CephFS snapshots create an immutable, point-in-time view of a Ceph File System. CephFS snapshots are asynchronous and are kept in a special hidden directory in the CephFS directory named .snap. You can specify snapshot creation for any directory within a Ceph File System. When specifying a directory, the snapshot also includes all the subdirectories beneath it. Warning Each Ceph Metadata Server (MDS) cluster allocates the snap identifiers independently. Using snapshots for multiple Ceph File Systems that are sharing a single pool causes snapshot collisions and results in missing file data. Additional Resources See the Creating a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details. See the Creating a snapshot schedule for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details. 9.3. Creating a snapshot for a Ceph File System You can create an immutable, point-in-time view of a Ceph File System (CephFS) by creating a snapshot. Note For a new Ceph File System, snapshots are enabled by default. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Root-level access to a Ceph Metadata Server (MDS) node. Procedure Log into the Cephadm shell: Example For existing Ceph File Systems, enable the snapshotting feature: Syntax Example Create a new snapshot subdirectory under the .snap directory: Syntax Example This example creates the new-snaps subdirectory, which informs the Ceph Metadata Server (MDS) to start making snapshots. To delete snapshots: Syntax Example Important Attempting to delete root-level snapshots, which might contain underlying snapshots, will fail. Additional Resources See the Ceph File System snapshot schedules section in the Red Hat Ceph Storage File System Guide for more details. See the Ceph File System snapshots section in the Red Hat Ceph Storage File System Guide for more details. 9.4. Additional Resources See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide.
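The following sketch shows what working with a snapshot looks like from a client that has the file system mounted. The mount point /mnt/cephfs, the directory mydir, and the file name are assumptions; the snapshot name new-snaps comes from the example above.
# List the snapshots that exist for a directory on a mounted CephFS client.
ls /mnt/cephfs/mydir/.snap/

# Browse the point-in-time view inside a snapshot.
ls /mnt/cephfs/mydir/.snap/new-snaps/

# Restore a single file from the snapshot back into the live directory.
cp -a /mnt/cephfs/mydir/.snap/new-snaps/data.txt /mnt/cephfs/mydir/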
|
[
"cephadm shell",
"ceph fs set FILE_SYSTEM_NAME allow_new_snaps true",
"ceph fs set cephfs01 allow_new_snaps true",
"mkdir NEW_DIRECTORY_PATH",
"mkdir /.snap/new-snaps",
"rmdir NEW_DIRECTORY_PATH",
"rmdir /.snap/new-snaps"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/file_system_guide/ceph-file-system-snapshots
|
Chapter 33. extension
|
Chapter 33. extension This chapter describes the commands under the extension command. 33.1. extension list List API extensions Usage: Table 33.1. Optional Arguments Value Summary -h, --help Show this help message and exit --compute List extensions for the Compute API --identity List extensions for the Identity API --network List extensions for the Network API --volume List extensions for the Block Storage API --long List additional fields in output Table 33.2. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 33.3. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 33.4. JSON Formatter Value Summary --noindent Whether to disable indenting the JSON Table 33.5. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print an empty table if there is no data to show. 33.2. extension show Show API extension Usage: Table 33.6. Positional Arguments Value Summary <extension> Extension to display. Currently, only network extensions are supported. (Name or Alias) Table 33.7. Optional Arguments Value Summary -h, --help Show this help message and exit Table 33.8. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 33.9. JSON Formatter Value Summary --noindent Whether to disable indenting the JSON Table 33.10. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 33.11. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print an empty table if there is no data to show.
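For illustration, the following invocations combine the options documented above. The extension alias port-security is only an example; substitute a name or alias returned by the list command on your cloud.
# List only Networking API extensions, with additional fields, formatted as JSON.
openstack extension list --network --long -f json

# Show a single network extension by name or alias (example alias shown).
openstack extension show port-security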
|
[
"openstack extension list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--compute] [--identity] [--network] [--volume] [--long]",
"openstack extension show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <extension>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/extension
|
Chapter 84. ExternalConfigurationEnv schema reference
|
Chapter 84. ExternalConfigurationEnv schema reference Used in: ExternalConfiguration Property Property type Description name string Name of the environment variable which will be passed to the Kafka Connect pods. The name of the environment variable cannot start with KAFKA_ or STRIMZI_ . valueFrom ExternalConfigurationEnvVarSource Value of the environment variable which will be passed to the Kafka Connect pods. It can be passed either as a reference to a Secret or a ConfigMap field. The field must specify exactly one Secret or ConfigMap.
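As a sketch of how this schema is typically used, the fragment below passes a Secret field to the Kafka Connect pods as an environment variable. The resource name, bootstrap address, Secret name, and key are placeholders, and the surrounding KafkaConnect spec is reduced to a minimum; adapt it to your deployment.
# Apply a minimal KafkaConnect resource whose externalConfiguration injects an
# environment variable from a Secret (all names are placeholders).
cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  externalConfiguration:
    env:
      - name: AWS_ACCESS_KEY_ID     # must not start with KAFKA_ or STRIMZI_
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsAccessKey
EOF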
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-ExternalConfigurationEnv-reference
|
Chapter 7. Using Redis Cache with dynamic plugins
|
Chapter 7. Using Redis Cache with dynamic plugins You can use the Redis cache store to improve RHDH performance and reliability. Plugins in RHDH receive dedicated cache connections, which are powered by Keyv. 7.1. Installing Redis Cache in Red Hat Developer Hub Prerequisites You have installed Red Hat Developer Hub by using either the Operator or Helm chart. You have an active Redis server. For more information on setting up an external Redis server, see the Redis official documentation . Procedure Add the following code to your app-config.yaml file: backend: cache: store: redis connection: redis://user:[email protected]:6379 useRedisSets: true 7.2. Configuring Redis Cache in Red Hat Developer Hub 7.2.1. useRedisSets The useRedisSets option lets you decide whether to use Redis sets for key management. By default, this option is set to true . When useRedisSets is enabled ( true ): A namespace for the Redis sets is created, and all generated keys are added to that namespace, enabling group management of the keys. When a key is deleted, it's removed from the main storage and the Redis set. When using the clear function to delete all keys, every key in the Redis set is checked for deletion, and the set itself is also removed. Note In high-performance scenarios, enabling useRedisSets can result in memory leaks. If you are running a high-performance application or service, you must set useRedisSets to false . When you set useRedisSets to false , the keys are handled individually and Redis sets are not utilized. This configuration might lead to performance issues in production when using the clear function, as it requires iterating over all keys for deletion.
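As a quick sanity check of the connection configured above, you can probe the Redis server directly before starting RHDH. The URL below is a placeholder; reuse the same credentials and host that you set in backend.cache.connection.
# Confirm the Redis server configured in app-config.yaml is reachable.
redis-cli -u redis://user:pass@redis.example.com:6379 ping        # expected reply: PONG

# Optionally inspect a few of the keys that plugins have written to the cache.
redis-cli -u redis://user:pass@redis.example.com:6379 --scan | head -n 10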
|
[
"backend: cache: store: redis connection: redis://user:[email protected]:6379 useRedisSets: true"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/configuring/proc-installing-and-configuring-redis-cache_running-behind-a-proxy
|
35.4. Examining Log Files
|
35.4. Examining Log Files Log Viewer can be configured to display an alert icon beside lines that contain key alert words and a warning icon beside lines that contain key warning words. To add alert words, select Edit => Preferences from the pull-down menu, and click the Alerts tab. Click the Add button to add an alert word. To delete an alert word, select the word from the list, and click Delete. The alert icon is displayed to the left of the lines that contain any of the alert words. Figure 35.4. Alerts To add warning words, select Edit => Preferences from the pull-down menu, and click the Warnings tab. Click the Add button to add a warning word. To delete a warning word, select the word from the list, and click Delete. The warning icon is displayed to the left of the lines that contain any of the warning words. Figure 35.5. Warning
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Log_Files-Examining_Log_Files
|
Chapter 1. Introduction to RHEL for Edge images
|
Chapter 1. Introduction to RHEL for Edge images A RHEL for Edge image is an rpm-ostree image that includes system packages to remotely install RHEL on Edge servers. The system packages include: Base OS package Podman as the container engine Additional RPM Package Manager (RPM) content RHEL for Edge is an immutable operating system that contains a read-only root directory, and has the following characteristics: The packages are isolated from the root directory. Each version of the operating system is a separate deployment. Therefore, you can roll back the system to a previous deployment when needed. The rpm-ostree image offers efficient updates over the network. RHEL for Edge supports multiple operating system branches and repositories. The image contains a hybrid rpm-ostree package system. You can compose customized RHEL for Edge images by using the RHEL image builder tool. You can also create RHEL for Edge images by accessing the edge management application in the Red Hat Hybrid Cloud Console platform and configuring automated management. Use the edge management application to simplify provisioning and registering your images. To learn more about edge management, see the Create RHEL for Edge images and configure automated management documentation. Warning Using RHEL for Edge customized images that were created by using the RHEL image builder on-premise version is not supported in the edge management application. See Edge management supportability. The edge management application supports building and managing only the edge-commit and edge-installer image types. Additionally, you cannot use the FIDO Device Onboarding (FDO) process with images that you create by using the edge management application. With a RHEL for Edge image, you can achieve the following benefits: Atomic upgrades You know the state of each update, and no changes are seen until you reboot your system. Custom health checks and intelligent rollbacks You can create custom health checks, and if a health check fails, the operating system rolls back to the stable state. Container-focused workflow The image updates are staged in the background, minimizing any workload interruptions to the system. Optimized Over-the-Air updates You can make sure that your systems are up-to-date, even with intermittent connectivity, thanks to efficient over-the-air (OTA) delta updates. 1.1. RHEL for Edge-supported architecture Currently, you can deploy RHEL for Edge images on AMD and Intel 64-bit systems. Note RHEL for Edge does not support ARM systems in RHEL 8. 1.2. RHEL for Edge image types and their deployments Composing and deploying a RHEL for Edge image involves two phases: Composing a RHEL rpm-ostree image using the RHEL image builder tool. You can access RHEL image builder on the command line by using the composer-cli tool, or use a graphical user interface in the RHEL web console. Deploying the image by using the RHEL installer. The image types vary in terms of their contents, and are therefore suitable for different types of deployment environments. While composing a RHEL for Edge image, you can select any of the following image types: RHEL for Edge Commit This image type delivers atomic and safe updates to a system. The edge-commit ( .tar ) image contains a full operating system, but it is not directly bootable. To boot the edge-commit image type, you must deploy it by using one of the other disk image types. You can also build edge-commit images on the edge management application.
RHEL for Edge Container This image type serves the OSTree commits by using an integrated HTTP server. The edge-container creates an OSTree commit and embeds it into an OCI container with a web server. When the edge-container image starts, the web server serves the commit as an OSTree repository. RHEL for Edge Installer The edge-installer image type is an Anaconda-based installer image that deploys a RHEL for Edge OSTree commit that is embedded in the installer image. Besides building .iso images by using the RHEL image builder tool, you can also build the RHEL for Edge Installer ( edge-installer ) image on the edge management application. RHEL for Edge Raw Image Use this image type for bare-metal platforms by flashing the RHEL Raw image to a hard disk, or boot the Raw image on a virtual machine. The edge-raw-image is a compressed raw image that consists of a file containing a partition layout with an existing deployed OSTree commit in it. RHEL for Edge Simplified Installer Use the edge-simplified-installer image type for unattended installations, where the user configuration is provided by FDO or Ignition. Both mechanisms can inject the user configuration into the image at an early stage of the boot process. After booting the Edge Simplified Installer, it provisions the RHEL for Edge image to a device with the injected user configuration. RHEL for Edge AMI Use this image to launch an EC2 instance in the AWS cloud. The edge-ami image uses the Ignition tool to inject the user configuration into the image at an early stage of the boot process. You can upload the .ami image to AWS and boot an EC2 instance in AWS. RHEL for Edge VMDK Use this image to load into vSphere and boot in a vSphere VM. The edge-vsphere image uses the Ignition tool to inject the user configuration into the image at an early stage of the boot process. Table 1.1. RHEL for Edge image types Image type File type Suitable for network-based deployments Suitable for non-network-based deployments RHEL for Edge Commit .tar Yes No RHEL for Edge Container .tar No Yes RHEL for Edge Installer .iso No Yes RHEL for Edge Raw Image .raw.xz Yes Yes RHEL for Edge Simplified Installer .iso Yes Yes RHEL for Edge AMI .ami Yes Yes RHEL for Edge VMDK .vmdk Yes Yes Additional resources Interactively installing RHEL from installation media 1.3. Non-network-based deployments Use RHEL image builder to create flexible RHEL rpm-ostree images to suit your requirements, and then use Anaconda to deploy them in your environment. You can access RHEL image builder through a command-line interface in the composer-cli tool, or use a graphical user interface in the RHEL web console.
Composing and deploying a RHEL for Edge image in non-network-based deployments involves the following high-level steps: Install and register a RHEL system Install RHEL image builder Using RHEL image builder, create a blueprint with customizations for the RHEL for Edge Container image Import the RHEL for Edge blueprint in RHEL image builder Create a RHEL for Edge image embedded in an OCI container with a web server ready to serve the commit as an OSTree repository Download the RHEL for Edge Container image file Deploy the container serving a repository with the RHEL for Edge Container commit Using RHEL image builder, create another blueprint for the RHEL for Edge Installer image Create a RHEL for Edge Installer image configured to pull the commit from the running container that was created from the RHEL for Edge Container image Download the RHEL for Edge Installer image Run the installation 1.4. Network-based deployments Use RHEL image builder to create flexible RHEL rpm-ostree images to suit your requirements, and then use Anaconda to deploy them in your environment. RHEL image builder automatically identifies the details of your deployment setup and generates the image output as an edge-commit ( .tar ) file. You can access RHEL image builder through a command-line interface in the composer-cli tool, or use a graphical user interface in the RHEL web console. You can compose and deploy the RHEL for Edge image by performing the following high-level steps: For an attended installation Install and register a RHEL system Install RHEL image builder Using RHEL image builder, create a blueprint for the RHEL for Edge image Import the RHEL for Edge blueprint in RHEL image builder Create a RHEL for Edge Commit ( .tar ) image Download the RHEL for Edge image file On the same system where you have installed RHEL image builder, install a web server to serve the RHEL for Edge Commit content. For instructions, see Setting up and configuring NGINX Extract the RHEL for Edge Commit ( .tar ) content to the running web server Create a Kickstart file that pulls the OSTree content from the running web server. For details on how to modify the Kickstart to pull the OSTree content, see Extracting the RHEL for Edge image commit Boot the RHEL installer ISO on the edge device and provide the Kickstart to it. For an unattended installation, you can customize the RHEL installation ISO and embed the Kickstart file in it. 1.5. Difference between RHEL RPM images and RHEL for Edge images You can create RHEL system images in traditional package-based RPM format and also as RHEL for Edge ( rpm-ostree ) images. You can use the traditional package-based RPMs to deploy RHEL in traditional data centers. However, with RHEL for Edge images, you can deploy RHEL on servers other than traditional data centers. These servers include systems where large amounts of data are processed close to the source where the data is generated, that is, Edge servers. The RHEL for Edge ( rpm-ostree ) images are not a package manager. They only support complete bootable file system trees, not individual files. These images do not have information regarding the individual files such as how these files were generated or anything related to their origin. The rpm-ostree images need a separate mechanism, the package manager, to install additional applications in the /var directory. With that, the rpm-ostree image keeps the operating system unchanged, while maintaining the state of the /var and /etc directories.
The atomic updates enable rollbacks and background staging of updates. Refer to the following table to learn how RHEL for Edge images differ from the package-based RHEL RPM images. Table 1.2. Difference between RHEL RPM images and RHEL for Edge images Key attributes RHEL RPM image RHEL for Edge image OS assembly You can assemble the packages locally to form an image. The packages are assembled in an OSTree, which you can install on a system. OS updates You can use yum update to apply the available updates from the enabled repositories. You can use rpm-ostree upgrade to stage an update if any new commit is available in the OSTree remote at /etc/ostree/remotes.d/ . The update takes effect on system reboot. Repository The package contains YUM repositories The package contains an OSTree remote repository User access permissions Read-write Read-only ( /usr ) Data persistence You can mount the image to any non-tmpfs mount point The /etc and /var directories are read/write enabled and contain persistent data.
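As a concrete illustration of the composer-cli workflow referenced above, the following sketch composes an edge-commit image from a minimal blueprint. The blueprint name and package selection are examples only, and the exact subcommand behavior can vary by RHEL 8 minor release; treat this as a starting point rather than a verbatim procedure.
# Minimal blueprint; the name and package are examples.
cat > edge-blueprint.toml <<'EOF'
name = "edge-example"
description = "Example RHEL for Edge commit"
version = "0.0.1"

[[packages]]
name = "vim-enhanced"
version = "*"
EOF

composer-cli blueprints push edge-blueprint.toml
composer-cli compose start-ostree edge-example edge-commit   # note the returned compose UUID
composer-cli compose status                                  # wait until the compose is FINISHED
composer-cli compose image <UUID>                            # downloads the edge-commit .tar file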
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_installing_and_managing_rhel_for_edge_images/introducing-rhel-for-edge-images_composing-installing-managing-rhel-for-edge-images
|
Chapter 9. Upgrading AMQ Streams
|
Chapter 9. Upgrading AMQ Streams AMQ Streams can be upgraded to version 2.1 to take advantage of new features and enhancements, performance improvements, and security options. As part of the upgrade, you upgrade Kafka to the latest supported version. Each Kafka release introduces new features, improvements, and bug fixes to your AMQ Streams deployment. AMQ Streams can be downgraded to the previous version if you encounter issues with the newer version. Released versions of AMQ Streams are available from the AMQ Streams software downloads page. Upgrade paths Two upgrade paths are possible: Incremental Upgrading AMQ Streams from the previous minor version to version 2.1. Multi-version Upgrading AMQ Streams from an old version to version 2.1 within a single upgrade (skipping one or more intermediate versions). For example, upgrading from AMQ Streams 1.8 directly to AMQ Streams 2.1. Kafka version support Kafka 3.1.0 is supported for production use. Kafka 3.0.0 is supported only for the purpose of upgrading to AMQ Streams 2.1. Note You can upgrade to a higher Kafka version as long as it is supported by your version of AMQ Streams. In some cases, you can also downgrade to a supported Kafka version. Downtime and availability If topics are configured for high availability, upgrading AMQ Streams should not cause any downtime for consumers and producers that publish and read data from those topics. Highly available topics have a replication factor of at least 3 and partitions distributed evenly among the brokers. Upgrading AMQ Streams triggers rolling updates, where all brokers are restarted in turn, at different stages of the process. During rolling updates, not all brokers are online, so overall cluster availability is temporarily reduced. A reduction in cluster availability increases the chance that a broker failure will result in lost messages. 9.1. Required upgrade sequence To upgrade brokers and clients without downtime, you must complete the AMQ Streams upgrade procedures in the following order: Make sure your OpenShift cluster version is supported. AMQ Streams 2.1 is supported by OpenShift 4.6 to 4.10. You can upgrade OpenShift with minimal downtime. When upgrading AMQ Streams from 1.7 or earlier, update existing custom resources to support the v1beta2 API version. Update your Cluster Operator to a new AMQ Streams version. Upgrade all Kafka brokers and client applications to the latest supported Kafka version. Optional: Upgrade consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol for partition rebalances. 9.1.1. Cluster Operator upgrade options How you upgrade the Cluster Operator depends on the way you deployed it. Using installation files If you deployed the Cluster Operator using the installation YAML files, perform your upgrade by modifying the Operator installation files, as described in Upgrading the Cluster Operator. Using the OperatorHub If you deployed AMQ Streams from the OperatorHub, use the Operator Lifecycle Manager (OLM) to change the update channel for the AMQ Streams operators to a new AMQ Streams version. Depending on your chosen upgrade strategy, after updating the channel, either: An automatic upgrade is initiated A manual upgrade requires approval before the installation begins For more information on using the OperatorHub to upgrade Operators, see Upgrading installed Operators in the OpenShift documentation. 9.1.2.
Upgrading from AMQ Streams 1.7 or earlier using the OperatorHub Action required if upgrading from AMQ Streams 1.7 or earlier using the OperatorHub The Red Hat Integration - AMQ Streams Operator supports v1beta2 custom resources only. Before you upgrade the AMQ Streams Operator to version 2.1 in the OperatorHub, custom resources must be upgraded to v1beta2 . The v1beta2 API version for all custom resources was introduced with AMQ Streams 1.7. For AMQ Streams 1.8, the v1alpha1 and v1beta1 API versions were removed from all AMQ Streams custom resources apart from KafkaTopic and KafkaUser . If you are upgrading from an AMQ Streams version prior to version 1.7: Upgrade to AMQ Streams 1.7. Download the Red Hat AMQ Streams API Conversion Tool provided with AMQ Streams 1.8 from the AMQ Streams software downloads page . Convert custom resources and CRDs to v1beta2 . For more information, see the AMQ Streams 1.7 upgrade documentation . In the OperatorHub, delete version 1.7.0 of the Red Hat Integration - AMQ Streams Operator . If it also exists, delete version 2.1.0 of the Red Hat Integration - AMQ Streams Operator . If it does not exist, go to the next step. If the Approval Strategy for the AMQ Streams Operator was set to Automatic , version 2.1.0 of the operator might already exist in your cluster. If you did not convert custom resources and CRDs to the v1beta2 API version before the upgrade, the operator-managed custom resources and CRDs will be using the old API version. As a result, the 2.1.0 Operator is stuck in Pending status. In this situation, you need to delete version 2.1.0 of the Red Hat Integration - AMQ Streams Operator as well as version 1.7.0. If you delete both operators, reconciliations are paused until the new operator version is installed. Follow the next step immediately so that any changes to custom resources are not delayed. In the OperatorHub, install version 2.1.0 of the Red Hat Integration - AMQ Streams Operator immediately. The installed 2.1.0 operator begins to watch the cluster and performs rolling updates. You might notice a temporary decrease in cluster performance during this process. Note As an alternative, you can install the custom resources from version 1.7, convert the resources, and then upgrade to 1.8 or newer. 9.2. Upgrading OpenShift with minimal downtime If you are upgrading OpenShift, refer to the OpenShift upgrade documentation to check the upgrade path and the steps to upgrade your nodes correctly. Before upgrading OpenShift, check the supported versions for your version of AMQ Streams . When performing your upgrade, you'll want to keep your Kafka clusters available. You can employ one of the following strategies: Configuring pod disruption budgets Rolling pods by one of these methods: Using the AMQ Streams Drain Cleaner Manually by applying an annotation to your pod You have to configure the pod disruption budget before using one of the methods to roll your pods. For Kafka to stay operational, topics must also be replicated for high availability. This requires topic configuration that specifies a replication factor of at least 3 and a minimum number of in-sync replicas set to 1 less than the replication factor. Kafka topic replicated for high availability apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # ... min.insync.replicas: 2 # ...
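Before starting the upgrade, you can check whether an existing topic already matches this configuration by inspecting its KafkaTopic resource. This is a minimal check, assuming a topic named my-topic managed by the Topic Operator in the current namespace:
oc get kafkatopic my-topic -o yaml | grep replicas
# For a highly available topic, expect replicas: 3 and min.insync.replicas: 2 in the output.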
In a highly available environment, the Cluster Operator maintains a minimum number of in-sync replicas for topics during the upgrade process so that there is no downtime. 9.2.1. Rolling pods using the AMQ Streams Drain Cleaner You can use the AMQ Streams Drain Cleaner tool to evict nodes during an upgrade. The AMQ Streams Drain Cleaner annotates pods with a rolling update pod annotation. This informs the Cluster Operator to perform a rolling update of an evicted pod. A pod disruption budget allows only a specified number of pods to be unavailable at a given time. During planned maintenance of Kafka broker pods, a pod disruption budget ensures Kafka continues to run in a highly available environment. You specify a pod disruption budget using a template customization for a Kafka component. By default, pod disruption budgets allow only a single pod to be unavailable at a given time. To use the Drain Cleaner, you set maxUnavailable to 0 (zero). Reducing the maximum pod disruption budget to zero prevents voluntary disruptions, so pods must be evicted manually. Specifying a pod disruption budget apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... template: podDisruptionBudget: maxUnavailable: 0 # ... 9.2.2. Rolling pods manually while keeping topics available During an upgrade, you can trigger a manual rolling update of pods through the Cluster Operator. Rolling updates restart the pods of a resource with new pods. As with using the AMQ Streams Drain Cleaner, you'll need to set the maxUnavailable value to zero for the pod disruption budget. You need to watch the pods that need to be drained. You then add a pod annotation to make the update. Here, the annotation updates a Kafka broker. Performing a manual rolling update on a Kafka broker pod oc annotate pod <cluster_name> -kafka- <index> strimzi.io/manual-rolling-update=true You replace <cluster_name> with the name of the cluster. Kafka broker pods are named <cluster-name> -kafka- <index> , where <index> starts at zero and ends at the total number of replicas minus one. For example, my-cluster-kafka-0 . Additional resources OpenShift documentation Draining pods using the AMQ Streams Drain Cleaner Replicating topics for high availability PodDisruptionBudgetTemplate schema reference Performing a rolling update using a pod annotation 9.3. AMQ Streams custom resource upgrades When upgrading AMQ Streams to 2.1 from 1.7 or earlier, you must ensure that your custom resources are using API version v1beta2 . You must upgrade the Custom Resource Definitions and the custom resources before upgrading to AMQ Streams 1.8 or newer. To perform the upgrade, you can use the API conversion tool provided with AMQ Streams 1.7. For more information, see the AMQ Streams 1.7 upgrade documentation . 9.4. Upgrading the Cluster Operator This procedure describes how to upgrade a Cluster Operator deployment to use AMQ Streams 2.1. Follow this procedure if you deployed the Cluster Operator using the installation YAML files. The availability of Kafka clusters managed by the Cluster Operator is not affected by the upgrade operation. Note Refer to the documentation supporting a specific version of AMQ Streams for information on how to upgrade to that version. Prerequisites An existing Cluster Operator deployment is available. You have downloaded the release artifacts for AMQ Streams 2.1 .
Procedure Take note of any configuration changes made to the existing Cluster Operator resources (in the /install/cluster-operator directory). Any changes will be overwritten by the new version of the Cluster Operator. Update your custom resources to reflect the supported configuration options available for AMQ Streams version 2.1. Update the Cluster Operator. Modify the installation files for the new Cluster Operator version according to the namespace the Cluster Operator is running in. On Linux, use: On MacOS, use: If you modified one or more environment variables in your existing Cluster Operator Deployment , edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables. When you have an updated configuration, deploy it along with the rest of the installation resources: oc replace -f install/cluster-operator Wait for the rolling updates to complete. If the new Operator version no longer supports the Kafka version you are upgrading from, the Cluster Operator returns an error message to say the version is not supported. Otherwise, no error message is returned. If the error message is returned, upgrade to a Kafka version that is supported by the new Cluster Operator version: Edit the Kafka custom resource. Change the spec.kafka.version property to a supported Kafka version. If the error message is not returned, go to the next step. You will upgrade the Kafka version later. Get the image for the Kafka pod to ensure the upgrade was successful: oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}' The image tag shows the new Operator version. For example: registry.redhat.io/amq7/amq-streams-kafka-30-rhel8:2.1.0 Your Cluster Operator was upgraded to version 2.1 but the version of Kafka running in the cluster it manages is unchanged. Following the Cluster Operator upgrade, you must perform a Kafka upgrade . 9.5. Upgrading Kafka After you have upgraded your Cluster Operator to 2.1, the next step is to upgrade all Kafka brokers to the latest supported version of Kafka. Kafka upgrades are performed by the Cluster Operator through rolling updates of the Kafka brokers. The Cluster Operator initiates rolling updates based on the Kafka cluster configuration. If Kafka.spec.kafka.config contains... The Cluster Operator initiates... Both the inter.broker.protocol.version and the log.message.format.version . A single rolling update. After the update, the inter.broker.protocol.version must be updated manually, followed by log.message.format.version . Changing each will trigger a further rolling update. Either the inter.broker.protocol.version or the log.message.format.version . Two rolling updates. No configuration for the inter.broker.protocol.version or the log.message.format.version . Two rolling updates. Important From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. The log.message.format.version property for brokers and the message.format.version property for topics are deprecated and will be removed in a future release of Kafka. As part of the Kafka upgrade, the Cluster Operator initiates rolling updates for ZooKeeper. A single rolling update occurs even if the ZooKeeper version is unchanged. Additional rolling updates occur if the new version of Kafka requires a new ZooKeeper version. 9.5.1.
Kafka versions Kafka's log message format version and inter-broker protocol version specify, respectively, the log format version appended to messages and the version of the Kafka protocol used in a cluster. To ensure the correct versions are used, the upgrade process involves making configuration changes to existing Kafka brokers and code changes to client applications (consumers and producers). The following table shows the differences between Kafka versions: Kafka version Interbroker protocol version Log message format version ZooKeeper version 3.1.0 3.1 3.1 3.6.3 3.0.0 3.0 3.0 3.6.3 Inter-broker protocol version In Kafka, the network protocol used for inter-broker communication is called the inter-broker protocol . Each version of Kafka has a compatible version of the inter-broker protocol. The minor version of the protocol typically increases to match the minor version of Kafka, as shown in the preceding table. The inter-broker protocol version is set cluster wide in the Kafka resource. To change it, you edit the inter.broker.protocol.version property in Kafka.spec.kafka.config . Log message format version When a producer sends a message to a Kafka broker, the message is encoded using a specific format. The format can change between Kafka releases, so messages specify which version of the message format they were encoded with. The properties used to set a specific message format version are as follows: message.format.version property for topics log.message.format.version property for Kafka brokers From Kafka 3.0.0, the message format version values are assumed to match the inter.broker.protocol.version and don't need to be set. The values reflect the Kafka version used. When upgrading to Kafka 3.0.0 or higher, you can remove these settings when you update the inter.broker.protocol.version . Otherwise, set the message format version based on the Kafka version you are upgrading to. The default value of message.format.version for a topic is defined by the log.message.format.version that is set on the Kafka broker. You can manually set the message.format.version of a topic by modifying its topic configuration. 9.5.2. Strategies for upgrading clients The right approach to upgrading your client applications (including Kafka Connect connectors) depends on your particular circumstances. Consuming applications need to receive messages in a message format that they understand. You can ensure that this is the case in one of two ways: By upgrading all the consumers for a topic before upgrading any of the producers. By having the brokers down-convert messages to an older format. Using broker down-conversion puts extra load on the brokers, so it is not ideal to rely on down-conversion for all topics for a prolonged period of time. For brokers to perform optimally they should not be down converting messages at all. Broker down-conversion is configured in two ways: The topic-level message.format.version configures it for a single topic. The broker-level log.message.format.version is the default for topics that do not have the topic-level message.format.version configured. Messages published to a topic in a new-version format will be visible to consumers, because brokers perform down-conversion when they receive messages from producers, not when they are sent to consumers. Common strategies you can use to upgrade your clients are described as follows. Other strategies for upgrading client applications are also possible. 
Important The steps outlined in each strategy change slightly when upgrading to Kafka 3.0.0 or later. From Kafka 3.0.0, the message format version values are assumed to match the inter.broker.protocol.version and don't need to be set. Broker-level consumers first strategy Upgrade all the consuming applications. Change the broker-level log.message.format.version to the new version. Upgrade all the producing applications. This strategy is straightforward, and avoids any broker down-conversion. However, it assumes that all consumers in your organization can be upgraded in a coordinated way, and it does not work for applications that are both consumers and producers. There is also a risk that, if there is a problem with the upgraded clients, new-format messages might get added to the message log so that you cannot revert to the previous consumer version. Topic-level consumers first strategy For each topic: Upgrade all the consuming applications. Change the topic-level message.format.version to the new version. Upgrade all the producing applications. This strategy avoids any broker down-conversion, and means you can proceed on a topic-by-topic basis. It does not work for applications that are both consumers and producers of the same topic. Again, it has the risk that, if there is a problem with the upgraded clients, new-format messages might get added to the message log. Topic-level consumers first strategy with down conversion For each topic: Change the topic-level message.format.version to the old version (or rely on the topic defaulting to the broker-level log.message.format.version ). Upgrade all the consuming and producing applications. Verify that the upgraded applications function correctly. Change the topic-level message.format.version to the new version. This strategy requires broker down-conversion, but the load on the brokers is minimized because it is only required for a single topic (or small group of topics) at a time. It also works for applications that are both consumers and producers of the same topic. This approach ensures that the upgraded producers and consumers are working correctly before you commit to using the new message format version. The main drawback of this approach is that it can be complicated to manage in a cluster with many topics and applications. Note It is also possible to apply multiple strategies. For example, for the first few applications and topics the "per-topic consumers first, with down conversion" strategy can be used. When this has proved successful, another, more efficient strategy can be used instead. 9.5.3. Kafka version and image mappings When upgrading Kafka, consider your settings for the STRIMZI_KAFKA_IMAGES environment variable and the Kafka.spec.kafka.version property. Each Kafka resource can be configured with a Kafka.spec.kafka.version . The Cluster Operator's STRIMZI_KAFKA_IMAGES environment variable provides a mapping between the Kafka version and the image to be used when that version is requested in a given Kafka resource. If Kafka.spec.kafka.image is not configured, the default image for the given version is used. If Kafka.spec.kafka.image is configured, the default image is overridden. Warning The Cluster Operator cannot validate that an image actually contains a Kafka broker of the expected version. Take care to ensure that the given image corresponds to the given Kafka version. 9.5.4.
Upgrading Kafka brokers and client applications This procedure describes how to upgrade an AMQ Streams Kafka cluster to the latest supported Kafka version. Compared to your current Kafka version, the new version might support a higher log message format version or inter-broker protocol version , or both. Follow the steps to upgrade these versions, if required. For more information, see Section 9.5.1, "Kafka versions" . You should also choose a strategy for upgrading clients . Kafka clients are upgraded in step 6 of this procedure. Prerequisites For the Kafka resource to be upgraded, check that: The Cluster Operator, which supports both versions of Kafka, is up and running. The Kafka.spec.kafka.config does not contain options that are not supported in the new Kafka version. Procedure Update the Kafka cluster configuration: oc edit kafka my-cluster If configured, ensure that Kafka.spec.kafka.config has the log.message.format.version and inter.broker.protocol.version set to the defaults for the current Kafka version. For example, if upgrading from Kafka version 3.0.0 to 3.1.0: kind: Kafka spec: # ... kafka: version: 3.0.0 config: log.message.format.version: "3.0" inter.broker.protocol.version: "3.0" # ... If log.message.format.version and inter.broker.protocol.version are not configured, AMQ Streams automatically updates these versions to the current defaults after the update to the Kafka version in the step. Note The value of log.message.format.version and inter.broker.protocol.version must be strings to prevent them from being interpreted as floating point numbers. Change the Kafka.spec.kafka.version to specify the new Kafka version; leave the log.message.format.version and inter.broker.protocol.version at the defaults for the current Kafka version. Note Changing the kafka.version ensures that all brokers in the cluster will be upgraded to start using the new broker binaries. During this process, some brokers are using the old binaries while others have already upgraded to the new ones. Leaving the inter.broker.protocol.version unchanged ensures that the brokers can continue to communicate with each other throughout the upgrade. For example, if upgrading from Kafka 3.0.0 to 3.1.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 3.1.0 1 config: log.message.format.version: "3.0" 2 inter.broker.protocol.version: "3.0" 3 # ... 1 Kafka version is changed to the new version. 2 Message format version is unchanged. 3 Inter-broker protocol version is unchanged. Warning You cannot downgrade Kafka if the inter.broker.protocol.version for the new Kafka version changes. The inter-broker protocol version determines the schemas used for persistent metadata stored by the broker, including messages written to __consumer_offsets . The downgraded cluster will not understand the messages. If the image for the Kafka cluster is defined in the Kafka custom resource, in Kafka.spec.kafka.image , update the image to point to a container image with the new Kafka version. See Kafka version and image mappings Save and exit the editor, then wait for rolling updates to complete. Check the progress of the rolling updates by watching the pod state transitions: oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}' The rolling updates ensure that each pod is using the broker binaries for the new version of Kafka. Depending on your chosen strategy for upgrading clients , upgrade all client applications to use the new version of the client binaries. 
If required, set the version property for Kafka Connect and MirrorMaker as the new version of Kafka: For Kafka Connect, update KafkaConnect.spec.version . For MirrorMaker, update KafkaMirrorMaker.spec.version . For MirrorMaker 2.0, update KafkaMirrorMaker2.spec.version . If configured, update the Kafka resource to use the new inter.broker.protocol.version version. Otherwise, go to step 9. For example, if upgrading to Kafka 3.1.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 3.1.0 config: log.message.format.version: "3.0" inter.broker.protocol.version: "3.1" # ... Wait for the Cluster Operator to update the cluster. If configured, update the Kafka resource to use the new log.message.format.version version. Otherwise, go to step 10. For example, if upgrading to Kafka 3.1.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 3.1.0 config: log.message.format.version: "3.1" inter.broker.protocol.version: "3.1" # ... Important From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. Wait for the Cluster Operator to update the cluster. The Kafka cluster and clients are now using the new Kafka version. The brokers are configured to send messages using the inter-broker protocol version and message format version of the new version of Kafka. Following the Kafka upgrade, if required, you can: Upgrade consumers to use the incremental cooperative rebalance protocol 9.6. Upgrading consumers to cooperative rebalancing You can upgrade Kafka consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol for partition rebalances instead of the default eager rebalance protocol. The new protocol was added in Kafka 2.4.0. Consumers keep their partition assignments in a cooperative rebalance and only revoke them at the end of the process, if needed to achieve a balanced cluster. This reduces the unavailability of the consumer group or Kafka Streams application. Note Upgrading to the incremental cooperative rebalance protocol is optional. The eager rebalance protocol is still supported. Prerequisites You have upgraded Kafka brokers and client applications to Kafka 3.1.0. Procedure To upgrade a Kafka consumer to use the incremental cooperative rebalance protocol: Replace the Kafka clients .jar file with the new version. In the consumer configuration, append cooperative-sticky to the partition.assignment.strategy . For example, if the range strategy is set, change the configuration to range, cooperative-sticky . Restart each consumer in the group in turn, waiting for the consumer to rejoin the group after each restart. Reconfigure each consumer in the group by removing the earlier partition.assignment.strategy from the consumer configuration, leaving only the cooperative-sticky strategy. Restart each consumer in the group in turn, waiting for the consumer to rejoin the group after each restart. To upgrade a Kafka Streams application to use the incremental cooperative rebalance protocol: Replace the Kafka Streams .jar file with the new version. In the Kafka Streams configuration, set the upgrade.from configuration parameter to the Kafka version you are upgrading from (for example, 2.3). Restart each of the stream processors (nodes) in turn. Remove the upgrade.from configuration parameter from the Kafka Streams configuration. Restart each consumer in the group in turn.
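As an illustration of the consumer configuration change, the partition.assignment.strategy property takes assignor class names; the range and cooperative-sticky strategies mentioned above correspond to the classes shown in this sketch (the properties file name and layout are assumptions):
# consumer.properties for the first rolling restart: both assignors listed, range first
partition.assignment.strategy=org.apache.kafka.clients.consumer.RangeAssignor,org.apache.kafka.clients.consumer.CooperativeStickyAssignor
# consumer.properties for the second rolling restart: only the cooperative assignor remains
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor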
|
[
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # template: podDisruptionBudget: maxUnavailable: 0",
"annotate pod <cluster_name> -kafka- <index> strimzi.io/manual-rolling-update=true",
"sed -i 's/namespace: .*/namespace: <my_cluster_operator_namespace> /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: <my_cluster_operator_namespace> /' install/cluster-operator/*RoleBinding*.yaml",
"replace -f install/cluster-operator",
"get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"registry.redhat.io/amq7/amq-streams-kafka-30-rhel8:2.1.0",
"edit kafka my-cluster",
"kind: Kafka spec: # kafka: version: 3.0.0 config: log.message.format.version: \"3.0\" inter.broker.protocol.version: \"3.0\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.1.0 1 config: log.message.format.version: \"3.0\" 2 inter.broker.protocol.version: \"3.0\" 3 #",
"get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.1.0 config: log.message.format.version: \"3.0\" inter.broker.protocol.version: \"3.1\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.1.0 config: log.message.format.version: \"3.1\" inter.broker.protocol.version: \"3.1\" #"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/deploying_and_upgrading_amq_streams_on_openshift/assembly-upgrade-str
|
C.2. Identity Management Log Files and Directories
|
C.2. Identity Management Log Files and Directories Table C.9. IdM Server and Client Log Files and Directories Directory or File Description /var/log/ipaserver-install.log The installation log for the IdM server. /var/log/ipareplica-install.log The installation log for the IdM replica. /var/log/ipaclient-install.log The installation log for the IdM client. /var/log/sssd/ Log files for SSSD. ~/.ipa/log/cli.log The log file for errors returned by XML-RPC calls and responses by the ipa utility. Created in the home directory for the system user who runs the tools, who might have a different user name than the IdM user. /etc/logrotate.d/ The log rotation policies for DNS, SSSD, Apache, Tomcat, and Kerberos. /etc/pki/pki-tomcat/logging.properties This link points to the default Certificate Authority logging configuration at /usr/share/pki/server/conf/logging.properties . Table C.10. Apache Server Log Files Directory or File Description /var/log/httpd/ Log files for the Apache web server. /var/log/httpd/access_log Standard access and error logs for Apache servers. Messages specific to IdM are recorded along with the Apache messages because the IdM web UI and the XML-RPC command-line interface use Apache. /var/log/httpd/error_log For details, see Log Files in the Apache documentation. Table C.11. Certificate System Log Files Directory or File Description /var/log/pki/pki-ca-spawn. time_of_installation .log The installation log for the IdM CA. /var/log/pki/pki-kra-spawn. time_of_installation .log The installation log for the IdM KRA. /var/log/pki/pki-tomcat/ The top level directory for PKI operation logs. Contains CA and KRA logs. /var/log/pki/pki-tomcat/ca/ Directory with logs related to certificate operations. In IdM, these logs are used for service principals, hosts, and other entities which use certificates. /var/log/pki/pki-tomcat/kra Directory with logs related to KRA. /var/log/messages Includes certificate error messages among other system messages. For details, see Configuring Subsystem Logs in the Red Hat Certificate System Administration Guide . Table C.12. Directory Server Log Files Directory or File Description /var/log/dirsrv/slapd- REALM_NAME / Log files associated with the Directory Server instance used by the IdM server. Most operational data recorded here are related to server-replica interactions. /var/log/dirsrv/slapd- REALM_NAME /access /var/log/dirsrv/slapd- REALM_NAME /errors Contain detailed information about attempted access and operations for the domain Directory Server instance. /var/log/dirsrv/slapd- REALM_NAME /audit Contains audit trails of all Directory Server operations when auditing is enabled in the Directory Server configuration. For details, see Monitoring Server and Database Activity and Log File Reference in the Red Hat Directory Server documentation. Table C.13. Kerberos Log Files Directory or File Description /var/log/krb5kdc.log The primary log file for the Kerberos KDC server. /var/log/kadmind.log The primary log file for the Kerberos administration server. The locations of these files are configured in the krb5.conf file. They can be different on some systems. Table C.14. DNS Log Files Directory or File Description /var/log/messages Includes DNS error messages among other system messages. DNS logging in this file is not enabled by default. To enable it, run the # /usr/sbin/rndc querylog command. To disable logging, run the command again. Table C.15. Custodia Log Files Directory or File Description /var/log/custodia/ Log file directory for the Custodia service. 
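For example, to use the DNS query logging toggle described in Table C.14, you can enable it, watch the messages log, and then run the same command again to disable it. The following commands are a sketch and assume they are run as root on the IdM DNS server:
/usr/sbin/rndc querylog
tail -f /var/log/messages
/usr/sbin/rndc querylog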
Additional Resources See Using the Journal in the System Administrator's Guide for information on how to use the journalctl utility. You can use journalctl to view the logging output of systemd unit files.
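As a brief illustration of the journalctl approach, the following commands display recent log output for services commonly running on an IdM server; the unit names are assumptions and can differ between releases:
journalctl -u sssd --since "1 hour ago"
journalctl -u httpd -u krb5kdc --since today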
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/log-file-ref
|
4.38. ctdb
|
4.38. ctdb 4.38.1. RHBA-2011:1574 - ctdb bug fix and enhancement update Updated ctdb packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. The ctdb packages provide a clustered database based on Samba's Trivial Database (TDB) used to store temporary data. The ctdb packages have been upgraded to upstream version 1.0.114, which provides a number of bug fixes over the previous version. (BZ# 701944 ) Bug Fix BZ# 728545 Prior to this update, the ctdb daemon leaked a file descriptor to anon_inodefs. This update modifies ctdb so that this file descriptor can no longer leak. Enhancement BZ# 672641 This update adds support for Clustered Samba on top of GFS2 as a Technology Preview. All users of ctdb are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/ctdb
|
Chapter 28. Billing API
|
Chapter 28. Billing API The Billing API provides a way to automate common billing processes. All the endpoints of the Billing API can be found in the Admin Portal under Documentation (?) > 3scale API Docs > Billing API . The Billing API requires a valid access token which meets the following requirements: it should belong to either an admin user of the provider account, or a member user with "Billing" permissions. it should include "Billing API" scope. Note that when an invoice ID is required as a parameter, it refers to the invoice ID, and not the Friendly invoice ID. The XML response of the API endpoints is mostly self-explanatory, and the fields of the Invoice represent the same information as in the web and PDF representation. Some notable fields of the response: creation_type : can have the following values: 'manual' for invoices created manually or 'background' for invoices created by the 3scale automated billing process. provider : the details of the API provider (the admin account), corresponds to the Issued by section of the invoice. buyer : the details of the developer account, corresponds to the Issued to section of the invoice. The XML representation of the invoice also includes the list of Line Items under the line-items field. For some line items, typically the ones created automatically, apart from the expected name, description, quantity and cost price, you can see the following: type : the type of the line item, can have the following values: LineItem::PlanCost - for line items for fixed plan costs. LineItem::VariableCost - for line items for variable costs. metric_id : for variable costs line items - the ID of the metric that the cost is associated with. contract_id : the ID of the service or application that the cost is associated with.
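As an illustration only, listing invoices with curl typically looks like the following call. The endpoint path and parameters shown here are assumptions; confirm the exact paths in the Admin Portal under 3scale API Docs > Billing API before using them:
curl -s "https://<your-admin-domain>/api/invoices.xml?access_token=<access_token>"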
| null |
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/billing-api
|
Chapter 1. Embedding in a RHEL for Edge image using image builder
|
Chapter 1. Embedding in a RHEL for Edge image using image builder Use this guide to build a RHEL image containing MicroShift. 1.1. Preparing for image building Use the image builder tool to compose customized Red Hat Enterprise Linux for Edge (RHEL for Edge) images optimized for edge deployments. You can run a MicroShift cluster with your applications on a RHEL for Edge virtual machine for development and testing first, then use your whole solution in edge production environments. Use the following RHEL documentation to understand the full details of using RHEL for Edge: Read Introduction to RHEL for Edge images . To build a Red Hat Enterprise Linux for Edge (RHEL for Edge) 9.4 image for a given CPU architecture, you need a RHEL 9.4 build host of the same CPU architecture that meets the Image builder system requirements . Follow the instructions in Installing image builder to install image builder and the composer-cli tool. 1.2. Enabling extended support repositories for image building If you have an extended support (EUS) release of MicroShift or Red Hat Enterprise Linux (RHEL), you must enable the RHEL EUS repositories for image builder to use. If you do not have an EUS version, you can skip these steps. Prerequisites You have an EUS version of MicroShift or RHEL or are updating to one. You have root-user access to your build host. You reviewed the Red Hat Device Edge release compatibility matrix . Warning Keeping component versions in a supported configuration of Red Hat Device Edge can require updating MicroShift and RHEL at the same time. Ensure that your version of RHEL is compatible with the version of MicroShift you are updating to, especially if you are updating MicroShift across two minor versions. Otherwise, you can create an unsupported configuration, break your cluster, or both. For more information, see the Red Hat Device Edge release compatibility matrix . Procedure Create the /etc/osbuild-composer/repositories directory by running the following command: USD sudo mkdir -p /etc/osbuild-composer/repositories Copy the /usr/share/osbuild-composer/repositories/rhel-9.4.json file into the /etc/osbuild-composer/repositories directory by running the following command: USD sudo cp /usr/share/osbuild-composer/repositories/rhel-9.4.json /etc/osbuild-composer/repositories/rhel-9.4.json Update the baseos source by modifying the /etc/osbuild-composer/repositories/rhel-9.4.json file with the following values: # ... "baseurl": "https://cdn.redhat.com/content/eus/rhel<9>/<9.4>//baseos/os", 1 # ... 1 Replace <9> with the major RHEL version you are using, and replace <9.4> with the <major.minor> version. Be certain that the RHEL version you choose is compatible with the MicroShift version you are using. Optional. Apply the baseos update by running the following command: USD sudo sed -i "s,dist/rhel<9>/<9.4>/USD(uname -m)/baseos/,eus/rhel<9>/<9.4>/USD(uname -m)/baseos/,g" \ /etc/osbuild-composer/repositories/rhel-<9.4>.json 1 1 Replace <9> with the major RHEL version you are using, and replace <9.4> with the <major.minor> version. Be certain that the RHEL version you choose is compatible with the MicroShift version you are using. Update the appstream source by modifying the /etc/osbuild-composer/repositories/rhel-<major.minor>.json file with the following values: # ... "baseurl": "https://cdn.redhat.com/content/eus/rhel<9>/<9.4>//appstream/os", 1 # ... 1 Replace <9> with the major RHEL version you are using, and replace <9.4> with the <major.minor> version. 
Be certain that the RHEL version you choose is compatible with the MicroShift version you are using. Optional. Apply the appstream update by running the following command: USD sudo sed -i "s,dist/rhel<9>/<9.4>/USD(uname -m)/appstream/,eus/rhel<9>/<9.4>/USD(uname -m)/appstream/,g" \ /etc/osbuild-composer/repositories/rhel-<9.4>.json 1 1 Replace <9> with the major RHEL version you are using, and replace <9.4> with the <major.minor> version. Be certain that the RHEL version you choose is compatible with the MicroShift version you are using. Verification You can verify the repositories by using the composer-cli tool to display information about the source. Verify the baseos source by running the following command: USD sudo composer-cli sources info baseos | grep 'url =' Example output url = "https://cdn.redhat.com/content/eus/rhel9/9.4/x86_64/baseos/os" Verify the appstream source by running the following command: USD sudo composer-cli sources info appstream | grep 'url =' Example output url = "https://cdn.redhat.com/content/eus/rhel9/9.4/x86_64/appstream/os" 1.3. Adding MicroShift repositories to image builder Use the following procedure to add the MicroShift repositories to image builder on your build host. Prerequisites Your build host meets the image builder system requirements. You have installed and set up image builder and the composer-cli tool. You have root-user access to your build host. Procedure Create an image builder configuration file for adding the rhocp-4.18 RPM repository source required to pull MicroShift RPMs by running the following command: cat > rhocp-4.18.toml <<EOF id = "rhocp-4.18" name = "Red Hat OpenShift Container Platform 4.18 for RHEL 9" type = "yum-baseurl" url = "https://cdn.redhat.com/content/dist/layered/rhel9/USD(uname -m)/rhocp/4.18/os" check_gpg = true check_ssl = true system = false rhsm = true EOF Create an image builder configuration file for adding the fast-datapath RPM repository by running the following command: cat > fast-datapath.toml <<EOF id = "fast-datapath" name = "Fast Datapath for RHEL 9" type = "yum-baseurl" url = "https://cdn.redhat.com/content/dist/layered/rhel9/USD(uname -m)/fast-datapath/os" check_gpg = true check_ssl = true system = false rhsm = true EOF Add the sources to the image builder by running the following commands: USD sudo composer-cli sources add rhocp-4.18.toml USD sudo composer-cli sources add fast-datapath.toml Verification Confirm that the sources were added properly by running the following command: USD sudo composer-cli sources list Example output appstream baseos fast-datapath rhocp-4.18 Additional resources Image builder system requirements Installing image builder 1.4. Adding the MicroShift service to a blueprint Adding the MicroShift RPM package to an image builder blueprint enables the build of a RHEL for Edge image with MicroShift embedded. Start with step 1 to create your own minimal blueprint file which results in a faster MicroShift installation. Start with step 2 to use the generated blueprint for installation which includes all the RPM packages and container images. This is a longer installation process, but a faster start up because container references are accessed locally. Important Replace <microshift_blueprint.toml> in the following procedures with the name of the TOML file you are using. Replace <microshift_blueprint> in the following procedures with the name you want to use for your blueprint. 
Procedure Use the following example to create your own blueprint file: Custom image builder blueprint example cat > <microshift_blueprint.toml> <<EOF 1 name = " <microshift_blueprint> " 2 description = "" version = "0.0.1" modules = [] groups = [] [[packages]] name = "microshift" version = "4.18.1" 3 [customizations.services] enabled = ["microshift"] EOF 1 The name of the TOML file. 2 The name of the blueprint. 3 Substitute the value for the version you want. For example, insert 4.18.1 to download the MicroShift 4.18.1 RPMs. Optional. Use the blueprint installed in the /usr/share/microshift/blueprint directory that is specific to your platform architecture. See the following example snippet for an explanation of the blueprint sections: Generated image builder blueprint example snippet name = "microshift_blueprint" description = "MicroShift 4.17.1 on x86_64 platform" version = "0.0.1" modules = [] groups = [] [[packages]] 1 name = "microshift" version = "4.17.1" ... ... [customizations.services] 2 enabled = ["microshift"] [customizations.firewall] ports = ["22:tcp", "80:tcp", "443:tcp", "5353:udp", "6443:tcp", "30000-32767:tcp", "30000-32767:udp"] ... ... [[containers]] 3 source = "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f41e79c17e8b41f1b0a5a32c3e2dd7cd15b8274554d3f1ba12b2598a347475f4" [[containers]] source = "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbc65f1fba7d92b36cf7514cd130fe83a9bd211005ddb23a8dc479e0eea645fd" ... ... EOF 1 References for all non-optional MicroShift RPM packages using the same version compatible with the microshift-release-info RPM. 2 References for automatically enabling MicroShift on system startup and applying default networking settings. 3 References for all non-optional MicroShift container images necessary for an offline deployment. Add the blueprint to the image builder by running the following command: USD sudo composer-cli blueprints push <microshift_blueprint.toml> 1 1 Replace <microshift_blueprint.toml> with the name of your TOML file. Verification Verify the image builder configuration listing only MicroShift packages by running the following command: USD sudo composer-cli blueprints depsolve <microshift_blueprint> | grep microshift 1 1 Replace <microshift_blueprint> with the name of your blueprint. Example output blueprint: microshift_blueprint v0.0.1 microshift-greenboot-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch microshift-networking-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.x86_64 microshift-release-info-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch microshift-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.x86_64 microshift-selinux-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch Optional: Verify the image builder configuration listing all components to be installed by running the following command: USD sudo composer-cli blueprints depsolve <microshift_blueprint> 1 1 Replace <microshift_blueprint> with the name of your blueprint. 1.5. Adding other packages to a blueprint Add the references for optional RPM packages to your ostree blueprint to enable them. Prerequisites You created an image builder blueprint file. Procedure Edit your ostree blueprint by running the following command: USD vi <microshift_blueprint.toml> 1 1 Replace <microshift_blueprint.toml> with the name of the blueprint file used for the MicroShift service. 
Add the following example text to your blueprint: [[packages]] 1 name = " <microshift-additional-package-name> " 2 version = "*" 1 Include one stanza for each additional service that you want to add. 2 Replace <microshift-additional-package-name> with the name of the RPM for the service you want to include. For example, microshift-olm . Next steps Add custom certificate authorities to the blueprint as needed. After you are done adding to your blueprint, you can apply the manifests to an active cluster by building a new ostree system and deploying it on the client: Create the ISO. Add the blueprint and build the ISO. Download the ISO and prepare it for use. Do any provisioning that is needed. Additional resources Blueprint Reference Creating a RHEL for Edge Container blueprint using image builder CLI Building OSTree image Installing Podman 1.6. Adding a certificate authority bundle MicroShift uses the host trust bundle when clients evaluate server certificates. You can also use a customized security certificate chain to improve the compatibility of your endpoint certificates with clients specific to your deployments. To do this, you can add a certificate authority (CA) bundle with root and intermediate certificates to the Red Hat Enterprise Linux for Edge (RHEL for Edge) system-wide trust store. 1.6.1. Adding a certificate authority bundle to an rpm-ostree image You can include additional trusted certificate authorities (CAs) to the Red Hat Enterprise Linux for Edge (RHEL for Edge) rpm-ostree image by adding them to the blueprint that you use to create the image. Using the following procedure sets up additional CAs to be trusted by the operating system when pulling images from an image registry. Note This procedure requires you to configure the CA bundle customizations in the blueprint, and then add steps to your Kickstart file to enable the bundle. In the following steps, data is the key, and <value> represents the PEM-encoded certificate. Prerequisites You have root user access to your build host. Your build host meets the image builder system requirements. You have installed and set up image builder and the composer-cli tool. Procedure Add the following custom values to your blueprint to add a directory. Add instructions to your blueprint on the host where the image is built to create the directory, for example, /etc/pki/ca-trust/source/anchors/ for your certificate bundles. [[customizations.directories]] path = "/etc/pki/ca-trust/source/anchors" After the image has booted, create the certificate bundles, for example, /etc/pki/ca-trust/source/anchors/cert1.pem : [[customizations.files]] path = "/etc/pki/ca-trust/source/anchors/cert1.pem" data = "<value>" To enable the certificate bundle in the system-wide trust store configuration, use the update-ca-trust command on the host where the image you are using has booted, for example: USD sudo update-ca-trust Note The update-ca-trust command might be included in the %post section of a Kickstart file used for MicroShift host installation so that all the necessary certificate trust is enabled on the first boot. You must configure the CA bundle customizations in the blueprint before adding steps to your Kickstart file to enable the bundle. 
%post # Update certificate trust storage in case new certificates were # installed at /etc/pki/ca-trust/source/anchors directory update-ca-trust %end Additional resources Creating the RHEL for Edge image Using Shared System Certificates (RHEL 9) Supported image customizations (RHEL 9) Creating and managing OSTree image updates Applying updates on an OSTree system 1.7. Creating the RHEL for Edge image with image builder Use the following procedure to create the ISO. The RHEL for Edge Installer image pulls the commit from the running container and creates an installable boot ISO with a Kickstart file configured to use the embedded rpm-ostree commit. Prerequisites Your build host meets the image builder system requirements. You installed and set up image builder and the composer-cli tool. You have root-user access to your build host. You installed the podman tool. Procedure Start an ostree container image build by running the following command: USD BUILDID=USD(sudo composer-cli compose start-ostree --ref "rhel/9/USD(uname -m)/edge" <microshift_blueprint> edge-container | awk '/^Compose/ {print USD2}') 1 1 Replace <microshift_blueprint> with the name of your blueprint. This command also returns the identification (ID) of the build for monitoring. You can check the status of the build periodically by running the following command: USD sudo composer-cli compose status Example output of a running build ID Status Time Blueprint Version Type Size cc3377ec-4643-4483-b0e7-6b0ad0ae6332 RUNNING Wed Jun 7 12:26:23 2023 microshift_blueprint 0.0.1 edge-container Example output of a completed build ID Status Time Blueprint Version Type Size cc3377ec-4643-4483-b0e7-6b0ad0ae6332 FINISHED Wed Jun 7 12:32:37 2023 microshift_blueprint 0.0.1 edge-container Note You can use the watch command to monitor your build if you are familiar with how to start and stop it. Download the container image using the ID and get the image ready for use by running the following command: USD sudo composer-cli compose image USD{BUILDID} Change the ownership of the downloaded container image to the current user by running the following command: USD sudo chown USD(whoami). USD{BUILDID}-container.tar Add read permissions for the current user to the image by running the following command: USD sudo chmod a+r USD{BUILDID}-container.tar Bootstrap a server on port 8085 for the ostree container image to be consumed by the ISO build by completing the following steps: Get the IMAGEID variable result by running the following command: USD IMAGEID=USD(cat < "./USD{BUILDID}-container.tar" | sudo podman load | grep -o -P '(?<=sha256[@:])[a-z0-9]*') Use the IMAGEID variable result to execute the podman command step by running the following command: USD sudo podman run -d --name=minimal-microshift-server -p 8085:8080 USD{IMAGEID} This command also returns the ID of the container saved in the IMAGEID variable for monitoring. Generate the installer blueprint file by running the following command: cat > microshift-installer.toml <<EOF name = "microshift-installer" description = "" version = "0.0.0" modules = [] groups = [] packages = [] EOF 1.8. 
Add the blueprint to image builder and build the ISO Add the blueprint to the image builder by running the following command: USD sudo composer-cli blueprints push microshift-installer.toml Start the ostree ISO build by running the following command: USD BUILDID=USD(sudo composer-cli compose start-ostree --url http://localhost:8085/repo/ --ref "rhel/9/USD(uname -m)/edge" microshift-installer edge-installer | awk '{print USD2}') This command also returns the identification (ID) of the build for monitoring. You can check the status of the build periodically by running the following command: USD sudo composer-cli compose status Example output for a running build ID Status Time Blueprint Version Type Size c793c24f-ca2c-4c79-b5b7-ba36f5078e8d RUNNING Wed Jun 7 13:22:20 2023 microshift-installer 0.0.0 edge-installer Example output for a completed build ID Status Time Blueprint Version Type Size c793c24f-ca2c-4c79-b5b7-ba36f5078e8d FINISHED Wed Jun 7 13:34:49 2023 microshift-installer 0.0.0 edge-installer 1.9. Download the ISO and prepare it for use Download the ISO using the ID by running the following command: USD sudo composer-cli compose image USD{BUILDID} Change the ownership of the downloaded ISO to the current user by running the following command: USD sudo chown USD(whoami). USD{BUILDID}-installer.iso Add read permissions for the current user to the ISO by running the following command: USD sudo chmod a+r USD{BUILDID}-installer.iso Next steps Provision a virtual machine with a Kickstart file. 1.9.1. Embedding a Kickstart file in an ISO You can use the Kickstart file provided with MicroShift, or you can update an existing RHEL for Edge Installer (ISO) Kickstart file. When ready, embed the Kickstart file into the ISO. Your Kickstart file must include detailed instructions about how to create a user and how to fetch and deploy the RHEL for Edge image. Prerequisites You created a RHEL for Edge Installer (ISO) image containing your RHEL for Edge commit with MicroShift. You have an existing Kickstart file ready for updating. You can use the microshift-starter.ks Kickstart file provided with the MicroShift RPMs. Procedure In the main section of the Kickstart file, update the setup of the filesystem such that it contains an LVM volume group called rhel with at least 10GB system root. Leave free space for the LVMS CSI driver to use for storing the data for your workloads. Example Kickstart file snippet for configuring the filesystem # Partition disk such that it contains an LVM volume group called `rhel` with a # 10GB+ system root but leaving free space for the LVMS CSI driver for storing data. 
# # For example, a 20GB disk would be partitioned in the following way: # # NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT # sda 8:0 0 20G 0 disk # ├─sda1 8:1 0 200M 0 part /boot/efi # ├─sda2 8:2 0 800M 0 part /boot # └─sda3 8:3 0 19G 0 part # └─rhel-root 253:0 0 10G 0 lvm /sysroot # ostreesetup --nogpg --osname=rhel --remote=edge \ --url=file:///run/install/repo/ostree/repo --ref=rhel/<RHEL VERSION NUMBER>/x86_64/edge zerombr clearpart --all --initlabel part /boot/efi --fstype=efi --size=200 part /boot --fstype=xfs --asprimary --size=800 # Uncomment this line to add a SWAP partition of the recommended size #part swap --fstype=swap --recommended part pv.01 --grow volgroup rhel pv.01 logvol / --vgname=rhel --fstype=xfs --size=10000 --name=root # To add users, use a line such as the following user --name=<YOUR_USER_NAME> \ --password=<YOUR_HASHED_PASSWORD> \ --iscrypted --groups=<YOUR_USER_GROUPS> In the %post section of the Kickstart file, add your pull secret and the mandatory firewall rules. Example Kickstart file snippet for adding the pull secret and firewall rules %post --log=/var/log/anaconda/post-install.log --erroronfail # Add the pull secret to CRI-O and set root user-only read/write permissions cat > /etc/crio/openshift-pull-secret << EOF YOUR_OPENSHIFT_PULL_SECRET_HERE EOF chmod 600 /etc/crio/openshift-pull-secret # Configure the firewall with the mandatory rules for MicroShift firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 firewall-offline-cmd --zone=trusted --add-source=169.254.169.1 %end Install the mkksiso tool by running the following command: USD sudo yum install -y lorax Update the ISO with your new Kickstart file by running the following command: USD sudo mkksiso <your_kickstart>.ks <your_installer>.iso <updated_installer>.iso Additional resources Composing, installing, and managing RHEL for Edge images System requirements for installing MicroShift Red Hat Hybrid Cloud Console pull secret Required firewall settings Creating Kickstart files A.1. Kickstart file format How to embed a Kickstart file into an ISO image 1.10. How to access the MicroShift cluster Use the procedures in this section to access the MicroShift cluster by using the OpenShift CLI ( oc ). You can access the cluster from either the same machine running the MicroShift service or from a remote location. You can use this access to observe and administrate workloads. When using the following steps, choose the kubeconfig file that contains the host name or IP address you want to connect to and place it in the relevant directory. 1.10.1. Accessing the MicroShift cluster locally Use the following procedure to access the MicroShift cluster locally by using a kubeconfig file. Prerequisites You have installed the oc binary. Procedure Optional: to create a ~/.kube/ folder if your Red Hat Enterprise Linux (RHEL) machine does not have one, run the following command: USD mkdir -p ~/.kube/ Copy the generated local access kubeconfig file to the ~/.kube/ directory by running the following command: USD sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config Update the permissions on your ~/.kube/config file by running the following command: USD chmod go-r ~/.kube/config Verification Verify that MicroShift is running by entering the following command: USD oc get all -A 1.10.2. Opening the firewall for remote access to the MicroShift cluster Use the following procedure to open the firewall so that a remote user can access the MicroShift cluster. 
This procedure must be completed before a workstation user can access the cluster remotely. For this procedure, user@microshift is the user on the MicroShift host machine and is responsible for setting up that machine so that it can be accessed by a remote user on a separate workstation. Prerequisites You have installed the oc binary. Your account has cluster administration privileges. Procedure As user@microshift on the MicroShift host, open the firewall port for the Kubernetes API server ( 6443/tcp ) by running the following command: [user@microshift]USD sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp && sudo firewall-cmd --reload Verification As user@microshift , verify that MicroShift is running by entering the following command: [user@microshift]USD oc get all -A 1.10.3. Accessing the MicroShift cluster remotely Use the following procedure to access the MicroShift cluster from a remote location by using a kubeconfig file. The user@workstation login is used to access the host machine remotely. The <user> value in the procedure is the name of the user that user@workstation logs in with to the MicroShift host. Prerequisites You have installed the oc binary. The user@microshift has opened the firewall from the local host. Procedure As user@workstation , create a ~/.kube/ folder if your Red Hat Enterprise Linux (RHEL) machine does not have one by running the following command: [user@workstation]USD mkdir -p ~/.kube/ As user@workstation , set a variable for the hostname of your MicroShift host by running the following command: [user@workstation]USD MICROSHIFT_MACHINE=<name or IP address of MicroShift machine> As user@workstation , copy the generated kubeconfig file that contains the host name or IP address you want to connect with from the RHEL machine running MicroShift to your local machine by running the following command: [user@workstation]USD ssh <user>@USDMICROSHIFT_MACHINE "sudo cat /var/lib/microshift/resources/kubeadmin/USDMICROSHIFT_MACHINE/kubeconfig" > ~/.kube/config Note To generate the kubeconfig files for this step, see Generating additional kubeconfig files for remote access . As user@workstation , update the permissions on your ~/.kube/config file by running the following command: USD chmod go-r ~/.kube/config Verification As user@workstation , verify that MicroShift is running by entering the following command: [user@workstation]USD oc get all -A Additional resources Generating additional kubeconfig files for remote access
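To double-check the remote access setup, you can confirm on the MicroShift host that the API server port is open and then test the connection from the workstation. This is a sketch that follows the conventions used above; adjust the kubeconfig path if you store it elsewhere:
sudo firewall-cmd --zone=public --list-ports
oc --kubeconfig ~/.kube/config get nodes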
|
[
"sudo mkdir -p /etc/osbuild-composer/repositories",
"sudo cp /usr/share/osbuild-composer/repositories/rhel-9.4.json /etc/osbuild-composer/repositories/rhel-9.4.json",
"\"baseurl\": \"https://cdn.redhat.com/content/eus/rhel<9>/<9.4>//baseos/os\", 1",
"sudo sed -i \"s,dist/rhel<9>/<9.4>/USD(uname -m)/baseos/,eus/rhel<9>/<9.4>/USD(uname -m)/baseos/,g\" /etc/osbuild-composer/repositories/rhel-<9.4>.json 1",
"\"baseurl\": \"https://cdn.redhat.com/content/eus/rhel<9>/<9.4>//appstream/os\", 1",
"sudo sed -i \"s,dist/rhel<9>/<9.4>/USD(uname -m)/appstream/,eus/rhel<9>/<9.4>/USD(uname -m)/appstream/,g\" /etc/osbuild-composer/repositories/rhel-<9.4>.json 1",
"sudo composer-cli sources info baseos | grep 'url ='",
"url = \"https://cdn.redhat.com/content/eus/rhel9/9.4/x86_64/baseos/os\"",
"sudo composer-cli sources info appstream | grep 'url ='",
"url = \"https://cdn.redhat.com/content/eus/rhel9/9.4/x86_64/appstream/os\"",
"cat > rhocp-4.18.toml <<EOF id = \"rhocp-4.18\" name = \"Red Hat OpenShift Container Platform 4.18 for RHEL 9\" type = \"yum-baseurl\" url = \"https://cdn.redhat.com/content/dist/layered/rhel9/USD(uname -m)/rhocp/4.18/os\" check_gpg = true check_ssl = true system = false rhsm = true EOF",
"cat > fast-datapath.toml <<EOF id = \"fast-datapath\" name = \"Fast Datapath for RHEL 9\" type = \"yum-baseurl\" url = \"https://cdn.redhat.com/content/dist/layered/rhel9/USD(uname -m)/fast-datapath/os\" check_gpg = true check_ssl = true system = false rhsm = true EOF",
"sudo composer-cli sources add rhocp-4.18.toml",
"sudo composer-cli sources add fast-datapath.toml",
"sudo composer-cli sources list",
"appstream baseos fast-datapath rhocp-4.18",
"cat > <microshift_blueprint.toml> <<EOF 1 name = \" <microshift_blueprint> \" 2 description = \"\" version = \"0.0.1\" modules = [] groups = [] [[packages]] name = \"microshift\" version = \"4.18.1\" 3 [customizations.services] enabled = [\"microshift\"] EOF",
"name = \"microshift_blueprint\" description = \"MicroShift 4.17.1 on x86_64 platform\" version = \"0.0.1\" modules = [] groups = [] [[packages]] 1 name = \"microshift\" version = \"4.17.1\" [customizations.services] 2 enabled = [\"microshift\"] [customizations.firewall] ports = [\"22:tcp\", \"80:tcp\", \"443:tcp\", \"5353:udp\", \"6443:tcp\", \"30000-32767:tcp\", \"30000-32767:udp\"] [[containers]] 3 source = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f41e79c17e8b41f1b0a5a32c3e2dd7cd15b8274554d3f1ba12b2598a347475f4\" [[containers]] source = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbc65f1fba7d92b36cf7514cd130fe83a9bd211005ddb23a8dc479e0eea645fd\" ... EOF",
"sudo composer-cli blueprints push <microshift_blueprint.toml> 1",
"sudo composer-cli blueprints depsolve <microshift_blueprint> | grep microshift 1",
"blueprint: microshift_blueprint v0.0.1 microshift-greenboot-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch microshift-networking-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.x86_64 microshift-release-info-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch microshift-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.x86_64 microshift-selinux-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch",
"sudo composer-cli blueprints depsolve <microshift_blueprint> 1",
"vi <microshift_blueprint.toml> 1",
"[[packages]] 1 name = \" <microshift-additional-package-name> \" 2 version = \"*\"",
"[[customizations.directories]] path = \"/etc/pki/ca-trust/source/anchors\"",
"[[customizations.files]] path = \"/etc/pki/ca-trust/source/anchors/cert1.pem\" data = \"<value>\"",
"sudo update-ca-trust",
"%post Update certificate trust storage in case new certificates were installed at /etc/pki/ca-trust/source/anchors directory update-ca-trust %end",
"BUILDID=USD(sudo composer-cli compose start-ostree --ref \"rhel/{op-system-version-major}/USD(uname -m)/edge\" <microshift_blueprint> edge-container | awk '/^Compose/ {print USD2}') 1",
"sudo composer-cli compose status",
"ID Status Time Blueprint Version Type Size cc3377ec-4643-4483-b0e7-6b0ad0ae6332 RUNNING Wed Jun 7 12:26:23 2023 microshift_blueprint 0.0.1 edge-container",
"ID Status Time Blueprint Version Type Size cc3377ec-4643-4483-b0e7-6b0ad0ae6332 FINISHED Wed Jun 7 12:32:37 2023 microshift_blueprint 0.0.1 edge-container",
"sudo composer-cli compose image USD{BUILDID}",
"sudo chown USD(whoami). USD{BUILDID}-container.tar",
"sudo chmod a+r USD{BUILDID}-container.tar",
"IMAGEID=USD(cat < \"./USD{BUILDID}-container.tar\" | sudo podman load | grep -o -P '(?<=sha256[@:])[a-z0-9]*')",
"sudo podman run -d --name=minimal-microshift-server -p 8085:8080 USD{IMAGEID}",
"cat > microshift-installer.toml <<EOF name = \"microshift-installer\" description = \"\" version = \"0.0.0\" modules = [] groups = [] packages = [] EOF",
"sudo composer-cli blueprints push microshift-installer.toml",
"BUILDID=USD(sudo composer-cli compose start-ostree --url http://localhost:8085/repo/ --ref \"rhel/9/USD(uname -m)/edge\" microshift-installer edge-installer | awk '{print USD2}')",
"sudo composer-cli compose status",
"ID Status Time Blueprint Version Type Size c793c24f-ca2c-4c79-b5b7-ba36f5078e8d RUNNING Wed Jun 7 13:22:20 2023 microshift-installer 0.0.0 edge-installer",
"ID Status Time Blueprint Version Type Size c793c24f-ca2c-4c79-b5b7-ba36f5078e8d FINISHED Wed Jun 7 13:34:49 2023 microshift-installer 0.0.0 edge-installer",
"sudo composer-cli compose image USD{BUILDID}",
"sudo chown USD(whoami). USD{BUILDID}-installer.iso",
"sudo chmod a+r USD{BUILDID}-installer.iso",
"Partition disk such that it contains an LVM volume group called `rhel` with a 10GB+ system root but leaving free space for the LVMS CSI driver for storing data. # For example, a 20GB disk would be partitioned in the following way: # NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 20G 0 disk ├─sda1 8:1 0 200M 0 part /boot/efi ├─sda1 8:1 0 800M 0 part /boot └─sda2 8:2 0 19G 0 part └─rhel-root 253:0 0 10G 0 lvm /sysroot # ostreesetup --nogpg --osname=rhel --remote=edge --url=file:///run/install/repo/ostree/repo --ref=rhel/<RHEL VERSION NUMBER>/x86_64/edge zerombr clearpart --all --initlabel part /boot/efi --fstype=efi --size=200 part /boot --fstype=xfs --asprimary --size=800 Uncomment this line to add a SWAP partition of the recommended size #part swap --fstype=swap --recommended part pv.01 --grow volgroup rhel pv.01 logvol / --vgname=rhel --fstype=xfs --size=10000 --name=root To add users, use a line such as the following user --name=<YOUR_USER_NAME> --password=<YOUR_HASHED_PASSWORD> --iscrypted --groups=<YOUR_USER_GROUPS>",
"%post --log=/var/log/anaconda/post-install.log --erroronfail Add the pull secret to CRI-O and set root user-only read/write permissions cat > /etc/crio/openshift-pull-secret << EOF YOUR_OPENSHIFT_PULL_SECRET_HERE EOF chmod 600 /etc/crio/openshift-pull-secret Configure the firewall with the mandatory rules for MicroShift firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 firewall-offline-cmd --zone=trusted --add-source=169.254.169.1 %end",
"sudo yum install -y lorax",
"sudo mkksiso <your_kickstart>.ks <your_installer>.iso <updated_installer>.iso",
"mkdir -p ~/.kube/",
"sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config",
"chmod go-r ~/.kube/config",
"oc get all -A",
"[user@microshift]USD sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp && sudo firewall-cmd --reload",
"[user@microshift]USD oc get all -A",
"[user@workstation]USD mkdir -p ~/.kube/",
"[user@workstation]USD MICROSHIFT_MACHINE=<name or IP address of MicroShift machine>",
"[user@workstation]USD ssh <user>@USDMICROSHIFT_MACHINE \"sudo cat /var/lib/microshift/resources/kubeadmin/USDMICROSHIFT_MACHINE/kubeconfig\" > ~/.kube/config",
"chmod go-r ~/.kube/config",
"[user@workstation]USD oc get all -A"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/embedding_in_a_rhel_for_edge_image/microshift-embed-in-rpm-ostree
|
Configuration Guide
|
Configuration Guide Red Hat JBoss Enterprise Application Platform 7.4 Instructions for setting up and maintaining Red Hat JBoss Enterprise Application Platform, including running applications and services. Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/index
|
Preface
|
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) using Red Hat OpenStack Platform clusters. Note Both internal and external OpenShift Data Foundation clusters are supported on Red Hat OpenStack Platform. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the appropriate deployment process based on your requirement: Internal mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploy standalone Multicloud Object Gateway component External mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/preface-ocs-osp
|
Chapter 60. TelemetryService
|
Chapter 60. TelemetryService 60.1. GetConfig GET /v1/telemetry/config 60.1.1. Description 60.1.2. Parameters 60.1.3. Return Type CentralTelemetryConfig 60.1.4. Content Type application/json 60.1.5. Responses Table 60.1. HTTP Response Codes Code Message Datatype 200 A successful response. CentralTelemetryConfig 0 An unexpected error response. RuntimeError 60.1.6. Samples 60.1.7. Common object reference 60.1.7.1. CentralTelemetryConfig Field Name Required Nullable Type Description Format userId String endpoint String storageKeyV1 String 60.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 60.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 60.1.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 60.2. GetTelemetryConfiguration GET /v1/telemetry/configure 60.2.1. Description 60.2.2. Parameters 60.2.3. Return Type StorageTelemetryConfiguration 60.2.4. 
Content Type application/json 60.2.5. Responses Table 60.2. HTTP Response Codes Code Message Datatype 200 A successful response. StorageTelemetryConfiguration 0 An unexpected error response. RuntimeError 60.2.6. Samples 60.2.7. Common object reference 60.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 60.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 60.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 60.2.7.3. StorageTelemetryConfiguration Field Name Required Nullable Type Description Format enabled Boolean lastSetTime Date date-time 60.3. ConfigureTelemetry PUT /v1/telemetry/configure 60.3.1. Description 60.3.2. Parameters 60.3.2.1. Body Parameter Name Description Required Default Pattern body V1ConfigureTelemetryRequest X 60.3.3. Return Type StorageTelemetryConfiguration 60.3.4. Content Type application/json 60.3.5. Responses Table 60.3. 
HTTP Response Codes Code Message Datatype 200 A successful response. StorageTelemetryConfiguration 0 An unexpected error response. RuntimeError 60.3.6. Samples 60.3.7. Common object reference 60.3.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 60.3.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 60.3.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 60.3.7.3. StorageTelemetryConfiguration Field Name Required Nullable Type Description Format enabled Boolean lastSetTime Date date-time 60.3.7.4. V1ConfigureTelemetryRequest Field Name Required Nullable Type Description Format enabled Boolean
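To make the request and response shapes concrete, the configuration endpoint can be exercised with curl. This is a sketch, not part of the reference itself: the Central address and API token are placeholders, and the -k flag is only appropriate against test systems that use self-signed certificates.
# Read the current telemetry configuration (returns a StorageTelemetryConfiguration)
curl -sk -H "Authorization: Bearer <api_token>" https://<central_address>/v1/telemetry/configure
# Enable telemetry; the body is a V1ConfigureTelemetryRequest, which has the single field "enabled"
curl -sk -X PUT -H "Authorization: Bearer <api_token>" -H "Content-Type: application/json" -d '{"enabled": true}' https://<central_address>/v1/telemetry/configure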
|
[
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/telemetryservice
|
Chapter 1. Overview of AMQ Streams
|
Chapter 1. Overview of AMQ Streams Red Hat AMQ Streams is a massively-scalable, distributed, and high-performance data streaming platform based on the Apache ZooKeeper and Apache Kafka projects. The main components comprise: Kafka Broker Messaging broker responsible for delivering records from producing clients to consuming clients. Apache ZooKeeper is a core dependency for Kafka, providing a cluster coordination service for highly reliable distributed coordination. Kafka Streams API API for writing stream processor applications. Producer and Consumer APIs Java-based APIs for producing and consuming messages to and from Kafka brokers. Kafka Bridge AMQ Streams Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. Kafka Connect A toolkit for streaming data between Kafka brokers and other systems using Connector plugins. Kafka MirrorMaker Replicates data between two Kafka clusters, within or across data centers. Kafka Exporter An exporter used in the extraction of Kafka metrics data for monitoring. A cluster of Kafka brokers is the hub connecting all these components. The broker uses Apache ZooKeeper for storing configuration data and for cluster coordination. Before running Apache Kafka, an Apache ZooKeeper cluster has to be ready. Figure 1.1. AMQ Streams architecture 1.1. Kafka capabilities The underlying data stream-processing capabilities and component architecture of Kafka can deliver: Microservices and other applications to share data with extremely high throughput and low latency Message ordering guarantees Message rewind/replay from data storage to reconstruct an application state Message compaction to remove old records when using a key-value log Horizontal scalability in a cluster configuration Replication of data to control fault tolerance Retention of high volumes of data for immediate access 1.2. Kafka use cases Kafka's capabilities make it suitable for: Event-driven architectures Event sourcing to capture changes to the state of an application as a log of events Message brokering Website activity tracking Operational monitoring through metrics Log collection and aggregation Commit logs for distributed systems Stream processing so that applications can respond to data in real time 1.3. Supported Configurations In order to be running in a supported configuration, AMQ Streams must be running in one of the following JVM versions and on one of the supported operating systems. Table 1.1. List of supported Java Virtual Machines Java Virtual Machine Version OpenJDK 1.8, 11 OracleJDK 1.8, 11 IBM JDK 1.8 Table 1.2. List of supported Operating Systems Operating System Architecture Version Red Hat Enterprise Linux x86_64 7.x, 8.x 1.4. Document conventions Replaceables In this document, replaceable text is styled in monospace , with italics, uppercase, and hyphens. For example, in the following code, you will want to replace BOOTSTRAP-ADDRESS and TOPIC-NAME with your own address and topic name: bin/kafka-console-consumer.sh --bootstrap-server BOOTSTRAP-ADDRESS --topic TOPIC-NAME --from-beginning
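For example, substituting sample values for the replaceables, a consumer reading a topic named my-topic from a broker listening on localhost:9092 would be started as follows; the address and topic name are illustrative only:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning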
|
[
"bin/kafka-console-consumer.sh --bootstrap-server BOOTSTRAP-ADDRESS --topic TOPIC-NAME --from-beginning"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_rhel/overview-str
|
Chapter 3. Configuring audit logging in Elytron
|
Chapter 3. Configuring audit logging in Elytron You can use Elytron to complete security audits on triggering events. Security auditing refers to triggering events, such as writing to a log, in response to an authorization or authentication attempt. The type of security audit performed on events depends on your security realm configuration. 3.1. Elytron audit logging After you enable audit logging with the elytron subsystem, you can log Elytron authentication and authorization events within the application server. Elytron stores audit log entries in either JSON or SIMPLE formats. Use SIMPLE for human readable text format or JSON for storing individual events in JSON . Elytron audit logging differs from other types of audit logging, such as audit logging for the JBoss EAP management interfaces. Elytron disables audit logging by default, however, you can enable audit logging by configuring any of the following log handlers. You can also add the log handler to a security domain. File audit logging For more information, see Enabling file audit logging in Elytron . Periodic rotating file audit logging For more information, see Enabling periodic rotating file audit logging in Elytron . Size rotating file audit logging For more information, see Enabling size rotating file audit logging in Elytron . syslog audit logging For more information, see Enabling syslog audit logging in Elytron . Custom audit logging For more information, see Using custom security event listeners in Elytron . You can use the aggregate-security-event-listener resource to send security events to more destinations, such as loggers. The aggregate-security-event-listener resource delivers all events to all listeners specified in the aggregate listener definition. 3.2. Enabling file audit logging in Elytron File audit logging stores audit log messages in a single file within your file system. By default, Elytron specifies local-audit as the file audit logger. You must enable local-audit so that it can write Elytron audit logs to EAP_HOME/standalone/log/audit.log on a standalone server or EAP_HOME/domain/log/audit.log for a managed domain. Prerequisites You have secured an application. For more information, see Creating an aggregate-realm in Elytron . Procedure Create a file audit log. Syntax Example Add the file audit log to a security domain. Syntax Example Verification In a browser, log in to your secured application. For example, to log in to the application created in Using a security domain to authenticate and authorize application users , navigate to http://localhost:8080/simple-webapp-example/secured and log in. Navigate to the directory configured to store the audit log. If you use the example commands in the procedure, the directory is EAP_HOME /standalone/log. Note that a file called file-audit.log is created. It contains the logs of the events triggered by logging in to the application. Example file-audit.log file Additional resources file-audit-log attributes 3.3. Enabling periodic rotating file audit logging in Elytron You can use the elytron subsystem to enable periodic rotating file audit logging for your standalone server or a server running as a managed domain. Periodic rotating file audit logging automatically rotates audit log files based on your configured schedule. Periodic rotating file audit logging is similar to default file audit logging, but periodic rotating file audit logging contains an additional attribute: suffix . 
The value of the suffix attribute is a date specified using the java.time.format.DateTimeFormatter format, such as .yyyy-MM-dd . Elytron automatically calculates the period of the rotation from the value provided with the suffix. The elytron subsystem appends the suffix to the end of a log file name. Prerequisites You have secured an application. For more information, see Creating an aggregate-realm in Elytron . Procedure Create a periodic rotating file audit log. Syntax Example Add the periodic rotating file audit logger to a security domain. Syntax Example Verification In a browser, log in to your secured application. For example, to log in to the application created in Using a security domain to authenticate and authorize application users , navigate to http://localhost:8080/simple-webapp-example/secured and log in. Navigate to the directory configured to store the audit log. If you use the example commands in the procedure, the directory is EAP_HOME /standalone/log. Note that a file called periodic-file-audit.log is created. It contains the logs of the events triggered by logging in to the application. Example periodic-file-audit.log file Additional resources periodic-rotating-file-audit-log attributes 3.4. Enabling size rotating file audit logging in Elytron You can use the elytron subsystem to enable size rotating file audit logging for your standalone server or a server running as a managed domain. Size rotating file audit logging automatically rotates audit log files when the log file reaches a configured file size. Size rotating file audit logging is similar to default file audit logging, but the size rotating file audit logging contains additional attributes. When the log file size exceeds the limit defined by the rotate-size attribute, Elytron appends the suffix .1 to the end of the current file and creates a new log file. For each existing log file, Elytron increments the suffix by one. For example, Elytron renames audit_log.1 to audit_log.2 . Elytron continues the increments until the number of log files reaches the maximum defined by max-backup-index . When a log file exceeds the max-backup-index value, Elytron removes the file. For example, if max-backup-index is set to 98, the audit_log.99 file would be over the limit. Prerequisites You have secured an application. For more information, see Creating an aggregate-realm in Elytron . Procedure Create a size rotating file audit log. Syntax Example Add the size rotating audit logger to a security domain. Syntax Example Verification In a browser, log in to your secured application. For example, to log in to the application created in Using a security domain to authenticate and authorize application users , navigate to http://localhost:8080/simple-webapp-example/secured and log in. Navigate to the directory configured to store the audit log. If you use the example commands in the procedure, the directory is EAP_HOME /standalone/log. Note that a file called size-file-audit.log is created. It contains the logs of the events triggered by logging in to the application. Example size-file-audit.log file Additional resources size-rotating-file-audit-log attributes 3.5. Enabling syslog audit logging in Elytron You can use the elytron subsystem to enable syslog audit logging for your standalone server or a server running as a managed domain.
When you use syslog audit logging, you send the logging results to a syslog server, which provides more security options than logging to a local file. The syslog handler specifies parameters used to connect to a syslog server, such as the syslog server's host name and the port on which the syslog server listens. You can define multiple syslog handlers and activate them simultaneously. Supported log formats include RFC5424 and RFC3164 . Supported transmission protocols include UDP, TCP, and TCP with SSL. When you define a syslog for the first instance, the logger sends an INFORMATIONAL priority event containing the message to the syslog server, as demonstrated in the following example: <format> refers to the Request for Comments (RFC) format configured for the audit logging handler, which defaults to RFC5424 . Prerequisites You have secured an application. For more information, see Creating an aggregate-realm in Elytron . Procedure Add a syslog handler. Syntax You can also send logs to a syslog server over TLS: Syntax for syslog configuration to send logs over TLS Add the syslog audit logger to a security domain. Syntax Example Additional resources syslog-audit-log attributes Using a client-ssl-context The Syslog Protocol The BSD syslog Protocol 3.6. Using custom security event listeners in Elytron You can use Elytron to define a custom event listener. A custom event listener processes incoming security events. You can use the event listener for custom audit logging purposes, or you can use the event listener to authenticate users against your internal identity storage. Important The ability to add and remove modules by using the module management CLI command is provided as a Technology Preview feature only. The module command is not appropriate for use in a managed domain or when connecting with a remote management CLI. You must manually add or remove modules in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Prerequisites You have secured an application. For more information, see Creating an aggregate-realm in Elytron . Procedure Create a class that implements the java.util.function.Consumer<org.wildfly.security.auth.server.event.SecurityEvent> interface. Example of creating a Java class that uses the specified interface: The Java class in the example prints a message whenever a user succeeds or fails authentication. Add the JAR file that provides the custom event listener as a module to JBoss EAP. The following is an example of the management CLI command that adds a custom event listener as a module to Elytron. Example of using the module command to add a custom event listener as a module to Elytron: Reference the custom event listener in the security domain. Example of referencing a custom event listener in ApplicationDomain : Restart the server. The event listener receives security events from the specified security domain. Additional resources Create a Custom Module Manually Remove a Custom Module Manually Add a Custom Component to Elytron
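As a concrete sketch of the syslog syntax above, the following management CLI commands create a handler named exampleSyslog and attach it to exampleSecurityDomain. The syslog server address 192.168.1.100, port 514, the UDP transport, and the RFC5424 format are example values; substitute the values for your own syslog server.
/subsystem=elytron/syslog-audit-log=exampleSyslog:add(host-name=jboss-eap-host,port=514,server-address=192.168.1.100,format=RFC5424,transport=UDP)
/subsystem=elytron/security-domain=exampleSecurityDomain:write-attribute(name=security-event-listener,value=exampleSyslog)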
|
[
"/subsystem=elytron/file-audit-log= <audit_log_name> :add(path=\" <path_to_log_file> \",format= <format_type> ,synchronized= <whether_to_log_immediately> )",
"/subsystem=elytron/file-audit-log=exampleFileAuditLog:add(path=\"file-audit.log\",relative-to=jboss.server.log.dir,format=SIMPLE,synchronized=true)",
"/subsystem=elytron/security-domain= <security_domain_name> :write-attribute(name=security-event-listener,value= <audit_log_name> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:write-attribute(name=security-event-listener,value=exampleFileAuditLog)",
"2023-10-24 23:31:04,WARNING,{event=SecurityPermissionCheckSuccessfulEvent,event-time=2023-10-24 23:31:04,security-identity=[name=user1,creation-time=2023-10-24 23:31:04],success=true,permission=[type=org.wildfly.security.auth.permission.LoginPermission,actions=,name=]} 2023-10-24 23:31:04,WARNING,{event=SecurityAuthenticationSuccessfulEvent,event-time=2023-10-24 23:31:04,security-identity=[name=user1,creation-time=2023-10-24 23:31:04],success=true}",
"/subsystem=elytron/periodic-rotating-file-audit-log= <periodic_audit_log_name> :add(path=\" <periodic_audit_log_filename> \",format= <record_format> ,synchronized= <whether_to_log_immediately> ,suffix=\" <suffix_in_DateTimeFormatter_format> \")",
"/subsystem=elytron/periodic-rotating-file-audit-log=examplePreiodicFileAuditLog:add(path=\"periodic-file-audit.log\",relative-to=jboss.server.log.dir,format=SIMPLE,synchronized=true,suffix=\"yyyy-MM-dd\")",
"/subsystem=elytron/security-domain= <security_domain_name> :write-attribute(name=security-event-listener,value= <periodic_audit_log_name> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:write-attribute(name=security-event-listener,value=examplePreiodicFileAuditLog)",
"2023-10-24 23:31:04,WARNING,{event=SecurityPermissionCheckSuccessfulEvent,event-time=2023-10-24 23:31:04,security-identity=[name=user1,creation-time=2023-10-24 23:31:04],success=true,permission=[type=org.wildfly.security.auth.permission.LoginPermission,actions=,name=]} 2023-10-24 23:31:04,WARNING,{event=SecurityAuthenticationSuccessfulEvent,event-time=2023-10-24 23:31:04,security-identity=[name=user1,creation-time=2023-10-24 23:31:04],success=true}",
"/subsystem=elytron/size-rotating-file-audit-log= <audit_log_name> :add(path=\"<path_to_log_file>\",format= <record_format> ,synchronized= <whether_to_log_immediately> ,rotate-size=\" <max_file_size_before_rotation> \",max-backup-index= <max_number_of_backup_files> )",
"/subsystem=elytron/size-rotating-file-audit-log=exampleSizeFileAuditLog:add(path=\"size-file-audit.log\",relative-to=jboss.server.log.dir,format=SIMPLE,synchronized=true,rotate-size=\"10m\",max-backup-index=10)",
"/subsystem=elytron/security-domain= <domain_size_logger> :write-attribute(name=security-event-listener,value= <audit_log_name> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:write-attribute(name=security-event-listener,value=exampleSizeFileAuditLog)",
"2023-10-24 23:31:04,WARNING,{event=SecurityPermissionCheckSuccessfulEvent,event-time=2023-10-24 23:31:04,security-identity=[name=user1,creation-time=2023-10-24 23:31:04],success=true,permission=[type=org.wildfly.security.auth.permission.LoginPermission,actions=,name=]} 2023-10-24 23:31:04,WARNING,{event=SecurityAuthenticationSuccessfulEvent,event-time=2023-10-24 23:31:04,security-identity=[name=user1,creation-time=2023-10-24 23:31:04],success=true}",
"\"Elytron audit logging enabled with RFC format: <format>\"",
"/subsystem=elytron/syslog-audit-log= <syslog_audit_log_name> :add(host-name= <record_host_name> ,port= <syslog_server_port_number> ,server-address= <syslog_server_address> ,format= <record_format> , transport= <transport_layer_protocol> )",
"/subsystem=elytron/syslog-audit-log= <syslog_audit_log_name> :add(transport=SSL_TCP,server-address= <syslog_server_address> ,port= <syslog_server_port_number> ,host-name= <record_host_name> ,ssl-context= <client_ssl_context> )",
"/subsystem=elytron/security-domain= <security_domain_name> :write-attribute(name=security-event-listener,value= <syslog_audit_log_name> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:write-attribute(name=security-event-listener,value=exampleSyslog)",
"public class MySecurityEventListener implements Consumer<SecurityEvent> { public void accept(SecurityEvent securityEvent) { if (securityEvent instanceof SecurityAuthenticationSuccessfulEvent) { System.err.printf(\"Authenticated user \\\"%s\\\"\\n\", securityEvent.getSecurityIdentity().getPrincipal()); } else if (securityEvent instanceof SecurityAuthenticationFailedEvent) { System.err.printf(\"Failed authentication as user \\\"%s\\\"\\n\", ((SecurityAuthenticationFailedEvent)securityEvent).getPrincipal()); } } }",
"/subsystem=elytron/custom-security-event-listener= <listener_name> :add(module= <module_name> , class-name= <class_name> )",
"/subsystem=elytron/security-domain= <domain_name> :write-attribute(name=security-event-listener, value= <listener_name> )",
"reload"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/securing_applications_and_management_interfaces_using_multiple_identity_stores/configuring-audit-logging-in-elytron_default
|
9.2. Scalability Considerations
|
9.2. Scalability Considerations Although you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 10.1, "JBoss Data Virtualization Settings" ), this section provides some additional information about those settings related to scalability. buffer-service-processor-batch-size Default is 256. This property specifies the maximum row count of a batch sent internally within the query processor. Additional considerations are needed if extremely large VM sizes and/or datasets are being used. JBoss Data Virtualization has a non-negligible amount of overhead per batch/table page on the order of 100-200 bytes. Depending on the data types involved, each full batch/table page will represent a variable number of rows (a power of two multiple above or below the processor batch size). If you are working with extremely large datasets and you run into memory issues, consider increasing the buffer-service-processor-batch-size property to force the allocation of larger batches and table pages. buffer-service-max-storage-object-size Default is 8388608 or 8MB. This value is the maximum size of a buffered managed object in bytes and represents the individual batch page size. If buffer-service-processor-batch-size is increased or you are dealing with extremely wide result sets, then the default setting of 8MB for the buffer-service-max-storage-object-size may be too low. The inline LOBs also contribute to this size if the batch contains them. The sizing for buffer-service-max-storage-object-size is in terms of serialized size, which will be much closer to the raw data size than the Java memory footprint estimation used for buffer-service-max-reserve-kb . buffer-service-max-storage-object-size should not be set too large relative to buffer-service-memory-buffer-space since it will reduce the performance of the memory buffer. The memory buffer supports only 1 concurrent writer for each buffer-service-max-storage-object-size of the buffer-service-memory-buffer-space . Note JBoss Data Virtualization temporary tables (also used for internal materialization) can only support 2^31-1 rows per table. buffer-service-memory-buffer-space Default is -1. This controls the amount of on or off heap memory allocated as byte buffers for use by the JBoss Data Virtualization buffer manager. This setting defaults to -1, which automatically determines a setting based upon whether it is on or off heap and the value for buffer-service-max-reserve-kb . Note When left at the default setting, the calculated memory buffer space will be approximately one quarter of the buffer-service-max-reserve-kb setting. If the memory buffer is off heap and the buffer-service-max-reserve-kb setting is automatically calculated, the memory buffer space will be subtracted out of the effective buffer-service-max-reserve-kb . buffer-service-memory-buffer-off-heap Default is false. Determines whether to take advantage of the buffer manager memory buffer to access system memory without allocating it to the heap. Setting buffer-service-memory-buffer-off-heap to true will allocate the JBoss Data Virtualization memory buffer off heap. Depending on whether your installation is dedicated to JBoss Data Virtualization and the amount of system memory available, this may be preferred over on-heap allocation. The primary benefit of off-heap memory is additional memory usage for JBoss Data Virtualization without additional garbage collection tuning.
This becomes especially important in situations where more than 32GB of memory is desired for the JVM. Note that when using off-heap allocation, the memory must still be available to the java process and that setting the value of buffer-service-memory-buffer-space too high may cause the JVM to swap rather than reside in memory. With large off-heap buffer sizes (greater than several gigabytes) you may also need to adjust JVM settings. For Sun JVMs the relevant JVM settings are MaxDirectMemorySize and UseLargePages . For example adding: to the JVM process arguments would allow for an effective allocation of an (approximately) 11GB JBoss Data Virtualization memory buffer (the buffer-service-memory-buffer-space property) accounting for any additional direct memory that may be needed by JBoss EAP or applications running within it.
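For example, on a standalone server these JVM options are typically appended to JAVA_OPTS in EAP_HOME/bin/standalone.conf; the 12g figure is the example value used in this section, not a recommendation:
# EAP_HOME/bin/standalone.conf - allow a large off-heap memory buffer for JBoss Data Virtualization
JAVA_OPTS="$JAVA_OPTS -XX:MaxDirectMemorySize=12g -XX:+UseLargePages"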
|
[
"-XX:MaxDirectMemorySize=12g -XX:+UseLargePages"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/scalability_considerations
|
Chapter 36. StatefulSetTemplate schema reference
|
Chapter 36. StatefulSetTemplate schema reference Used in: KafkaClusterTemplate , ZookeeperClusterTemplate Property Property type Description metadata MetadataTemplate Metadata applied to the resource. podManagementPolicy string (one of [OrderedReady, Parallel]) PodManagementPolicy which will be used for this StatefulSet. Valid values are Parallel and OrderedReady . Defaults to Parallel .
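As a sketch of where this template sits in a Kafka custom resource, the following snippet sets podManagementPolicy to OrderedReady under spec.kafka.template.statefulset. The metadata labels are illustrative, and the rest of the required broker configuration (listeners, storage, and so on) is omitted here; check the field layout against the CRD installed in your cluster.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # required broker configuration omitted for brevity
    template:
      statefulset:
        metadata:
          labels:
            app.kubernetes.io/part-of: my-cluster
        podManagementPolicy: OrderedReady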
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-statefulsettemplate-reference
|
Red Hat Data Grid
|
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/hot_rod_java_client_guide/red-hat-data-grid
|
Part V. Using the Showcase application for case management
|
Part V. Using the Showcase application for case management As a case worker or process administrator, you can use the Showcase application to manage and monitor case management applications while case work is carried out in Business Central. Case management differs from business process management (BPM) in that it focuses on the actual data being handled throughout the case and less on the sequence of steps taken to complete a goal. Case data is the most important piece of information in case handling, while business context and decision-making is in the hands of the human case worker. Use this document to install the Showcase application and start a case instance using the IT_Orders sample case management project in Business Central. Use Business Central to complete the tasks required to complete an IT Orders case. Prerequisites Red Hat JBoss Enterprise Application Platform 7.4 is installed. For installation information, see Red Hat JBoss Enterprise Application Platform 7.4 Installation Guide . Red Hat Process Automation Manager is installed on Red Hat JBoss EAP and configured with KIE Server. For more information see Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . KieLoginModule is configured in standalone-full.xml . This is required to connect to KIE Server. For more information about configuring KIE Server, see Planning a Red Hat Process Automation Manager installation . Red Hat Process Automation Manager is running and you can log in to Business Central with a user that has both kie-server and user roles. For more information about roles, see Planning a Red Hat Process Automation Manager installation . The IT_Orders sample project has been imported in Business Central and deployed to KIE Server. For more information about case management, see Getting started with case management .
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/assembly-showcase-application
|
Chapter 1. Introduction to RHEL System Roles
|
Chapter 1. Introduction to RHEL System Roles By using RHEL System Roles, you can remotely manage the system configurations of multiple RHEL systems across major versions of RHEL. RHEL System Roles is a collection of Ansible roles and modules. To use it to configure systems, you must use the following components: Control node A control node is the system from which you run Ansible commands and playbooks. Your control node can be an Ansible Automation Platform, Red Hat Satellite, or a RHEL 9, 8, or 7 host. For more information, see Preparing a control node on RHEL 8 . Managed node Managed nodes are the servers and network devices that you manage with Ansible. Managed nodes are also sometimes called hosts. Ansible does not have to be installed on managed nodes. For more information, see Preparing a managed node . Ansible playbook In a playbook, you define the configuration you want to achieve on your managed nodes or a set of steps for the system on the managed node to perform. Playbooks are Ansible's configuration, deployment, and orchestration language. Inventory In an inventory file, you list the managed nodes and specify information such as IP address for each managed node. In an inventory, you can also organize managed nodes, creating and nesting groups for easier scaling. An inventory file is also sometimes called a hostfile. On Red Hat Enterprise Linux 8, you can use the following roles provided by the rhel-system-roles package, which is available in the AppStream repository: Role name Role description Chapter title certificate Certificate Issuance and Renewal Requesting certificates using RHEL System Roles cockpit Web console Installing and configuring web console with the cockpit RHEL System Role crypto_policies System-wide cryptographic policies Setting a custom cryptographic policy across systems firewall Firewalld Configuring firewalld using System Roles ha_cluster HA Cluster Configuring a high-availability cluster using System Roles kdump Kernel Dumps Configuring kdump using RHEL System Roles kernel_settings Kernel Settings Using Ansible roles to permanently configure kernel parameters logging Logging Using the logging System Role metrics Metrics (PCP) Monitoring performance using RHEL System Roles microsoft.sql.server Microsoft SQL Server Configuring Microsoft SQL Server using the microsoft.sql.server Ansible role network Networking Using the network RHEL System Role to manage InfiniBand connections nbde_client Network Bound Disk Encryption client Using the nbde_client and nbde_server System Roles nbde_server Network Bound Disk Encryption server Using the nbde_client and nbde_server System Roles postfix Postfix Variables of the postfix role in System Roles selinux SELinux Configuring SELinux using System Roles ssh SSH client Configuring secure communication with the ssh System Roles sshd SSH server Configuring secure communication with the ssh System Roles storage Storage Managing local storage using RHEL System Roles tlog Terminal Session Recording Configuring a system for session recording using the tlog RHEL System Role timesync Time Synchronization Configuring time synchronization using RHEL System Roles vpn VPN Configuring VPN connections with IPsec by using the vpn RHEL System Role Additional resources Red Hat Enterprise Linux (RHEL) System Roles /usr/share/doc/rhel-system-roles/ provided by the rhel-system-roles package
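To make the inventory and playbook concepts concrete, the following is a minimal, hypothetical example: an INI inventory with two managed nodes and a playbook that applies the timesync role from the table above. The host names, group name, and NTP server are placeholders to adapt to your environment.
# inventory.ini - managed nodes, grouped for easier scaling
[webservers]
server1.example.com
server2.example.com ansible_host=192.0.2.10
# timesync-playbook.yml - apply the timesync RHEL System Role to the group
- hosts: webservers
  vars:
    timesync_ntp_servers:
      - hostname: 0.rhel.pool.ntp.org
        iburst: yes
  roles:
    - rhel-system-roles.timesync
Run it from the control node with: ansible-playbook -i inventory.ini timesync-playbook.yml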
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/intro-to-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles
|
Chapter 2. Preparing Ceph Storage nodes for overcloud deployment
|
Chapter 2. Preparing Ceph Storage nodes for overcloud deployment All nodes in this scenario are bare metal systems that use IPMI for power management. These nodes do not require an operating system because director copies a Red Hat Enterprise Linux 8 image to each node. Additionally, the Ceph Storage services on these nodes are containerized. Director communicates to each node through the Provisioning network during the introspection and provisioning processes. All nodes connect to this network through the native VLAN. 2.1. Cleaning Ceph Storage node disks The Ceph Storage OSDs and journal partitions require GPT disk labels. This means the additional disks on Ceph Storage require conversion to GPT before installing the Ceph OSD services. You must delete all metadata from the disks to allow the director to set GPT labels on them. You can configure director to delete all disk metadata by default. With this option, the Bare Metal Provisioning service runs an additional step to boot the nodes and clean the disks each time the node is set to available . This process adds an additional power cycle after the first introspection and before each deployment. The Bare Metal Provisioning service uses the wipefs --force --all command to perform the clean. Procedure Add the following setting to your /home/stack/undercloud.conf file: After you set this option, run the openstack undercloud install command to execute this configuration change. Warning The wipefs --force --all command deletes all data and metadata on the disk, but does not perform a secure erase. A secure erase takes much longer. 2.2. Registering nodes Procedure Import a node inventory file ( instackenv.json ) in JSON format to director so that director can communicate with the nodes. This inventory file contains hardware and power management details that director can use to register nodes: After you create the inventory file, save the file to the home directory of the stack user ( /home/stack/instackenv.json ). Initialize the stack user, then import the instackenv.json inventory file into director: The openstack overcloud node import command imports the inventory file and registers each node with director. Assign the kernel and ramdisk images to each node: The nodes are registered and configured in director. 2.3. Verifying available Red Hat Ceph Storage packages To help avoid overcloud deployment failures, verify that the required packages exist on your servers. 2.3.1. Verifying the ceph-ansible package version The undercloud contains Ansible-based validations that you can run to identify potential problems before you deploy the overcloud. These validations can help you avoid overcloud deployment failures by identifying common problems before they happen. Procedure Verify that the ceph-ansible package version you want is installed: 2.3.2. Verifying packages for pre-provisioned nodes Red Hat Ceph Storage (RHCS) can service only overcloud nodes that have a certain set of packages. When you use pre-provisioned nodes, you can verify the presence of these packages. For more information about pre-provisioned nodes, see Configuring a basic overcloud with pre-provisioned nodes . Procedure Verify that the pre-provisioned nodes contain the required packages: 2.4. Manually tagging nodes into profiles After you register each node, you must inspect the hardware and tag the node into a specific profile. Use profile tags to match your nodes to flavors, and then assign flavors to deployment roles. 
Procedure Trigger hardware introspection to retrieve the hardware attributes of each node: The --all-manageable option introspects only the nodes that are in a managed state. In this example, all nodes are in a managed state. The --provide option resets all nodes to an active state after introspection. Important Ensure that this process completes successfully. This process usually takes 15 minutes for bare metal nodes. Retrieve a list of your nodes to identify their UUIDs: Add a profile option to the properties/capabilities parameter for each node to manually tag a node to a specific profile. The addition of the profile option tags the nodes into each respective profile. Note As an alternative to manual tagging, use the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data. For example, a typical deployment contains three profiles: control , compute , and ceph-storage . Enter the following commands to tag three nodes for each profile: Tip You can also configure a new custom profile to tag a node for the Ceph MON and Ceph MDS services, see Chapter 3, Deploying Ceph services on dedicated nodes . 2.5. Defining the root disk for multi-disk clusters Director must identify the root disk during provisioning in the case of nodes with multiple disks. For example, most Ceph Storage nodes use multiple disks. By default, director writes the overcloud image to the root disk during the provisioning process. There are several properties that you can define to help director identify the root disk: model (String): Device identifier. vendor (String): Device vendor. serial (String): Disk serial number. hctl (String): Host:Channel:Target:Lun for SCSI. size (Integer): Size of the device in GB. wwn (String): Unique storage identifier. wwn_with_extension (String): Unique storage identifier with the vendor extension appended. wwn_vendor_extension (String): Unique vendor storage identifier. rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD). name (String): The name of the device, for example: /dev/sdb1. Important Use the name property only for devices with persistent names. Do not use name to set the root disk for any other devices because this value can change when the node boots. You can specify the root device using its serial number. Procedure Check the disk information from the hardware introspection of each node. Run the following command to display the disk information of a node: For example, the data for one node might show three disks: Enter openstack baremetal node set --property root_device= to set the root disk for a node. Include the most appropriate hardware attribute value to define the root disk. For example, to set the root device to disk 2, which has the serial number 61866da04f380d001ea4e13c12e36ad6 , enter the following command: Note Ensure that you configure the BIOS of each node to include booting from the root disk that you choose. Configure the boot order to boot from the network first, then to boot from the root disk. Director identifies the specific disk to use as the root disk. When you run the openstack overcloud deploy command, director provisions and writes the overcloud image to the root disk. 2.6. Using the overcloud-minimal image to avoid using a Red Hat subscription entitlement By default, director writes the QCOW2 overcloud-full image to the root disk during the provisioning process. The overcloud-full image uses a valid Red Hat subscription. 
However, you can also use the overcloud-minimal image, for example, to provision a bare OS where you do not want to run any other OpenStack services and consume your subscription entitlements. A common use case for this occurs when you want to provision nodes with only Ceph daemons. For this and similar use cases, you can use the overcloud-minimal image option to avoid reaching the limit of your paid Red Hat subscriptions. For information about how to obtain the overcloud-minimal image, see Obtaining images for overcloud nodes . Note A Red Hat OpenStack Platform (RHOSP) subscription contains Open vSwitch (OVS), but core services, such as OVS, are not available when you use the overcloud-minimal image. OVS is not required to deploy Ceph Storage nodes. Use linux_bond instead of ovs_bond to define bonds. For more information about linux_bond , see Linux bonding options . Procedure To configure director to use the overcloud-minimal image, create an environment file that contains the following image definition: Replace <roleName> with the name of the role and append Image to the name of the role. The following example shows an overcloud-minimal image for Ceph storage nodes: In the roles_data.yaml role definition file, set the rhsm_enforce parameter to False . Pass the environment file to the openstack overcloud deploy command. Note The overcloud-minimal image supports only standard Linux bridges and not OVS because OVS is an OpenStack service that requires a Red Hat OpenStack Platform subscription entitlement.
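To tie Section 2.6 together, the following sketch shows how the CephStorageImage example above might be written to an environment file and passed to the deployment command. The file path is a placeholder, and the deploy command is shown with only the option relevant here; keep any -e files you already use:

```bash
# Hypothetical environment file name; the parameter mirrors the example above.
cat > /home/stack/templates/ceph-minimal.yaml <<'EOF'
parameter_defaults:
  CephStorageImage: overcloud-minimal
EOF

# Include the environment file when you deploy the overcloud.
openstack overcloud deploy --templates \
  -e /home/stack/templates/ceph-minimal.yaml
```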
|
[
"clean_nodes=true",
"{ \"nodes\":[ { \"mac\":[ \"b1:b1:b1:b1:b1:b1\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.205\" }, { \"mac\":[ \"b2:b2:b2:b2:b2:b2\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.206\" }, { \"mac\":[ \"b3:b3:b3:b3:b3:b3\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.207\" }, { \"mac\":[ \"c1:c1:c1:c1:c1:c1\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.208\" }, { \"mac\":[ \"c2:c2:c2:c2:c2:c2\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.209\" }, { \"mac\":[ \"c3:c3:c3:c3:c3:c3\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.210\" }, { \"mac\":[ \"d1:d1:d1:d1:d1:d1\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.211\" }, { \"mac\":[ \"d2:d2:d2:d2:d2:d2\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.212\" }, { \"mac\":[ \"d3:d3:d3:d3:d3:d3\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.213\" } ] }",
"source ~/stackrc openstack overcloud node import ~/instackenv.json",
"openstack overcloud node configure <node>",
"ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-ansible-installed.yaml",
"ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-dependencies-installed.yaml",
"openstack overcloud node introspect --all-manageable --provide",
"openstack baremetal node list",
"openstack baremetal node set --property capabilities=' profile:control ,boot_option:local' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 openstack baremetal node set --property capabilities=' profile:control ,boot_option:local' 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a openstack baremetal node set --property capabilities=' profile:control ,boot_option:local' 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a openstack baremetal node set --property capabilities=' profile:compute ,boot_option:local' 484587b2-b3b3-40d5-925b-a26a2fa3036f openstack baremetal node set --property capabilities=' profile:compute ,boot_option:local' d010460b-38f2-4800-9cc4-d69f0d067efe openstack baremetal node set --property capabilities=' profile:compute ,boot_option:local' d930e613-3e14-44b9-8240-4f3559801ea6 openstack baremetal node set --property capabilities=' profile:ceph-storage ,boot_option:local' 484587b2-b3b3-40d5-925b-a26a2fa3036f openstack baremetal node set --property capabilities=' profile:ceph-storage ,boot_option:local' d010460b-38f2-4800-9cc4-d69f0d067efe openstack baremetal node set --property capabilities=' profile:ceph-storage ,boot_option:local' d930e613-3e14-44b9-8240-4f3559801ea6",
"(undercloud)USD openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | jq \".inventory.disks\"",
"[ { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sda\", \"wwn_vendor_extension\": \"0x1ea4dcc412a9632b\", \"wwn_with_extension\": \"0x61866da04f3807001ea4dcc412a9632b\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f380700\", \"serial\": \"61866da04f3807001ea4dcc412a9632b\" } { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sdb\", \"wwn_vendor_extension\": \"0x1ea4e13c12e36ad6\", \"wwn_with_extension\": \"0x61866da04f380d001ea4e13c12e36ad6\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f380d00\", \"serial\": \"61866da04f380d001ea4e13c12e36ad6\" } { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sdc\", \"wwn_vendor_extension\": \"0x1ea4e31e121cfb45\", \"wwn_with_extension\": \"0x61866da04f37fc001ea4e31e121cfb45\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f37fc00\", \"serial\": \"61866da04f37fc001ea4e31e121cfb45\" } ]",
"(undercloud)USD openstack baremetal node set --property root_device='{\"serial\":\"<serial_number>\"}' <node-uuid>",
"(undercloud)USD openstack baremetal node set --property root_device='{\"serial\": \"61866da04f380d001ea4e13c12e36ad6\"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0",
"parameter_defaults: <roleName>Image: overcloud-minimal",
"parameter_defaults: CephStorageImage: overcloud-minimal",
"rhsm_enforce: False"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_an_overcloud_with_containerized_red_hat_ceph/creation
|
1.2.5. Viewing Changes
|
1.2.5. Viewing Changes Viewing the Status To determine the current status of a working copy, change to the directory with the working copy and run the following command: svn status This displays information about all changes to the working copy ( A for a file that is scheduled for addition, D for a file that is scheduled for removal, M for a file that contains local changes, C for a file with unresolved conflicts, ? for a file that is not under revision control). Example 1.10. Viewing the status of a working copy Imagine that the directory with your working copy of a Subversion repository has the following contents: With the exception of ChangeLog , which is scheduled for addition to the Subversion repository, all files and directories within this directory are already under revision control. The TODO file, which is also under revision control, has been scheduled for removal and is no longer present in the working copy. The LICENSE file has been renamed to COPYING , and Makefile contains local changes. To display the status of such a working copy, type: Viewing Differences To view differences between a working copy and the checked out content, change to the directory with the working copy and run the following command: svn diff [ file ... ] This displays changes to all files in the working copy. If you are only interested in changes to a particular file, supply the file name on the command line. Example 1.11. Viewing changes to a working copy Imagine that the directory with your working copy of a Subversion repository has the following contents: All files in this directory are under revision control and Makefile contains local changes. To view these changes, type:
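The status codes listed above map directly to everyday working-copy operations. A brief sketch of the commands that would produce output like Example 1.10 follows; the file names are taken from the example, and svn add, svn delete, and svn move are standard Subversion subcommands rather than commands introduced in this section:

```
svn add ChangeLog          # schedule a new file for addition      -> "A"
svn delete TODO            # schedule a versioned file for removal -> "D"
svn move LICENSE COPYING   # rename with history                   -> "D" and "A +"
svn status                 # review all pending changes
svn diff Makefile          # show local modifications to Makefile
```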
|
[
"project]USD ls AUTHORS ChangeLog COPYING doc INSTALL Makefile README src",
"project]USD svn status D LICENSE D TODO A ChangeLog A + COPYING M Makefile",
"project]USD ls AUTHORS ChangeLog COPYING CVS doc INSTALL Makefile README src",
"project]USD svn diff Makefile Index: Makefile =================================================================== --- Makefile (revision 1) +++ Makefile (working copy) @@ -153,7 +153,7 @@ -rmdir USD(man1dir) clean: - -rm -f USD(MAN1) + -rm -f USD(MAN1) USD(MAN7) %.1: %.pl USD(POD2MAN) --section=1 --release=\"Version USD(VERSION)\" \\"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/sect-revision_control_systems-svn-view
|
20.7. Managing the Directory Manager Password
|
20.7. Managing the Directory Manager Password The Directory Manager is the privileged database administrator, comparable to the root user in Linux. The Directory Manager entry and the corresponding password are set during the instance installation. The default distinguished name (DN) of the Directory Manager is cn=Directory Manager . 20.7.1. Resetting the Directory Manager Password If you lose the Directory Manager password, reset it: Stop the Directory Server instance: Generate a new password hash. For example: Specifying the path to the Directory Server configuration automatically uses the password storage scheme set in the nsslapd-rootpwstoragescheme attribute to encrypt the new password. Edit the /etc/dirsrv/slapd- instance_name /dse.ldif file and set the nsslapd-rootpw attribute to the value displayed in the previous step: Start the Directory Server instance: 20.7.2. Changing the Directory Manager Password This section describes how to change the password of the Directory Manager account. 20.7.2.1. Changing the Directory Manager Password Using the Command Line Use one of the following options to set the new password: Important Only set the password using an encrypted connection. Using an unencrypted connection can expose the password to the network. If your server does not support encrypted connections, use the web console to update the Directory Manager password. See Section 20.7.2.2, "Changing the Directory Manager Password Using the Web Console" . To set the nsslapd-rootpw parameter to a plain text value which Directory Server automatically encrypts: Warning Do not use curly braces ( {} ) in the password. Directory Server stores the password in the {password-storage-scheme}hashed_password format. The server interprets characters in curly braces as the password storage scheme. If the string is an invalid storage scheme or if the password is not correctly hashed, the Directory Manager cannot connect to the server. To manually encrypt the password and set it in the nsslapd-rootpw parameter: Generate a new password hash. For example: Specifying the path to the Directory Server configuration automatically uses the password storage scheme set in the nsslapd-rootpwstoragescheme attribute to encrypt the new password. Set the nsslapd-rootpw attribute to the value displayed in the previous step using a secure connection (STARTTLS): 20.7.2.2. Changing the Directory Manager Password Using the Web Console As the administrator, perform these steps to change the password: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Server Settings menu, and select Server Settings . Open the Directory Manager tab. Enter the new password into the Directory Manager Password and Confirm Password fields. Optionally, set a different password storage scheme. Click Save . 20.7.3. Changing the Directory Manager Password Storage Scheme The password storage scheme specifies which algorithm Directory Server uses to hash a password. To change the storage scheme using the command line, your server must support encrypted connections. If your server does not support encrypted connections, use the web console to set the storage scheme. See Section 20.7.3.2, "Changing the Directory Manager Password Storage Scheme Using the Web Console" . Note that the storage scheme of the Directory Manager ( nsslapd-rootpwstoragescheme ) can be different from the scheme used to encrypt user passwords ( nsslapd-pwstoragescheme ). 
For a list of supported password storage schemes, see the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . Note If you change the Directory Manager's password storage scheme, you must also reset its password. Existing passwords cannot be re-encrypted. 20.7.3.1. Changing the Directory Manager Password Storage Scheme Using the Command Line If your server supports encrypted connections, perform these steps to change the password storage scheme: Generate a new password hash that uses the new storage scheme. For example: Set the nsslapd-rootpwstoragescheme attribute to the storage scheme and the nsslapd-rootpw attribute to the value displayed in the previous step using a secure connection (STARTTLS): 20.7.3.2. Changing the Directory Manager Password Storage Scheme Using the Web Console Perform these steps to change the password using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Server Settings menu, and select Server Settings . Open the Directory Manager tab. Set the password storage scheme. Directory Server cannot re-encrypt the current password using the new storage scheme. Therefore, enter a new password into the Directory Manager Password and Confirm Password fields. Click Save Configuration . 20.7.4. Changing the Directory Manager DN As the administrator, perform the following step to change the Directory Manager DN to cn=New Directory Manager : Note that Directory Server supports only changing the Directory Manager DN using the command line.
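After changing the password or its storage scheme, it can be useful to confirm the result before closing the session. The following is an illustrative sketch that reuses the dsconf connection style shown above; the config get subcommand and the ldapwhoami bind test are verification ideas assumed here, not steps from this procedure:

```bash
# Display the storage scheme currently configured for the Directory Manager password.
dsconf -D "cn=Directory Manager" ldaps://server.example.com \
    config get nsslapd-rootpwstoragescheme

# A successful bind with the new password confirms that the change took effect.
ldapwhoami -H ldaps://server.example.com -D "cn=Directory Manager" -W
```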
|
[
"dsctl instance_name stop",
"pwdhash -D /etc/dirsrv/slapd- instance_name password {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW",
"nsslapd-rootpw: {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW",
"dsctl instance_name start",
"dsconf -D \"cn=Directory Manager\" ldaps://server.example.com config replace nsslapd-rootpw= password",
"pwdhash -D /etc/dirsrv/slapd- instance_name password {PBKDF2_SHA256}AAAgAMwPYIhEkQozTagoX6RGG5E7d6/6oOJ8TVty",
"dsconf -D \"cn=Directory Manager\" ldaps://server.example.com config replace nsslapd-rootpw=\" {PBKDF2_SHA256}AAAgAMwPYIhEkQozTagoX6RGG5E7d6/6oOJ8TVty... \"",
"pwdhash -s PBKDF2_SHA256 password {PBKDF2_SHA256}AAAgAMwPYIhEkQozTagoX6RGG5E7d6/6oOJ8TVty",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-rootpwstoragescheme= PBKDF2_SHA256 nsslapd-rootpw=\" {PBKDF2_SHA256}AAAgAMwPYIhEkQozTagoX6RGG5E7d6/6oOJ8TVty... \"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-rootdn=\" cn=New Directory Manager \""
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/dirmnger-pwd
|
Appendix A. Inventory file variables
|
Appendix A. Inventory file variables The following tables contain information about the pre-defined variables used in Ansible installation inventory files. Not all of these variables are required. A.1. General variables Variable Description enable_insights_collection The default install registers the node to the Red Hat Insights for Red Hat Ansible Automation Platform Service if the node is registered with Subscription Manager. Set to False to disable. Default = true nginx_user_http_config List of nginx configurations for /etc/nginx/nginx.conf under the http section. Each element in the list is provided into http nginx config as a separate line. Default = empty list registry_password registry_password is only required if a non-bundle installer is used. Password credential for access to registry_url . Used for both [automationcontroller] and [automationhub] groups. Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry. When registry_url is registry.redhat.io , username and password are required if not using a bundle installer. registry_url Used for both [automationcontroller] and [automationhub] groups. Default = registry.redhat.io . registry_username registry_username is only required if a non-bundle installer is used. User credential for access to registry_url . Used for both [automationcontroller] and [automationhub] groups, but only if the value of registry_url is registry.redhat.io . Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry. routable_hostname routable hostname is used if the machine running the installer can only route to the target host through a specific URL, for example, if you use shortnames in your inventory, but the node running the installer can only resolve that host using FQDN. If routable_hostname is not set, it should default to ansible_host . If you do not set ansible_host , inventory_hostname is used as a last resort. This variable is used as a host variable for particular hosts and not under the [all:vars] section. For further information, see Assigning a variable to one machine:host variables . A.2. Ansible automation hub variables Variable Description automationhub_admin_password Required Passwords must be enclosed in quotes when they are provided in plain text in the inventory file. automationhub_api_token If upgrading from Ansible Automation Platform 2.0 or earlier, you must either: provide an existing Ansible automation hub token as automationhub_api_token , or set generate_automationhub_token to true to generate a new token Generating a new token invalidates the existing token. automationhub_authentication_backend This variable is not set by default. Set it to ldap to use LDAP authentication. When this is set to ldap , you must also set the following variables: automationhub_ldap_server_uri automationhub_ldap_bind_dn automationhub_ldap_bind_password automationhub_ldap_user_search_base_dn automationhub_ldap_group_search_base_dn If any of these are absent, the installation will be halted. automationhub_auto_sign_collections If a collection signing service is enabled, collections are not signed automatically by default. Setting this parameter to true signs them by default. Default = false . automationhub_backup_collections Optional Ansible automation hub provides artifacts in /var/lib/pulp . Automation controller automatically backs up the artifacts by default. 
You can also set automationhub_backup_collections to false and the backup/restore process does not then back up or restore /var/lib/pulp . Default = true . automationhub_collection_download_count Optional Determines whether download count is displayed on the UI. Default = false . automationhub_collection_seed_repository When you run the bundle installer, validated content is uploaded to the validated repository, and certified content is uploaded to the rh-certified repository. By default, both certified and validated content are uploaded. Possible values of this variable are 'certified' or 'validated'. If you do not want to install content, set automationhub_seed_collections to false to disable the seeding. If you only want one type of content, set automationhub_seed_collections to true and automationhub_collection_seed_repository to the type of content you do want to include. automationhub_collection_signing_service_key If a collection signing service is enabled, you must provide this variable to ensure that collections can be properly signed. /absolute/path/to/key/to/sign automationhub_collection_signing_service_script If a collection signing service is enabled, you must provide this variable to ensure that collections can be properly signed. /absolute/path/to/script/that/signs automationhub_create_default_collection_signing_service Set this variable to true to create a collection signing service. Default = false . automationhub_container_signing_service_key If a container signing service is enabled, you must provide this variable to ensure that containers can be properly signed. /absolute/path/to/key/to/sign automationhub_container_signing_service_script If a container signing service is enabled, you must provide this variable to ensure that containers can be properly signed. /absolute/path/to/script/that/signs automationhub_create_default_container_signing_service Set this variable to true to create a container signing service. Default = false . automationhub_disable_hsts The default installation deploys a TLS enabled Ansible automation hub. Use this variable if you deploy automation hub with HTTP Strict Transport Security (HSTS) web-security policy enabled. This variable disables the HSTS web-security policy mechanism. Default = false . automationhub_disable_https Optional If Ansible automation hub is deployed with HTTPS enabled. Default = false . automationhub_enable_api_access_log When set to true , this variable creates a log file at /var/log/galaxy_api_access.log that logs all user actions made to the platform, including their username and IP address. Default = false . automationhub_enable_analytics A Boolean indicating whether to enable pulp analytics for the version of pulpcore used in automation hub in Ansible Automation Platform 2.4. To enable pulp analytics, set automationhub_enable_analytics to true. Default = false . automationhub_enable_unauthenticated_collection_access Set this variable to true to enable unauthorized users to view collections. Default = false . automationhub_enable_unauthenticated_collection_download Set this variable to true to enable unauthorized users to download collections. Default = false . automationhub_importer_settings Optional Dictionary of settings to pass to galaxy-importer. At import time, collections can go through a series of checks. Behavior is driven by galaxy-importer.cfg configuration. Examples are ansible-doc , ansible-lint , and flake8 . This parameter enables you to drive this configuration. 
automationhub_main_url The main automation hub URL that clients connect to. For example, https://<load balancer host>. Use automationhub_main_url to specify the main automation hub URL that clients connect to if you are implementing Red Hat Single Sign-On on your automation hub environment. If not specified, the first node in the [automationhub] group is used. automationhub_pg_database Required The database name. Default = automationhub . automationhub_pg_host Required if not using an internal database. The hostname of the remote PostgreSQL database used by automation hub. Default = 127.0.0.1 . automationhub_pg_password The password for the automation hub PostgreSQL database. Use of special characters for automationhub_pg_password is limited. The ! , # , 0 and @ characters are supported. Use of other special characters can cause the setup to fail. automationhub_pg_port Required if not using an internal database. Default = 5432. automationhub_pg_sslmode Required. Default = prefer . automationhub_pg_username Required Default = automationhub . automationhub_require_content_approval Optional Value is true if automation hub enforces the approval mechanism before collections are made available. By default when you upload collections to automation hub an administrator must approve it before they are made available to the users. If you want to disable the content approval flow, set the variable to false . Default = true . automationhub_seed_collections A Boolean that defines whether or not preloading is enabled. When you run the bundle installer, validated content is uploaded to the validated repository, and certified content is uploaded to the rh-certified repository. By default, both certified and validated content are uploaded. If you do not want to install content, set automationhub_seed_collections to false to disable the seeding. If you only want one type of content, set automationhub_seed_collections to true and automationhub_collection_seed_repository to the type of content you do want to include. Default = true . automationhub_ssl_cert Optional /path/to/automationhub.cert Same as web_server_ssl_cert but for automation hub UI and API. automationhub_ssl_key Optional /path/to/automationhub.key . Same as web_server_ssl_key but for automation hub UI and API automationhub_ssl_validate_certs For Red Hat Ansible Automation Platform 2.2 and later, this value is no longer used. Set value to true if automation hub must validate certificates when requesting itself because by default, Ansible Automation Platform deploys with self-signed certificates. Default = false . automationhub_upgrade Deprecated For Ansible Automation Platform 2.2.1 and later, the value of this has been fixed at true . Automation hub always updates with the latest packages. automationhub_user_headers List of nginx headers for Ansible automation hub's web server. Each element in the list is provided to the web server's nginx configuration as a separate line. Default = empty list ee_from_hub_only When deployed with automation hub the installer pushes execution environment images to automation hub and configures automation controller to pull images from the automation hub registry. To make automation hub the only registry to pull execution environment images from, set this variable to true . If set to false , execution environment images are also taken directly from Red Hat. Default = true when the bundle installer is used. 
generate_automationhub_token If upgrading from Red Hat Ansible Automation Platform 2.0 or earlier, choose one of the following options: provide an existing Ansible automation hub token as automationhub_api_token set generate_automationhub_token to true to generate a new token. Generating a new token will invalidate the existing token. nginx_hsts_max_age This variable specifies how long, in seconds, the system should be considered as a HTTP Strict Transport Security (HSTS) host. That is, how long HTTPS is used exclusively for communication. Default = 63072000 seconds, or two years. nginx_tls_protocols Defines support for ssl_protocols in Nginx. Values available TLSv1 , TLSv1.1, `TLSv1.2 , TLSv1.3 The TLSv1.1 and TLSv1.2 parameters only work when OpenSSL 1.0.1 or higher is used. The TLSv1.3 parameter only works when OpenSSL 1.1.1 or higher is used. If nginx_tls-protocols = ['TLSv1.3'] only TLSv1.3 is enabled. To set more than one protocol use nginx_tls_protocols = ['TLSv1.2', 'TLSv.1.3'] Default = TLSv1.2 . pulp_db_fields_key Relative or absolute path to the Fernet symmetric encryption key that you want to import. The path is on the Ansible management node. It is used to encrypt certain fields in the database, such as credentials. If not specified, a new key will be generated. sso_automation_platform_login_theme Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. Path to the directory where theme files are located. If changing this variable, you must provide your own theme files. Default = ansible-automation-platform . sso_automation_platform_realm Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. The name of the realm in SSO. Default = ansible-automation-platform . sso_automation_platform_realm_displayname Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. Display name for the realm. Default = Ansible Automation Platform . sso_console_admin_username Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. SSO administration username. Default = admin . sso_console_admin_password Required Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. SSO administration password. sso_custom_keystore_file Optional Used for Ansible Automation Platform managed Red Hat Single Sign-On only. Customer-provided keystore for SSO. sso_host Required Used for Ansible Automation Platform externally managed Red Hat Single Sign-On only. Automation hub requires SSO and SSO administration credentials for authentication. If SSO is not provided in the inventory for configuration, then you must use this variable to define the SSO host. sso_keystore_file_remote Optional Used for Ansible Automation Platform managed Red Hat Single Sign-On only. Set to true if the customer-provided keystore is on a remote node. Default = false . sso_keystore_name Optional Used for Ansible Automation Platform managed Red Hat Single Sign-On only. Name of keystore for SSO. Default = ansible-automation-platform . sso_keystore_password Password for keystore for HTTPS enabled SSO. Required when using Ansible Automation Platform managed SSO and when HTTPS is enabled. The default install deploys SSO with sso_use_https=true . sso_redirect_host Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. 
If sso_redirect_host is set, it is used by the application to connect to SSO for authentication. This must be reachable from client machines. sso_ssl_validate_certs Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. Set to true if the certificate must be validated during connection. Default = true . sso_use_https Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On if Single Sign On uses HTTPS. Default = true . For Ansible automation hub to connect to LDAP directly, you must configure the following variables: A list of additional LDAP related variables that can be passed using the ldap_extra_settings variable, see the Django reference documentation . Variable Description automationhub_ldap_bind_dn The name to use when binding to the LDAP server with automationhub_ldap_bind_password . Must be set when integrating private automation hub with LDAP, or the installation will fail. automationhub_ldap_bind_password Required The password to use with automationhub_ldap_bind_dn . Must be set when integrating private automation hub LDAP, or the installation will fail. automationhub_ldap_group_search_base_dn An LDAP Search object that finds all LDAP groups that users might belong to. If your configuration makes any references to LDAP groups, you must set this variable and automationhub_ldap_group_type . Must be set when integrating private automation hub with LDAP, or the installation will fail. Default = None automationhub_ldap_group_search_filter Optional Search filter for finding group membership. Variable identifies what objectClass type to use for mapping groups with automation hub and LDAP. Used for installing automation hub with LDAP. Default = (objectClass=Group) automationhub_ldap_group_search_scope Optional Scope to search for groups in an LDAP tree using the django framework for LDAP authentication. Used for installing automation hub with LDAP. Default = SUBTREE automationhub_ldap_group_type Describes the type of group returned by automationhub_ldap_group_search . This is set dynamically based on the the values of automationhub_ldap_group_type_params and automationhub_ldap_group_type_class , otherwise it is the default value coming from django-ldap which is 'None' Default = django_auth_ldap.config:GroupOfNamesType automationhub_ldap_group_type_class Optional The importable path for the django-ldap group type class. Variable identifies the group type used during group searches within the django framework for LDAP authentication. Used for installing automation hub with LDAP. Default = django_auth_ldap.config:GroupOfNamesType automationhub_ldap_server_uri The URI of the LDAP server. Use any URI that is supported by your underlying LDAP libraries. Must be set when integrating private automation hub LDAP, or the installation will fail. automationhub_ldap_user_search_base_dn An LDAP Search object that locates a user in the directory. The filter parameter must contain the placeholder %(user)s for the username. It must return exactly one result for authentication to succeed. Must be set when integrating private automation hub with LDAP, or the installation will fail. automationhub_ldap_user_search_filter Optional Default = '(uid=%(user)s)' automationhub_ldap_user_search_scope Optional Scope to search for users in an LDAP tree by using the django framework for LDAP authentication. Used for installing automation hub with LDAP. Default = SUBTREE A.3. 
Automation controller variables Variable Description admin_password The admin password used to connect to the automation controller instance. Passwords must be enclosed in quotes when they are provided in plain text in the inventory file. automation_controller_main_url The full URL used by Event-Driven Ansible to connect to a controller host. This URL is required if there is no automation controller configured in the inventory file. Format example: automation_controller_main_url='https://<hostname>' admin_username The username used to identify and create the admin superuser in automation controller. admin_email The email address used for the admin user for automation controller. nginx_http_port The nginx HTTP server listens for inbound connections. Default = 80 nginx_https_port The nginx HTTPS server listens for secure connections. Default = 443 nginx_hsts_max_age This variable specifies how long, in seconds, the system must be considered as a HTTP Strict Transport Security (HSTS) host. That is, how long HTTPS is used exclusively for communication. Default = 63072000 seconds, or two years. nginx_tls_protocols Defines support for ssl_protocols in Nginx. Values available TLSv1 , TLSv1.1, `TLSv1.2 , TLSv1.3 The TLSv1.1 and TLSv1.2 parameters only work when OpenSSL 1.0.1 or higher is used. The TLSv1.3 parameter only works when OpenSSL 1.1.1 or higher is used. If nginx_tls-protocols = ['TLSv1.3'] only TLSv1.3 is enabled. To set more than one protocol use nginx_tls_protocols = ['TLSv1.2', 'TLSv.1.3'] Default = TLSv1.2 . nginx_user_headers List of nginx headers for the automation controller web server. Each element in the list is provided to the web server's nginx configuration as a separate line. Default = empty list node_state Optional The status of a node or group of nodes. Valid options are active , deprovision to remove a node from a cluster, or iso_migrate to migrate a legacy isolated node to an execution node. Default = active . node_type For [automationcontroller] group. Two valid node_types can be assigned for this group. A node_type=control means that the node only runs project and inventory updates, but not regular jobs. A node_type=hybrid can run everything. Default for this group = hybrid For [execution_nodes] group: Two valid node_types can be assigned for this group. A node_type=hop implies that the node forwards jobs to an execution node. A node_type=execution implies that the node can run jobs. Default for this group = execution . peers Optional The peers variable is used to indicate which nodes a specific host or group connects to. Wherever this variable is defined, an outbound connection to the specific host or group is established. This variable is used to add tcp-peer entries in the receptor.conf file used for establishing network connections with other nodes. The peers variable can be a comma-separated list of hosts and groups from the inventory. This is resolved into a set of hosts that is used to construct the receptor.conf file. pg_database The name of the postgreSQL database. Default = awx . pg_host The postgreSQL host, which can be an externally managed database. pg_password The password for the postgreSQL database. Use of special characters for pg_password is limited. The ! , # , 0 and @ characters are supported. Use of other special characters can cause the setup to fail. NOTE You no longer have to provide a pg_hashed_password in your inventory file at the time of installation because PostgreSQL 13 can now store user passwords more securely. 
When you supply pg_password in the inventory file for the installer, PostgreSQL uses the SCRAM-SHA-256 hash to secure that password as part of the installation process. pg_port The postgreSQL port to use. Default = 5432 pg_ssl_mode Choose one of the two available modes: prefer and verify-full . Set to verify-full for client-side enforced SSL. Default = prefer . pg_username Your postgreSQL database username. Default = awx . postgres_ssl_cert Location of the postgreSQL SSL certificate. /path/to/pgsql_ssl.cert postgres_ssl_key Location of the postgreSQL SSL key. /path/to/pgsql_ssl.key postgres_use_cert Location of the postgreSQL user certificate. /path/to/pgsql.crt postgres_use_key Location of the postgreSQL user key. /path/to/pgsql.key postgres_use_ssl Use this variable if postgreSQL uses SSL. postgres_max_connections Maximum database connections setting to apply if you are using installer-managed postgreSQL. See PostgreSQL database configuration in the automation controller administration guide for help selecting a value. Default for VM-based installations = 200 for a single node and 1024 for a cluster. receptor_listener_port Port to use for receptor connection. Default = 27199 receptor_listener_protocol Protocol to connect to a receptor. Default = tcp receptor_datadir This variable configures the receptor data directory. By default, it is set to /tmp/receptor . To change the default location, run the installation script with "-e receptor_datadir=" and specify the target directory that you want. NOTES * The target directory must be accessible to awx users. * If the target directory is a temporary file system tmpfs , ensure it is remounted correctly after a reboot. Failure to do so results in the receptor no longer having a working directory. web_server_ssl_cert Optional /path/to/webserver.cert Same as automationhub_ssl_cert but for web server UI and API. web_server_ssl_key Optional /path/to/webserver.key Same as automationhub_server_ssl_key but for web server UI and API. A.4. Ansible variables The following variables control how Ansible Automation Platform interacts with remote hosts. For more information about variables specific to certain plugins, see the documentation for Ansible.Builtin . For a list of global configuration options, see Ansible Configuration Settings . Variable Description ansible_connection The connection plugin used for the task on the target host. This can be the name of any of Ansible connection plugin. SSH protocol types are smart , ssh or paramiko . Default = smart ansible_host The ip or name of the target host to use instead of inventory_hostname . ansible_port The connection port number. Default: 22 for ssh ansible_user The user name to use when connecting to the host. ansible_password The password to authenticate to the host. Never store this variable in plain text. Always use a vault. ansible_ssh_private_key_file Private key file used by SSH. Useful if using multiple keys and you do not want to use an SSH agent. ansible_ssh_common_args This setting is always appended to the default command line for sftp , scp , and ssh . Useful to configure a ProxyCommand for a certain host or group. ansible_sftp_extra_args This setting is always appended to the default sftp command line. ansible_scp_extra_args This setting is always appended to the default scp command line. ansible_ssh_extra_args This setting is always appended to the default ssh command line. ansible_ssh_pipelining Determines if SSH pipelining is used. This can override the pipelining setting in ansible.cfg . 
If using SSH key-based authentication, the key must be managed by an SSH agent. ansible_ssh_executable Added in version 2.2. This setting overrides the default behavior to use the system SSH. This can override the ssh_executable setting in ansible.cfg . ansible_shell_type The shell type of the target system. Do not use this setting unless you have set the ansible_shell_executable to a non-Bourne (sh) compatible shell. By default commands are formatted using sh-style syntax. Setting this to csh or fish causes commands executed on target systems to follow the syntax of those shells instead. ansible_shell_executable This sets the shell that the Ansible controller uses on the target machine, and overrides the executable in ansible.cfg which defaults to /bin/sh . Do not change this variable unless /bin/sh is not installed on the target machine or cannot be run from sudo. inventory_hostname This variable takes the hostname of the machine from the inventory script or the Ansible configuration file. You cannot set the value of this variable. Because the value is taken from the configuration file, the actual runtime hostname value can vary from what is returned by this variable. A.5. Event-Driven Ansible controller variables Variable Description automationedacontroller_admin_password The admin password used by the Event-Driven Ansible controller instance. Passwords must be enclosed in quotes when they are provided in plain text in the inventory file. automationedacontroller_admin_username Username used by django to identify and create the admin superuser in Event-Driven Ansible controller. Default = admin automationedacontroller_admin_email Email address used by django for the admin user for Event-Driven Ansible controller. Default = [email protected] automationedacontroller_allowed_hostnames List of additional addresses to enable for user access to Event-Driven Ansible controller. Default = empty list automationedacontroller_controller_verify_ssl Boolean flag used to verify automation controller's web certificates when making calls from Event-Driven Ansible controller. Verified is true ; not verified is false . Default = false automationedacontroller_disable_https Boolean flag to disable HTTPS for Event-Driven Ansible controller. Default = false automationedacontroller_disable_hsts Boolean flag to disable HSTS for Event-Driven Ansible controller. Default = false automationedacontroller_gunicorn_workers Number of workers for the API served through gunicorn. Default = (# of cores or threads) * 2 + 1 automationedacontroller_max_running_activations The maximum number of activations running concurrently per node. This is an integer that must be greater than 0. Default = 12 automationedacontroller_nginx_tls_files_remote Boolean flag to specify whether cert sources are on the remote host (true) or local (false). Default = false automationedacontroller_pg_database The Postgres database used by Event-Driven Ansible controller. Default = automationedacontroller . automationedacontroller_pg_host The hostname of the Postgres database used by Event-Driven Ansible controller, which can be an externally managed database. automationedacontroller_pg_password The password for the Postgres database used by Event-Driven Ansible controller. Use of special characters for automationedacontroller_pg_password is limited. The ! , # , 0 and @ characters are supported. Use of other special characters can cause the setup to fail. 
automationedacontroller_pg_port The port number of the Postgres database used by Event-Driven Ansible controller. Default = 5432 . automationedacontroller_pg_username The username for your Event-Driven Ansible controller Postgres database. Default = automationedacontroller . automationedacontroller_rq_workers Number of Redis Queue (RQ) workers used by Event-Driven Ansible controller. RQ workers are Python processes that run in the background. Default = (# of cores or threads) * 2 + 1 automationedacontroller_ssl_cert Optional /root/ssl_certs/eda. <example> .com.crt Same as automationhub_ssl_cert but for Event-Driven Ansible controller UI and API. automationedacontroller_ssl_key Optional /root/ssl_certs/eda. <example> .com.key Same as automationhub_server_ssl_key but for Event-Driven Ansible controller UI and API. automationedacontroller_user_headers List of additional nginx headers to add to Event-Driven Ansible controller's nginx configuration. Default = empty list
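The variables described in this appendix are normally set in the installer's INI-style inventory file. The fragment below is a minimal, illustrative sketch only: the hostnames and password placeholders are not real values, and only a handful of the documented variables are shown:

```ini
[automationcontroller]
controller.example.com

[automationhub]
hub.example.com

[all:vars]
admin_password='<controller_admin_password>'
automationhub_admin_password='<hub_admin_password>'

pg_host='db.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='<database_password>'

automationhub_pg_host='db.example.com'
automationhub_pg_password='<database_password>'

registry_url='registry.redhat.io'
registry_username='<registry_service_account_username>'
registry_password='<registry_service_account_password>'
```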
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars
|
Chapter 11. Red Hat Enterprise Linux Atomic Host
|
Chapter 11. Red Hat Enterprise Linux Atomic Host Included in the release of Red Hat Enterprise Linux 7.1 is Red Hat Enterprise Linux Atomic Host - a secure, lightweight, and minimal-footprint operating system optimized to run Linux containers. It has been designed to take advantage of the powerful technology available in Red Hat Enterprise Linux 7. Red Hat Enterprise Linux Atomic Host uses SELinux to provide strong safeguards in multi-tenant environments, and provides the ability to perform atomic upgrades and rollbacks, enabling quicker and easier maintenance with less downtime. Red Hat Enterprise Linux Atomic Host uses the same upstream projects delivered via the same RPM packaging as Red Hat Enterprise Linux 7. Red Hat Enterprise Linux Atomic Host is pre-installed with the following tools to support Linux containers: Docker - For more information, see Get Started with Docker Formatted Container Images on Red Hat Systems . Kubernetes , flannel , etcd - For more information, see Get Started Orchestrating Containers with Kubernetes . Red Hat Enterprise Linux Atomic Host makes use of the following technologies: OSTree and rpm-OSTree - These projects provide atomic upgrades and rollback capability. systemd - The powerful new init system for Linux that enables faster boot times and easier orchestration. SELinux - Enabled by default to provide complete multi-tenant security. New features in Red Hat Enterprise Linux Atomic Host 7.1.4 The iptables-service package has been added. It is now possible to enable automatic "command forwarding": commands that are not found on Red Hat Enterprise Linux Atomic Host are seamlessly retried inside the RHEL Atomic Tools container. The feature is disabled by default (it requires the RHEL Atomic Tools image to be pulled on the system). To enable it, uncomment the export line in the /etc/sysconfig/atomic file so it looks like this: The atomic command: You can now pass three options ( OPT1 , OPT2 , OPT3 ) to the LABEL command in a Dockerfile. Developers can add environment variables to the labels to allow users to pass additional commands using atomic . The following is an example from a Dockerfile: This line means that running the following command: is identical to running You can now use ${NAME} and ${IMAGE} anywhere in your label, and atomic will substitute them with an image and a name. The ${SUDO_UID} and ${SUDO_GID} options are set and can be used in image LABEL . The atomic mount command attempts to mount the file system belonging to a given container/image ID or image to the given directory. Optionally, you can provide a registry and tag to use a specific version of an image. New features in Red Hat Enterprise Linux Atomic Host 7.1.3 Enhanced rpm-OSTree to provide a unique machine ID for each machine provisioned. Support for remote-specific GPG keyring has been added, specifically to associate a particular GPG key with a particular OSTree remote. The atomic command: atomic upload - allows the user to upload a container image to a docker repository or to a Pulp/Crane instance. atomic version - displays the "Name Version Release" container label in the following format: ContainerID;Name-Version-Release;Image/Tag atomic verify - inspects an image to verify that the image layers are based on the latest image layers available. For example, if you have a MongoDB application based on rhel7-1.1.2 and a rhel7-1.1.3 base image is available, the command will inform you there is a later image. A dbus interface has been added to the verify and version commands. 
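A short sketch of how the atomic subcommands added in 7.1.3 might be invoked follows. The image name, registry, tag, and mount point are placeholders chosen for illustration:

```bash
# Check the Name-Version-Release label and whether newer base image layers exist.
atomic version rhel7/rhel-tools
atomic verify rhel7/rhel-tools

# Mount an image's file system for inspection; a registry and tag can be supplied
# to pin a specific version of the image.
atomic mount registry.access.redhat.com/rhel7/rhel-tools:latest /mnt/rhel-tools
```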
New features in Red Hat Enterprise Linux Atomic Host 7.1.2 The atomic command-line interface is now available for Red Hat Enterprise Linux 7.1 as well as Red Hat Enterprise Linux Atomic Host. Note that the feature set is different on both systems. Only Red Hat Enterprise Linux Atomic Host includes support for OSTree updates. The atomic run command is supported on both platforms. atomic run allows a container to specify its run-time options via the RUN meta-data label. This is used primarily with privileges. atomic install and atomic uninstall allow a container to specify install and uninstall scripts via the INSTALL and UNINSTALL meta-data labels. atomic now supports container upgrade and checking for updated images. The iscsi-initiator-utils package has been added to Red Hat Enterprise Linux Atomic Host. This allows the system to mount iSCSI volumes; Kubernetes has gained a storage plugin to set up iSCSI mounts for containers. You will also find Integrity Measurement Architecture (IMA), audit and libwrap available from systemd . Important Red Hat Enterprise Linux Atomic Host is not managed in the same way as other Red Hat Enterprise Linux 7 variants. Specifically: The Yum package manager is not used to update the system and install or update software packages. For more information, see Installing Applications on Red Hat Enterprise Linux Atomic Host . There are only two directories on the system with write access for storing local system configuration: /etc/ and /var/ . The /usr/ directory is mounted read-only. Other directories are symbolic links to a writable location - for example, the /home/ directory is a symlink to /var/home/ . For more information, see Red Hat Enterprise Linux Atomic Host File System . The default partitioning dedicates most of available space to containers, using direct Logical Volume Management (LVM) instead of the default loopback. For more information, see Getting Started with Red Hat Enterprise Linux Atomic Host . Red Hat Enterprise Linux Atomic Host 7.1.1 provides new versions of Docker and etcd , and maintenance fixes for the atomic command and other components.
|
[
"export TOOLSIMG=rhel7/rhel-tools",
"LABEL docker run USD{OPT1}USD{IMAGE}",
"atomic run --opt1=\"-ti\" image_name",
"docker run -ti image_name"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-atomic_host
|
11.5. Allowing Non-admin Users to Manage User Entries
|
11.5. Allowing Non-admin Users to Manage User Entries By default, only the admin user is allowed to manage user life cycle and disable or enable user accounts. To allow another, non-admin user to do this, create a new role, add the relevant permissions to this role, and assign the non-admin user to the role. By default, IdM includes the following privileges related to managing user accounts: Modify Users and Reset passwords This privilege includes permissions to modify various user attributes. User Administrators This privilege includes permissions to add active users, activate non-active users, remove users, modify user attributes, and other permissions. Stage User Provisioning This privilege includes a permission to add stage users. Stage User Administrator This privilege includes permissions to perform a number of life cycle operations, such as adding stage users or moving users between life cycle states. However, it does not include permissions to move users to the active state. For information on defining roles, permissions, and privileges, see Section 10.4, "Defining Role-Based Access Controls" . Allowing Different Users to Perform Different User Management Operations The different privileges related to managing user accounts can be added to different users. For example, you can separate privileges for employee account entry and activation by: Configuring one user as a stage user administrator , who is allowed to add future employees to IdM as stage users, but not to activate them. Configuring another user as a security administrator , who is allowed to activate the stage users after their employee credentials are verified on the first day of employment. To allow a user to perform certain user management operations, create a new role with the required privilege or privileges, and assign the user to that role. Example 11.1. Allowing a Non-admin User to Add Stage Users This example shows how to create a user who is only allowed to add new stage users, but not to perform any other stage user management operations. Log in as the admin user or another user allowed to manage role-based access control. Create a new custom role to manage adding stage users. Create the System Provisioning role. Add the Stage User Provisioning privilege to the role. This privilege provides the ability to add stage users. Grant a non-admin user the rights to add stage users. If the non-admin user does not yet exist, create a new user. In this example, the user is named stage_user_admin . Assign the stage_user_admin user to the System Provisioning role. To make sure the System Provisioning role is configured correctly, you can use the ipa role-show command to display the role settings. Test adding a new stage user as the stage_user_admin user. Log in as stage_user_admin . Note that if you created stage_user_admin as a new user in one of the steps, IdM will ask you to change the initial password set by admin . To make sure your Kerberos ticket for admin has been replaced with a Kerberos ticket for stage_user_admin , you can use the klist utility. Add a new stage user. Note The error that IdM reports after adding a stage user is expected. The stage_user_admin is only allowed to add stage users, not to display information about them. Therefore, instead of displaying a summary of the newly added stage_user settings, IdM displays the error. The stage_user_admin user is not allowed to display information about stage users. 
Therefore, an attempt to display information about the new stage_user user while logged in as stage_user_admin fails: To display information about stage_user , you can log in as admin :
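Extending the example, the separation of duties described earlier (a security administrator who activates stage users after their credentials are verified) could be configured with a second role in the same way. The sketch below is illustrative only: the role name and the security_admin user are hypothetical, and the User Administrators privilege is chosen because it includes the permission to activate non-active users:

```bash
kinit admin

# Hypothetical companion role for the person who activates verified stage users.
ipa role-add "Security Administrator" --desc "Activates verified stage users"
ipa role-add-privilege "Security Administrator" --privileges="User Administrators"
ipa role-add-member "Security Administrator" --users=security_admin
```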
|
[
"kinit admin",
"ipa role-add --desc \"Responsible for provisioning stage users\" \"System Provisioning\" -------------------------------- Added role \"System Provisioning\" -------------------------------- Role name: System Provisioning Description: Responsible for provisioning stage users",
"ipa role-add-privilege \"System Provisioning\" --privileges=\"Stage User Provisioning\" Role name: System Provisioning Description: Responsible for provisioning stage users Privileges: Stage User Provisioning ---------------------------- Number of privileges added 1 ----------------------------",
"ipa user-add stage_user_admin --password First name: first_name Last name: last_name Password: Enter password again to verify:",
"ipa role-add-member \"System Provisioning\" --users=stage_user_admin Role name: System Provisioning Description: Responsible for provisioning stage users Member users: stage_user_admin Privileges: Stage User Provisioning ------------------------- Number of members added 1 -------------------------",
"ipa role-show \"System Provisioning\" -------------- 1 role matched -------------- Role name: System provisioning Description: Responsible for provisioning stage users Member users: stage_user_admin Privileges: Stage User Provisioning ---------------------------- Number of entries returned 1 ----------------------------",
"kinit stage_user_admin Password for [email protected]: Password expired. You must change it now. Enter new password: Enter it again:",
"klist Ticket cache: KEYRING:persistent:0:krb_ccache_xIlCQDW Default principal: [email protected] Valid starting Expires Service principal 02/25/2016 11:42:20 02/26/2016 11:42:20 krbtgt/EXAMPLE.COM",
"ipa stageuser-add stage_user First name: first_name Last name: last_name ipa: ERROR: stage_user: stage user not found",
"ipa stageuser-show stage_user ipa: ERROR: stage_user: stage user not found",
"kinit admin Password for [email protected]: ipa stageuser-show stage_user User login: stage_user First name: Stage Last name: User"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-users-permissions
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deploying_rhel_9_on_microsoft_azure/proc_providing-feedback-on-red-hat-documentation_cloud-content-azure
|
function::task_time_string
|
function::task_time_string Name function::task_time_string - Human readable string of task time usage Synopsis Arguments None Description Returns a human readable string showing the user and system time the current task has used up to now. For example " usr: 0m12.908s, sys: 1m6.851s " .
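As a quick illustration, the function can be called from any probe context. The following one-liner is only a sketch; it assumes the systemtap package and a matching kernel debuginfo are installed, and note that in a timer probe the "current task" is simply whichever task happens to be running when the timer fires:

# Print the name and accumulated time usage of the task current at the time the probe fires
stap -e 'probe timer.s(5) { printf("%s time usage: %s\n", execname(), task_time_string()) exit() }'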
|
[
"task_time_string:string()"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-time-string
|
Chapter 9. Image configuration resources
|
Chapter 9. Image configuration resources Use the following procedure to configure image registries. 9.1. Image controller configuration parameters The image.config.openshift.io/cluster resource holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . Its spec offers the following configuration parameters. Note Parameters such as DisableScheduledImport , MaxImagesBulkImportedPerRepository , MaxScheduledImportsPerMinute , ScheduledImageImportMinimumIntervalSeconds , InternalRegistryHostname are not configurable. Parameter Description allowedRegistriesForImport Limits the container image registries from which normal users can import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. Every element of this list contains a location of the registry specified by the registry domain name. domainName : Specifies a domain name for the registry. If the registry uses a port other than the standard 80 or 443, include the port in the domain name as well. insecure : Insecure indicates whether the registry is secure or insecure. By default, if not otherwise specified, the registry is assumed to be secure. additionalTrustedCA A reference to a config map containing additional CAs that should be trusted during image stream import , pod image pull , openshift-image-registry pullthrough , and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM-encoded certificate as the value, for each additional registry CA to trust. externalRegistryHostnames Provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in the publicDockerImageRepository field in image streams. The value must be in hostname[:port] format. registrySources Contains configuration that determines how the container runtime should treat individual registries when accessing images for builds and pods. For instance, whether or not to allow insecure access. It does not contain configuration for the internal cluster registry. insecureRegistries : Registries which do not have a valid TLS certificate or only support HTTP connections. blockedRegistries : Registries for which image pull and push actions are denied. All other registries are allowed. allowedRegistries : Registries for which image pull and push actions are allowed. All other registries are blocked. containerRuntimeSearchRegistries : Registries for which image pull and push actions are allowed using image short names. All other registries are blocked. Either blockedRegistries or allowedRegistries can be set, but not both. Warning When the allowedRegistries parameter is defined, all registries, including registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. When using the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added.
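Before changing any of these parameters, it can be useful to look at the current cluster-wide image configuration. A minimal check, assuming cluster-admin access, might look like the following:

# Show the full spec and status of the cluster image configuration
oc get image.config.openshift.io/cluster -o yaml

# Show only the registrySources configuration
oc get image.config.openshift.io/cluster -o jsonpath='{.spec.registrySources}{"\n"}'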
The status field of the image.config.openshift.io/cluster resource holds observed values from the cluster. Parameter Description internalRegistryHostname Set by the Image Registry Operator, which controls the internalRegistryHostname . It sets the hostname for the default internal image registry. The value must be in hostname[:port] format. For backward compatibility, you can still use the OPENSHIFT_DEFAULT_REGISTRY environment variable, but this setting overrides the environment variable. externalRegistryHostnames Set by the Image Registry Operator, provides the external hostnames for the image registry when it is exposed externally. The first value is used in the publicDockerImageRepository field in image streams. The values must be in hostname[:port] format. 9.2. Configuring image registry settings You can configure image registry settings by editing the image.config.openshift.io/cluster custom resource (CR). The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster CR for any changes to the registries and reboots the nodes when it detects changes. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Image : Holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . 2 allowedRegistriesForImport : Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. 3 additionalTrustedCA : A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust. 4 registrySources : Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. You can also define whether or not to allow access to insecure registries or to registries that allow image short names. This example uses the allowedRegistries parameter, which defines the registries that are allowed to be used. The insecure registry insecure.com is also allowed.
The registrySources parameter does not contain configuration for the internal cluster registry. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, because they are required by payload images within your environment. Do not add the registry.redhat.io and quay.io registries to the blockedRegistries list. Insecure external registries should be avoided to reduce possible security risks. To check that the changes are applied, list your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-j5cd0qt-f76d1-vfj5x-master-0 Ready master 98m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-master-1 Ready,SchedulingDisabled master 99m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-master-2 Ready master 98m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4 Ready worker 90m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz NotReady,SchedulingDisabled worker 90m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv Ready worker 90m v1.19.0+7070803 9.2.1. Adding specific registries You can add a list of registries that are permitted for image pull and push actions by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the allowedRegistries parameter, the container runtime searches only those registries. Registries not in the list are blocked. Warning When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. Procedure Edit the image.config.openshift.io/cluster CR: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an allowed list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 registrySources : Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 allowedRegistries : Registries to use for image pull and push actions. All other registries are blocked.
Note Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, the allowed registries list is used to update the image signature policy in the /host/etc/containers/policy.json file on each node. To check that the registries have been added to the policy file, use the following command on a node: USD cat /host/etc/containers/policy.json The following policy indicates that only images from the example.com, quay.io, and registry.redhat.io registries are permitted for image pulls and pushes: Example 9.1. Example image signature policy file { "default":[ { "type":"reject" } ], "transports":{ "atomic":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker-daemon":{ "":[ { "type":"insecureAcceptAnything" } ] } } } Note If your cluster uses the registrySources.insecureRegistries parameter, ensure that any insecure registries are included in the allowed list. For example: spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000 9.2.2. Blocking specific registries You can block any registry by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the blockedRegistries parameter, the container runtime does not search those registries. All other registries are allowed. Warning To prevent pod failure, do not add the registry.redhat.io and quay.io registries to the blockedRegistries list, as they are required by payload images within your environment. 
Procedure Edit the image.config.openshift.io/cluster CR: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with a blocked list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 registrySources : Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify registries that should not be used for image pull and push actions. All other registries are allowed. Note Either the blockedRegistries registry or the allowedRegistries registry can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, changes to the blocked registries appear in the /etc/containers/registries.conf file on each node. To check that the registries have been added to the policy file, use the following command on a node: USD cat /host/etc/containers/registries.conf The following example indicates that images from the untrusted.com registry are prevented for image pulls and pushes: Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "untrusted.com" blocked = true 9.2.3. Allowing insecure registries You can add insecure registries by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. Registries that do not use valid SSL certificates or do not require HTTPS connections are considered insecure. Warning Insecure external registries should be avoided to reduce possible security risks. Procedure Edit the image.config.openshift.io/cluster CR: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an insecure registries list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 registrySources : Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify an insecure registry. 3 Ensure that any insecure registries are included in the allowedRegistries list. 
Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster CR for any changes to the registries, then drains and uncordons the nodes when it detects changes. After the nodes return to the Ready state, changes to the insecure and blocked registries appear in the /etc/containers/registries.conf file on each node. To check that the registries have been added to the policy file, use the following command on a node: USD cat /host/etc/containers/registries.conf The following example indicates that the insecure.com registry is insecure and is allowed for image pulls and pushes. Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "insecure.com" insecure = true 9.2.4. Adding registries that allow image short names You can add registries to search for an image short name by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. An image short name enables you to search for images without including the fully qualified domain name in the pull spec. For example, you could use rhel7/etcd instead of registry.access.redhat.com/rhel7/etcd . You might use short names in situations where using the full path is not practical. For example, if your cluster references multiple internal registries whose DNS changes frequently, you would need to update the fully qualified domain names in your pull specs with each change. In this case, using an image short name might be beneficial. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the containerRuntimeSearchRegistries parameter, when pulling an image with a short name, the container runtime searches those registries. Warning Using image short names with public registries is strongly discouraged. You should use image short names with only internal or private registries. If you list public registries under the containerRuntimeSearchRegistries parameter, you expose your credentials to all the registries on the list and you risk network and registry attacks. You should always use fully-qualified image names with public registries. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, if the containerRuntimeSearchRegistries parameter is added, the MCO creates a file in the /etc/containers/registries.conf.d directory on each node with the listed registries. The file overrides the default list of unqualified search registries in the /host/etc/containers/registries.conf file. There is no way to fall back to the default list of unqualified search registries.
The containerRuntimeSearchRegistries parameter works only with the Podman and CRI-O container engines. The registries in the list can be used only in pod specs, not in builds and image streams. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 ... status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Specify registries to use with image short names. You should use image short names with only internal or private registries to reduce possible security risks. 2 Ensure that any registries listed under containerRuntimeSearchRegistries are included in the allowedRegistries list. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. If you use this parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. To check that the registries have been added, when a node returns to the Ready state, use the following command on the node: USD cat /host/etc/containers/registries.conf.d/01-image-searchRegistries.conf Example output unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io'] 9.2.5. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the base64-encoded certificate is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. 
To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 9.2.6. Configuring image registry repository mirroring Setting up container registry repository mirroring enables you to do the following: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. The attributes of repository mirroring in OpenShift Container Platform include: Image pulls are resilient to registry downtimes. Clusters in restricted networks can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a datacenter that is in a restricted network. After OpenShift Container Platform installation: Even if you don't configure mirroring during OpenShift Container Platform installation, you can do so later using the ImageContentSourcePolicy object. The following procedure provides a post-installation mirror configuration, where you create an ImageContentSourcePolicy object that identifies: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. Note You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy object. You cannot add a pull secret to a project. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source directory to the mirrored repository. 
For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy \ docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi8/ubi-minimal image from registry.access.redhat.com . After you create the registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Log in to your OpenShift Container Platform cluster. Create an ImageContentSourcePolicy file (for example, registryrepomirror.yaml ), replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 source: registry.access.redhat.com/ubi8/ubi-minimal 2 - mirrors: - example.com/example/ubi-minimal source: registry.access.redhat.com/ubi8/ubi-minimal - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 3 1 Indicates the name of the image registry and repository. 2 Indicates the registry and repository containing the content that is mirrored. 3 You can configure a namespace inside a registry to use any image in that namespace. If you use a registry domain as a source, the ImageContentSourcePolicy resource is applied to all repositories from the registry. Create the new ImageContentSourcePolicy object: USD oc create -f registryrepomirror.yaml After the ImageContentSourcePolicy object is created, the new settings are deployed to each node and the cluster starts using the mirrored repository for requests to the source repository. To check that the mirrored configuration settings, are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.20.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.20.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.20.0 ip-10-0-147-35.ec2.internal Ready,SchedulingDisabled worker 7m v1.20.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.20.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.20.0 You can see that scheduling on each worker node is disabled as the change is being applied. Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Access the node's files: sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] location = "registry.access.redhat.com/ubi8/" insecure = false blocked = false mirror-by-digest-only = true prefix = "" [[registry.mirror]] location = "example.io/example/ubi8-minimal" insecure = false [[registry.mirror]] location = "example.com/example/ubi8-minimal" insecure = false Pull an image digest to the node from the source and check if it is resolved by the mirror. ImageContentSourcePolicy objects support image digests only, not image tags. 
sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format. Additional resources For more information about global pull secrets, see Updating the global cluster pull secret .
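When troubleshooting, it can also help to confirm that the mirror configuration object itself exists and contains the expected source and mirror pairs before inspecting individual nodes. A minimal check, assuming the ubi8repo example above:

# List all ImageContentSourcePolicy objects in the cluster
oc get imagecontentsourcepolicy

# Show the repository digest mirrors defined by the example policy
oc describe imagecontentsourcepolicy ubi8repo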
|
[
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-j5cd0qt-f76d1-vfj5x-master-0 Ready master 98m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-master-1 Ready,SchedulingDisabled master 99m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-master-2 Ready master 98m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4 Ready worker 90m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz NotReady,SchedulingDisabled worker 90m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv Ready worker 90m v1.19.0+7070803",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/policy.json",
"{ \"default\":[ { \"type\":\"reject\" } ], \"transports\":{ \"atomic\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker-daemon\":{ \"\":[ { \"type\":\"insecureAcceptAnything\" } ] } } }",
"spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"untrusted.com\" blocked = true",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"insecure.com\" insecure = true",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/registries.conf.d/01-image-searchRegistries.conf",
"unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 source: registry.access.redhat.com/ubi8/ubi-minimal 2 - mirrors: - example.com/example/ubi-minimal source: registry.access.redhat.com/ubi8/ubi-minimal - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 3",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.20.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.20.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.20.0 ip-10-0-147-35.ec2.internal Ready,SchedulingDisabled worker 7m v1.20.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.20.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.20.0",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] location = \"registry.access.redhat.com/ubi8/\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"example.io/example/ubi8-minimal\" insecure = false [[registry.mirror]] location = \"example.com/example/ubi8-minimal\" insecure = false",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/images/image-configuration
|
Chapter 6. Troubleshooting
|
Chapter 6. Troubleshooting 6.1. About Troubleshooting Amazon EC2 EC2 provides an Alarm Status for each instance, indicating severe instance malfunction; however, the absence of such an alarm is no guarantee that the instance has started correctly and that services are running properly. It is possible to use Amazon CloudWatch with its custom metric functionality to monitor instance services' health, but use of an enterprise management solution is recommended. 6.2. Diagnostic Information If a problem is detected by the JBoss Operations Network, Amazon CloudWatch, or manual inspection, common sources of diagnostic information are: /var/log contains all the logs collected from machine startup, JBoss EAP, httpd, and most other services. JBoss EAP log files can be found in /opt/rh/eap7/root/usr/share/wildfly/ . Access to these files is only available using an SSH session. See Getting Started with Amazon EC2 Linux Instances for more information about how to configure and establish an SSH session with an Amazon EC2 instance.
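For a quick look at these logs without opening an interactive shell, the commands can be run over SSH directly. The following is only a sketch; it assumes the default ec2-user account, a key pair named my-ec2-key.pem, and an EAP instance running in standalone mode (the exact log file name under the installation directory can differ in your configuration):

# Tail the most recent system messages collected under /var/log
ssh -i ~/.ssh/my-ec2-key.pem ec2-user@<instance-public-dns> 'sudo tail -n 50 /var/log/messages'

# Tail the JBoss EAP server log (path assumes the default standalone configuration)
ssh -i ~/.ssh/my-ec2-key.pem ec2-user@<instance-public-dns> 'sudo tail -n 100 /opt/rh/eap7/root/usr/share/wildfly/standalone/log/server.log'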
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/deploying_jboss_eap_on_amazon_web_services/troubleshooting
|
Chapter 2. Troubleshooting a cluster
|
Chapter 2. Troubleshooting a cluster To begin troubleshooting a MicroShift cluster, first access the cluster status. 2.1. Checking the status of a cluster You can check the status of a MicroShift cluster or see active pods. Given in the following procedure are three different commands you can use to check cluster status. You can choose to run one, two, or all commands to help you get the information you need to troubleshoot the cluster. Procedure Check the system status, which returns the cluster status, by running the following command: USD sudo systemctl status microshift If MicroShift fails to start, this command returns the logs from the run. Example healthy output ● microshift.service - MicroShift Loaded: loaded (/usr/lib/systemd/system/microshift.service; enabled; preset: disabled) Active: active (running) since <day> <date> 12:39:06 UTC; 47min ago Main PID: 20926 (microshift) Tasks: 14 (limit: 48063) Memory: 542.9M CPU: 2min 41.185s CGroup: /system.slice/microshift.service └─20926 microshift run <Month-Day> 13:23:06 i-06166fbb376f14a8b.<hostname> microshift[20926]: kube-apiserver I0528 13:23:06.876001 20926 controll> <Month-Day> 13:23:06 i-06166fbb376f14a8b.<hostname> microshift[20926]: kube-apiserver I0528 13:23:06.876574 20926 controll> # ... Optional: Get comprehensive logs by running the following command: USD sudo journalctl -u microshift Note The default configuration of the systemd journal service stores data in a volatile directory. To persist system logs across system starts and restarts, enable log persistence and set limits on the maximum journal data size. Optional: If MicroShift is running, check the status of active pods by entering the following command: USD oc get pods -A Example output NAMESPACE NAME READY STATUS RESTARTS AGE default i-06166fbb376f14a8bus-west-2computeinternal-debug-qtwcr 1/1 Running 0 46m kube-system csi-snapshot-controller-5c6586d546-lprv4 1/1 Running 0 51m openshift-dns dns-default-45jl7 2/2 Running 0 50m openshift-dns node-resolver-7wmzf 1/1 Running 0 51m openshift-ingress router-default-78b86fbf9d-qvj9s 1/1 Running 0 51m openshift-ovn-kubernetes ovnkube-master-5rfhh 4/4 Running 0 51m openshift-ovn-kubernetes ovnkube-node-gcnt6 1/1 Running 0 51m openshift-service-ca service-ca-bf5b7c9f8-pn6rk 1/1 Running 0 51m openshift-storage topolvm-controller-549f7fbdd5-7vrmv 5/5 Running 0 51m openshift-storage topolvm-node-rht2m 3/3 Running 0 50m Note This example output shows basic MicroShift. If you have installed optional RPMs, the status of pods running those services is also expected to be shown in your output.
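As a sketch of the log persistence mentioned in the note above, one common approach is a journald drop-in file; the file name and the size limit shown here are arbitrary example values:

# Make the journal persistent across reboots and cap its disk usage
sudo mkdir -p /etc/systemd/journald.conf.d
sudo tee /etc/systemd/journald.conf.d/microshift.conf > /dev/null <<'EOF'
[Journal]
Storage=persistent
SystemMaxUse=1G
EOF

# Restart journald so the new settings take effect
sudo systemctl restart systemd-journald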
|
[
"sudo systemctl status microshift",
"● microshift.service - MicroShift Loaded: loaded (/usr/lib/systemd/system/microshift.service; enabled; preset: disabled) Active: active (running) since <day> <date> 12:39:06 UTC; 47min ago Main PID: 20926 (microshift) Tasks: 14 (limit: 48063) Memory: 542.9M CPU: 2min 41.185s CGroup: /system.slice/microshift.service └─20926 microshift run <Month-Day> 13:23:06 i-06166fbb376f14a8b.<hostname> microshift[20926]: kube-apiserver I0528 13:23:06.876001 20926 controll> <Month-Day> 13:23:06 i-06166fbb376f14a8b.<hostname> microshift[20926]: kube-apiserver I0528 13:23:06.876574 20926 controll>",
"sudo journalctl -u microshift",
"oc get pods -A",
"NAMESPACE NAME READY STATUS RESTARTS AGE default i-06166fbb376f14a8bus-west-2computeinternal-debug-qtwcr 1/1 Running 0 46m kube-system csi-snapshot-controller-5c6586d546-lprv4 1/1 Running 0 51m openshift-dns dns-default-45jl7 2/2 Running 0 50m openshift-dns node-resolver-7wmzf 1/1 Running 0 51m openshift-ingress router-default-78b86fbf9d-qvj9s 1/1 Running 0 51m openshift-ovn-kubernetes ovnkube-master-5rfhh 4/4 Running 0 51m openshift-ovn-kubernetes ovnkube-node-gcnt6 1/1 Running 0 51m openshift-service-ca service-ca-bf5b7c9f8-pn6rk 1/1 Running 0 51m openshift-storage topolvm-controller-549f7fbdd5-7vrmv 5/5 Running 0 51m openshift-storage topolvm-node-rht2m 3/3 Running 0 50m"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/troubleshooting/microshift-troubleshoot-cluster
|
Chapter 6. Deploy the edge without storage
|
Chapter 6. Deploy the edge without storage You can deploy a distributed compute node (DCN) cluster without block storage at edge sites if you use the Object Storage service (swift) as a back end for the Image service (glance) at the central location. If you deploy a site without block storage, you cannot update it later to have block storage. Use the compute role when deploying the edge site without storage. Important The following procedure uses lvm as the back end for the Block Storage service (cinder), which is not supported for production. You must deploy a certified block storage solution as a back end for the Block Storage service. 6.1. Architecture of a DCN edge site without storage To deploy this architecture, use the Compute role. Without block storage at the edge The Object Storage (swift) service at the control plane is used as an Image (glance) service backend. Multi-backend image service is not available. Images are cached locally at edge sites in Nova. For more information see Chapter 11, Precaching glance images into nova . The instances are stored locally on the Compute nodes. Volume services such as Block Storage (cinder) are not available at edge sites. Important If you do not deploy the central location with Red Hat Ceph storage, you will not have the option of deploying an edge site with storage at a later time. For more information about deploying without block storage at the edge, see Section 6.2, "Deploying edge nodes without storage" . 6.2. Deploying edge nodes without storage When you deploy Compute nodes at an edge site, you use the central location as the control plane. You can add a new DCN stack to your deployment and reuse the configuration files from the central location to create new environment files. Prerequisites You must create the network_data.yaml file specific to your environment. You can find sample files in /usr/share/openstack-tripleo-heat-templates/network-data-samples . You must create an overcloud-baremetal-deploy.yaml file specific to your environment. For more information see Provisioning bare metal nodes for the overcloud . You must upload images to the central location before copying them to edge sites; a copy of each image must exist in the Image service (glance) at the central location. You must use the RBD storage driver for the Image, Compute, and Block Storage services. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate an environment file ~/dcn0/dcn0-images-env.yaml: Generate a roles file for the edge location. Generate roles for the edge location using roles appropriate for your environment: If you are using ML2/OVS for networking overlay, you must edit the Compute role to include the NeutronDhcpAgent and NeutronMetadataAgent services: Create a role file for the Compute role: Edit the /home/stack/dcn0/dcn0_roles.yaml file to include the NeutronDhcpAgent and NeutronMetadataAgent services: For more information, see Preparing for a routed provider network . Provision networks for the overcloud. This command takes a definition file for overcloud networks as input. You must use the output file in your command to deploy the overcloud: Important If your network_data.yaml template includes additional networks which were not included when you provisioned networks for the central location, then you must re-run the network provisioning command on the central location: Provision bare metal instances. This command takes a definition file for bare metal nodes as input.
You must use the output file in your command to deploy the overcloud: Configure the naming conventions for your site in the site-name.yaml environment file. Deploy the stack for the dcn0 edge site: 6.3. Excluding specific image types at the edge By default, Compute nodes advertise all image formats that they support. If your Compute nodes do not use Ceph storage, you can exclude RAW images from the image format advertisement. The RAW image format consumes more network bandwidth and local storage than QCOW2 images and is inefficient when used at edge sites without Ceph storage. Use the NovaImageTypeExcludeList parameter to exclude specific image formats: Important Do not use this parameter at edge sites with Ceph, because Ceph requires RAW images. Note Compute nodes that do not advertise RAW images cannot host instances created from RAW images. This can affect snapshot-redeploy and shelving. Prerequisites Red Hat OpenStack Platform director is installed The central location is installed Compute nodes are available for a DCN deployment Procedure Log in to the undercloud host as the stack user. Source the stackrc credentials file: Include the NovaImageTypeExcludeList parameter in one of your custom environment files: Include the environment file that contains the NovaImageTypeExcludeList parameter in the overcloud deployment command, along with any other environment files relevant to your deployment:
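Putting these parameters together, a custom environment file for a hypothetical dcn1 edge site without Ceph could combine the availability zone settings with the RAW image exclusion. This is only a sketch based on the parameters shown in this chapter; adjust the site name and paths for your environment, and pass the file to openstack overcloud deploy with -e together with the other environment files for the site:

# Write a combined custom environment file for a hypothetical dcn1 site
mkdir -p /home/stack/dcn1
cat > /home/stack/dcn1/dcn1-no-raw.yaml <<'EOF'
parameter_defaults:
  NovaComputeAvailabilityZone: dcn1
  NovaCrossAZAttach: false
  NovaImageTypeExcludeList:
    - raw
EOF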
|
[
"[stack@director ~]USD source ~/stackrc",
"sudo[e] openstack tripleo container image prepare -e containers.yaml --output-env-file ~/dcn0/dcn0-images-env.yaml",
"(undercloud)USD openstack overcloud roles generate Compute -o /home/stack/dcn0/dcn0_roles.yaml",
"openstack overcloud roles generate Compute -o /home/stack/dcn0/dcn0_roles.yaml",
"- OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronBgpVpnBagpipe + - OS::TripleO::Services::NeutronDhcpAgent + - OS::TripleO::Services::NeutronMetadataAgent - OS::TripleO::Services::NeutronLinuxbridgeAgent - OS::TripleO::Services::NeutronVppAgent - OS::TripleO::Services::NovaAZConfig - OS::TripleO::Services::NovaCompute",
"(undercloud)USD openstack overcloud network provision --output /home/stack/dcn0/overcloud-networks-deployed.yaml /home/stack/dcn0/network_data.yaml",
"(undercloud)USD openstack overcloud network provision --output /home/stack/central/overcloud-networks-deployed.yaml /home/stack/central/network_data.yaml",
"(undercloud)USD openstack overcloud node provision --stack dcn0 --network-config -o /home/stack/dcn0/deployed_metal.yaml ~/overcloud-baremetal-deploy.yaml",
"parameter_defaults: NovaComputeAvailabilityZone: dcn0 ControllerExtraConfig: nova::availability_zone::default_schedule_zone: dcn0 NovaCrossAZAttach: false",
"openstack overcloud deploy --deployed-server --stack dcn0 --templates /usr/share/openstack-tripleo-heat-templates/ -r /home/stack/dcn0/dcn0_roles.yaml -n /home/stack/network_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e /home/stack/overcloud-deploy/central/central-export.yaml -e /home/stack/dcn0/overcloud-networks-deployed.yaml -e /home/stack/dcn0/overcloud-vip-deployed.yaml -e /home/stack/dcn0/deployed_metal.yaml",
"source ~/stackrc",
"parameter_defaults: NovaImageTypeExcludeList: - raw",
"openstack overcloud deploy --templates -n network_data.yaml -r roles_data.yaml -e <environment_files> -e <new_environment_file>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_distributed_compute_node_dcn_architecture/assembly_deploy-edge-without-storage
|
Chapter 2. Restoring Central database by using the roxctl CLI
|
Chapter 2. Restoring Central database by using the roxctl CLI You can use the roxctl CLI to restore Red Hat Advanced Cluster Security for Kubernetes (RHACS) by using the restore command. This command requires an API token or your administrator password. 2.1. Restoring by using an API token You can restore the entire database of RHACS by using an API token. Prerequisites You have a RHACS backup file. You have an API token with the administrator role. You have installed the roxctl CLI. Procedure Set the ROX_API_TOKEN and the ROX_ENDPOINT environment variables by running the following commands: USD export ROX_API_TOKEN=<api_token> USD export ROX_ENDPOINT=<address>:<port_number> Restore the Central database by running the following command: USD roxctl central db restore <backup_file> 1 1 For <backup_file> , specify the name of the backup file that you want to restore. 2.2. Restoring by using the administrator password You can restore the entire database of RHACS by using your administrator password. Prerequisites You have a RHACS backup file. You have the administrator password. You have installed the roxctl CLI. Procedure Set the ROX_ENDPOINT environment variable by running the following command: USD export ROX_ENDPOINT=<address>:<port_number> Restore the Central database by running the following command: USD roxctl -p <admin_password> \ 1 central db restore <backup_file> 2 1 For <admin_password> , specify the administrator password. 2 For <backup_file> , specify the name of the backup file that you want to restore. 2.3. Resuming the restore operation If your connection is interrupted during a restore operation or you need to go offline, you can resume the restore operation. If you do not have access to the machine running the resume operation, you can use the roxctl central db restore status command to check the status of an ongoing restore operation. If the connection is interrupted, the roxctl CLI automatically attempts to restore a task as soon as the connection is available again. The automatic connection retries depend on the duration specified by the timeout option. Use the --timeout option to specify the time in seconds, minutes or hours after which the roxctl CLI stops trying to resume a restore operation. If the option is not specified, the default timeout is 10 minutes. If a restore operation gets stuck or you want to cancel it, use the roxctl central db restore cancel command to cancel a running restore operation. If a restore operation is stuck, you have canceled it, or the time has expired, you can resume the restore by running the original command again. Important During interruptions, RHACS caches an ongoing restore operation for 24 hours. You can resume this operation by executing the original restore command again. The --timeout option only controls the client-side connection retries and has no effect on the server-side restore cache of 24 hours. You cannot resume restores across Central pod restarts. If a restore operation is interrupted, you must restart it within 24 hours and before restarting Central, otherwise RHACS cancels the restore operation.
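A minimal sequence for checking on, and if necessary cancelling, an interrupted restore might look like the following. It assumes ROX_ENDPOINT (and, if you use token authentication, ROX_API_TOKEN) are already exported as shown above; the one-hour duration format passed to --timeout is an assumption:

# Check the status of an ongoing restore operation
roxctl central db restore status

# Cancel a restore operation that cannot be resumed
roxctl central db restore cancel

# Re-run the original restore, allowing up to one hour of connection retries
roxctl central db restore --timeout 1h <backup_file>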
|
[
"export ROX_API_TOKEN=<api_token>",
"export ROX_ENDPOINT=<address>:<port_number>",
"roxctl central db restore <backup_file> 1",
"export ROX_ENDPOINT=<address>:<port_number>",
"roxctl -p <admin_password> \\ 1 central db restore <backup_file> 2"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/troubleshooting_central/restoring-central-database-by-using-the-roxctl-cli
|
Appendix A. Using your Red Hat subscription
|
Appendix A. Using your Red Hat subscription Red Hat Connectivity Link is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Managing your subscriptions Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. In the menu bar, click Subscriptions to view and manage your subscriptions. Revised on 2025-03-12 11:42:41 UTC
| null |
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/release_notes_for_connectivity_link_1.0/using_your_subscription
|
8.118. mod_auth_kerb
|
8.118. mod_auth_kerb 8.118.1. RHBA-2013:0860 - mod_auth_kerb bug fix update Updated mod_auth_kerb packages that fix one bug are now available for Red Hat Enterprise Linux 6. The mod_auth_kerb package provides a module for the Apache HTTP Server designed to provide Kerberos authentication over HTTP. The module supports the Negotiate authentication method, which performs full Kerberos authentication based on ticket exchanges. Bug Fix BZ# 867153 Previously, when the KrbLocalUserMapping directive was enabled, mod_auth_kerb did not translate a principal name properly if the local name was of a higher length. Consequently, the Apache server returned the HTTP 500 error in such a scenario. A patch has been provided to address this issue and the module now correctly translates account names longer than their counterpart principal names. Users of mod_auth_kerb are advised to upgrade to these updated packages, which fix this bug.
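For context, KrbLocalUserMapping is enabled in the Apache configuration of the protected location. The following sketch shows a typical block; the location, realm, and keytab path are placeholders for your environment:

# Write an example mod_auth_kerb configuration for a protected location
sudo tee /etc/httpd/conf.d/auth_kerb_example.conf > /dev/null <<'EOF'
<Location /secured>
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate On
    KrbMethodK5Passwd Off
    KrbAuthRealms EXAMPLE.COM
    Krb5KeyTab /etc/httpd/conf/httpd.keytab
    KrbLocalUserMapping On
    Require valid-user
</Location>
EOF

# Reload the web server to apply the configuration
sudo service httpd restart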
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/mod_auth_kerb
|
Chapter 4. Example workflows for S2I on OpenShift
|
Chapter 4. Example workflows for S2I on OpenShift 4.1. Remote debugging Java application for OpenShift image The example in the procedure shows the remote debugging of a Java application deployed on OpenShift by using the S2I for OpenShift image. You can enable this capability by setting the values of the environment variables JAVA_DEBUG and JAVA_DEBUG_PORT to true and 9009 , respectively. Note If the JAVA_DEBUG variable is set to true and no value is provided for the JAVA_DEBUG_PORT variable, JAVA_DEBUG_PORT is set to 5005 by default. Prepare for deployment Procedure Log in to the OpenShift instance by running the following command and providing your credentials: Create a new project: Deployment You can enable remote debugging for your new and existing applications. Enabling remote debugging for a new application Procedure Create a new application by using the S2I for OpenShift image and example Java source code. Ensure that you set the JAVA_DEBUG and the JAVA_DEBUG_PORT environment variables before creating your application: Proceed to Connect local debugging port to a port on the pod . Enabling remote debugging for an existing application Procedure Switch to the appropriate OpenShift project: Retrieve the name of the deploymentconfig : Edit the deploymentconfig and add the JAVA_DEBUG=true and JAVA_DEBUG_PORT=9009 environment variables to the container definitions under the .spec.template.spec.containers path (objects of type Container ): Note The oc edit command launches an editor in your terminal. You can change the editor that is launched by defining your environment's EDITOR variable. Proceed to Connect local debugging port to a port on the pod . Post-deployment Connect local debugging port to a port on the pod Procedure Get the name of the pod running the application (Status Running ): The example output shows openshift-quickstarts-1-1uymm as the pod name. Use the OpenShift or Kubernetes port forwarding feature to listen on a local port and forward to a port on the OpenShift pod. <running-pod> is the value of the NAME field for the pod with Status Running from the command output: Note In the example, 5005 is the port number on the local system, while 9009 is the remote port number of the OpenShift pod running the S2I for OpenShift image. Therefore, future debugging connections made to local port 5005 are forwarded to port 9009 of the OpenShift pod, running the Java Virtual Machine (JVM). Important The command might prevent you from typing further in the terminal. In this case, launch a new terminal to perform the next steps. Attach debugger to an application Procedure Attach the debugger on the local system to the remote JVM running on the S2I for OpenShift image: Note After the debugging connection from the local debugger to the remote OpenShift pod is initiated, an entry similar to Handling connection for 5005 is shown in the console where the oc port-forward command was issued. Debug the application: Additional resources For more information on OpenShift common object reference, see the OpenShift Common Object Reference, section Container . For more information on connecting the IDE debugger of the Red Hat JBoss Developer Studio to the OpenShift pod running the S2I for OpenShift image, see Configuring and Connecting the IDE Debugger . 4.2. Running flat classpath JAR on source-to-image for OpenShift The example in the procedure describes the process of running flat classpath Java applications on S2I for OpenShift.
Prepare for Deployment Procedure Log in to the OpenShift instance by providing your credentials: Create a new project: Deployment Procedure Create a new application using the S2I for OpenShift image and Java source code: Post-deployment Procedure Get the service name: Expose the service as a route to be able to use it from the browser: Get the route: Access the application in your browser by using the URL (value of HOST/PORT field from command output).
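For the remote debugging workflow in section 4.1, the deploymentconfig edit adds the two variables to each container entry under .spec.template.spec.containers . A minimal sketch of the resulting fragment is shown below; the container name openshift-quickstarts follows the example output above and may differ in your deployment:

spec:
  template:
    spec:
      containers:
      - name: openshift-quickstarts
        env:
        - name: JAVA_DEBUG
          value: "true"
        - name: JAVA_DEBUG_PORT
          value: "9009"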
|
[
"oc login",
"oc new-project js2i-remote-debug-demo",
"oc new-app --context-dir=getting-started --name=quarkus-quickstart 'registry.access.redhat.com/ubi8/openjdk-11~https://github.com/quarkusio/quarkus-quickstarts.git#2.12.1.Final' -e JAVA_DEBUG=true -e JAVA_DEBUG_PORT=9009",
"oc project js2i-remote-debug-demo",
"oc get dc -o name deploymentconfig/openshift-quickstarts",
"oc edit dc/openshift-quickstarts",
"oc get pods NAME READY STATUS RESTARTS AGE openshift-quickstarts-1-1uymm 1/1 Running 0 3m openshift-quickstarts-1-build 0/1 Completed 0 6m",
"oc port-forward <running-pod> 5005:9009 Forwarding from 127.0.0.1:5005 -> 9009 Forwarding from [::1]:5005 -> 9009",
"jdb -attach 5005 Set uncaught java.lang.Throwable Set deferred uncaught java.lang.Throwable Initializing jdb >",
"jdb -attach 5005 Set uncaught java.lang.Throwable Set deferred uncaught java.lang.Throwable Initializing jdb > threads Group system: (java.lang.ref.ReferenceUSDReferenceHandler)0x79e Reference Handler cond. waiting (java.lang.ref.FinalizerUSDFinalizerThread)0x79f Finalizer cond. waiting (java.lang.Thread)0x7a0 Signal Dispatcher running Group main: (java.util.TimerThread)0x7a2 server-timer cond. waiting (org.jolokia.jvmagent.CleanupThread)0x7a3 Jolokia Agent Cleanup Thread cond. waiting (org.xnio.nio.WorkerThread)0x7a4 XNIO-1 I/O-1 running (org.xnio.nio.WorkerThread)0x7a5 XNIO-1 I/O-2 running (org.xnio.nio.WorkerThread)0x7a6 XNIO-1 I/O-3 running (org.xnio.nio.WorkerThread)0x7a7 XNIO-1 Accept running (java.lang.Thread)0x7a8 DestroyJavaVM running Group jolokia: (java.lang.Thread)0x7aa Thread-3 running >",
"oc login",
"oc new-project js2i-flatclasspath-demo",
"oc new-app --context-dir=getting-started --name=quarkus-quickstart 'registry.access.redhat.com/ubi8/openjdk-11~https://github.com/quarkusio/quarkus-quickstarts.git#2.12.1.Final'",
"oc get svc",
"oc expose svc/openshift-quickstarts --port=8080",
"oc get route"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_source-to-image_for_openshift_with_red_hat_build_of_openjdk_8/example-workflow-for-s2i-openshift
|
Chapter 9. Creating quick start tutorials in the web console
|
Chapter 9. Creating quick start tutorials in the web console If you are creating quick start tutorials for the OpenShift Container Platform web console, follow these guidelines to maintain a consistent user experience across all quick starts. 9.1. Understanding quick starts A quick start is a guided tutorial with user tasks. In the web console, you can access quick starts under the Help menu. They are especially useful for getting oriented with an application, Operator, or other product offering. A quick start primarily consists of tasks and steps. Each task has multiple steps, and each quick start has multiple tasks. For example: Task 1 Step 1 Step 2 Step 3 Task 2 Step 1 Step 2 Step 3 Task 3 Step 1 Step 2 Step 3 9.2. Quick start user workflow When you interact with an existing quick start tutorial, this is the expected workflow experience: In the Administrator or Developer perspective, click the Help icon and select Quick Starts . Click a quick start card. In the panel that appears, click Start . Complete the on-screen instructions, then click Next . In the Check your work module that appears, answer the question to confirm that you successfully completed the task. If you select Yes , click Next to continue to the next task. If you select No , repeat the task instructions and check your work again. Repeat steps 1 through 6 above to complete the remaining tasks in the quick start. After completing the final task, click Close to close the quick start. 9.3. Quick start components A quick start consists of the following sections: Card : The catalog tile that provides the basic information of the quick start, including title, description, time commitment, and completion status Introduction : A brief overview of the goal and tasks of the quick start Task headings : Hyperlinked titles for each task in the quick start Check your work module : A module for a user to confirm that they completed a task successfully before advancing to the next task in the quick start Hints : An animation to help users identify specific areas of the product Next and back buttons : Buttons for navigating the steps and modules within each task of a quick start Final screen buttons : Buttons for closing the quick start, going back to tasks within the quick start, and viewing all quick starts The main content area of a quick start includes the following sections: Card copy Introduction Task steps Modals and in-app messaging Check your work module 9.4. Contributing quick starts OpenShift Container Platform introduces the quick start custom resource, which is defined by a ConsoleQuickStart object. Operators and administrators can use this resource to contribute quick starts to the cluster. Prerequisites You must have cluster administrator privileges. Procedure To create a new quick start, run: USD oc get -o yaml consolequickstart spring-with-s2i > my-quick-start.yaml Run: USD oc create -f my-quick-start.yaml Update the YAML file using the guidance outlined in this documentation. Save your edits. 9.4.1. Viewing the quick start API documentation Procedure To see the quick start API documentation, run: USD oc explain consolequickstarts Run oc explain -h for more information about oc explain usage. 9.4.2. Mapping the elements in the quick start to the quick start CR This section helps you visually map parts of the quick start custom resource (CR) with where they appear in the quick start within the web console. 9.4.2.1. conclusion element Viewing the conclusion element in the YAML file ... summary: failed: Try the steps again. 
success: Your Spring application is running. title: Run the Spring application conclusion: >- Your Spring application is deployed and ready. 1 1 conclusion text Viewing the conclusion element in the web console The conclusion appears in the last section of the quick start. 9.4.2.2. description element Viewing the description element in the YAML file apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 1 ... 1 description text Viewing the description element in the web console The description appears on the introductory tile of the quick start on the Quick Starts page. 9.4.2.3. displayName element Viewing the displayName element in the YAML file apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring 1 durationMinutes: 10 1 displayName text. Viewing the displayName element in the web console The display name appears on the introductory tile of the quick start on the Quick Starts page. 9.4.2.4. durationMinutes element Viewing the durationMinutes element in the YAML file apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 1 1 durationMinutes value, in minutes. This value defines how long the quick start should take to complete. Viewing the durationMinutes element in the web console The duration minutes element appears on the introductory tile of the quick start on the Quick Starts page. 9.4.2.5. icon element Viewing the icon element in the YAML file ... spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 
displayName: Get started with Spring durationMinutes: 10 icon: >- 1 data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGlkPSJMYXllcl8xIiBkYXRhLW5hbWU9IkxheWVyIDEiIHZpZXdCb3g9IjAgMCAxMDI0IDEwMjQiPjxkZWZzPjxzdHlsZT4uY2xzLTF7ZmlsbDojMTUzZDNjO30uY2xzLTJ7ZmlsbDojZDhkYTlkO30uY2xzLTN7ZmlsbDojNThjMGE4O30uY2xzLTR7ZmlsbDojZmZmO30uY2xzLTV7ZmlsbDojM2Q5MTkxO308L3N0eWxlPjwvZGVmcz48dGl0bGU+c25vd2Ryb3BfaWNvbl9yZ2JfZGVmYXVsdDwvdGl0bGU+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMTAxMi42OSw1OTNjLTExLjEyLTM4LjA3LTMxLTczLTU5LjIxLTEwMy44LTkuNS0xMS4zLTIzLjIxLTI4LjI5LTM5LjA2LTQ3Ljk0QzgzMy41MywzNDEsNzQ1LjM3LDIzNC4xOCw2NzQsMTY4Ljk0Yy01LTUuMjYtMTAuMjYtMTAuMzEtMTUuNjUtMTUuMDdhMjQ2LjQ5LDI0Ni40OSwwLDAsMC0zNi41NS0yNi44LDE4Mi41LDE4Mi41LDAsMCwwLTIwLjMtMTEuNzcsMjAxLjUzLDIwMS41MywwLDAsMC00My4xOS0xNUExNTUuMjQsMTU1LjI0LDAsMCwwLDUyOCw5NS4yYy02Ljc2LS42OC0xMS43NC0uODEtMTQuMzktLjgxaDBsLTEuNjIsMC0xLjYyLDBhMTc3LjMsMTc3LjMsMCwwLDAtMzEuNzcsMy4zNSwyMDguMjMsMjA4LjIzLDAsMCwwLTU2LjEyLDE3LjU2LDE4MSwxODEsMCwwLDAtMjAuMjcsMTEuNzUsMjQ3LjQzLDI0Ny40MywwLDAsMC0zNi41NywyNi44MUMzNjAuMjUsMTU4LjYyLDM1NSwxNjMuNjgsMzUwLDE2OWMtNzEuMzUsNjUuMjUtMTU5LjUsMTcyLTI0MC4zOSwyNzIuMjhDOTMuNzMsNDYwLjg4LDgwLDQ3Ny44Nyw3MC41Miw0ODkuMTcsNDIuMzUsNTIwLDIyLjQzLDU1NC45LDExLjMxLDU5MywuNzIsNjI5LjIyLTEuNzMsNjY3LjY5LDQsNzA3LjMxLDE1LDc4Mi40OSw1NS43OCw4NTkuMTIsMTE4LjkzLDkyMy4wOWEyMiwyMiwwLDAsMCwxNS41OSw2LjUyaDEuODNsMS44Ny0uMzJjODEuMDYtMTMuOTEsMTEwLTc5LjU3LDE0My40OC0xNTUuNiwzLjkxLTguODgsNy45NS0xOC4wNSwxMi4yLTI3LjQzcTUuNDIsOC41NCwxMS4zOSwxNi4yM2MzMS44NSw0MC45MSw3NS4xMiw2NC42NywxMzIuMzIsNzIuNjNsMTguOCwyLjYyLDQuOTUtMTguMzNjMTMuMjYtNDkuMDcsMzUuMy05MC44NSw1MC42NC0xMTYuMTksMTUuMzQsMjUuMzQsMzcuMzgsNjcuMTIsNTAuNjQsMTE2LjE5bDUsMTguMzMsMTguOC0yLjYyYzU3LjItOCwxMDAuNDctMzEuNzIsMTMyLjMyLTcyLjYzcTYtNy42OCwxMS4zOS0xNi4yM2M0LjI1LDkuMzgsOC4yOSwxOC41NSwxMi4yLDI3LjQzLDMzLjQ5LDc2LDYyLjQyLDE0MS42OSwxNDMuNDgsMTU1LjZsMS44MS4zMWgxLjg5YTIyLDIyLDAsMCwwLDE1LjU5LTYuNTJjNjMuMTUtNjQsMTAzLjk1LTE0MC42LDExNC44OS0yMTUuNzhDMTAyNS43Myw2NjcuNjksMTAyMy4yOCw2MjkuMjIsMTAxMi42OSw1OTNaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNMzY0LjE1LDE4NS4yM2MxNy44OS0xNi40LDM0LjctMzAuMTUsNDkuNzctNDAuMTFhMjEyLDIxMiwwLDAsMSw2NS45My0yNS43M0ExOTgsMTk4LDAsMCwxLDUxMiwxMTYuMjdhMTk2LjExLDE5Ni4xMSwwLDAsMSwzMiwzLjFjNC41LjkxLDkuMzYsMi4wNiwxNC41MywzLjUyLDYwLjQxLDIwLjQ4LDg0LjkyLDkxLjA1LTQ3LjQ0LDI0OC4wNi0yOC43NSwzNC4xMi0xNDAuNywxOTQuODQtMTg0LjY2LDI2OC40MmE2MzAuODYsNjMwLjg2LDAsMCwwLTMzLjIyLDU4LjMyQzI3Niw2NTUuMzQsMjY1LjQsNTk4LDI2NS40LDUyMC4yOSwyNjUuNCwzNDAuNjEsMzExLjY5LDI0MC43NCwzNjQuMTUsMTg1LjIzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTUyNy41NCwzODQuODNjODQuMDYtOTkuNywxMTYuMDYtMTc3LjI4LDk1LjIyLTIzMC43NCwxMS42Miw4LjY5LDI0LDE5LjIsMzcuMDYsMzEuMTMsNTIuNDgsNTUuNSw5OC43OCwxNTUuMzgsOTguNzgsMzM1LjA3LDAsNzcuNzEtMTAuNiwxMzUuMDUtMjcuNzcsMTc3LjRhNjI4LjczLDYyOC43MywwLDAsMC0zMy4yMy01OC4zMmMtMzktNjUuMjYtMTMxLjQ1LTE5OS0xNzEuOTMtMjUyLjI3QzUyNi4zMywzODYuMjksNTI3LDM4NS41Miw1MjcuNTQsMzg0LjgzWiIvPjxwYXRoIGNsYXNzPSJjbHMtNCIgZD0iTTEzNC41OCw5MDguMDdoLS4wNmEuMzkuMzksMCwwLDEtLjI3LS4xMWMtMTE5LjUyLTEyMS4wNy0xNTUtMjg3LjQtNDcuNTQtNDA0LjU4LDM0LjYzLTQxLjE0LDEyMC0xNTEuNiwyMDIuNzUtMjQyLjE5LTMuMTMsNy02LjEyLDE0LjI1LTguOTIsMjEuNjktMjQuMzQsNjQuNDUtMzYuNjcsMTQ0LjMyLTM2LjY3LDIzNy40MSwwLDU2LjUzLDUuNTgsMTA2LDE2LjU5LDE0Ny4xNEEzMDcuNDksMzA3LjQ5LDAsMCwwLDI4MC45MSw3MjNDMjM3LDgxNi44OCwyMTYuOTMsODkzLjkzLDEzNC41OCw5MDguMDdaIi8+PHBhdGggY2xhc3M9ImNscy01IiBkPSJNNTgzLjQzLDgxMy43OUM1NjAuMTgsNzI3LjcyLDUxMiw2NjQuMTUsNTEyLDY2NC4xNXMtNDguMTcsNjMuNTctNzEuNDMsMTQ5LjY0Yy00OC40NS02Ljc0LTEwMC45MS0yNy41Mi0xMzUuNjYtOTEuMThhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Ny03MS41NGwuMjEtLjMyLjE5LS4zM2MzOC02My42MywxMjYuNC0xO
TEuMzcsMTY3LjEyLTI0NS42Niw0MC43MSw1NC4yOCwxMjkuMSwxODIsMTY3LjEyLDI0NS42NmwuMTkuMzMuMjEuMzJhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Nyw3MS41NEM2ODQuMzQsNzg2LjI3LDYzMS44OCw4MDcuMDUsNTgzLjQzLDgxMy43OVoiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik04ODkuNzUsOTA4YS4zOS4zOSwwLDAsMS0uMjcuMTFoLS4wNkM4MDcuMDcsODkzLjkzLDc4Nyw4MTYuODgsNzQzLjA5LDcyM2EzMDcuNDksMzA3LjQ5LDAsMCwwLDIwLjQ1LTU1LjU0YzExLTQxLjExLDE2LjU5LTkwLjYxLDE2LjU5LTE0Ny4xNCwwLTkzLjA4LTEyLjMzLTE3My0zNi42Ni0yMzcuNHEtNC4yMi0xMS4xNi04LjkzLTIxLjdjODIuNzUsOTAuNTksMTY4LjEyLDIwMS4wNSwyMDIuNzUsMjQyLjE5QzEwNDQuNzksNjIwLjU2LDEwMDkuMjcsNzg2Ljg5LDg4OS43NSw5MDhaIi8+PC9zdmc+Cg== ... 1 The icon defined as a base64 value. Viewing the icon element in the web console The icon appears on the introductory tile of the quick start on the Quick Starts page. 9.4.2.6. introduction element Viewing the introduction element in the YAML file ... introduction: >- 1 **Spring** is a Java framework for building applications based on a distributed microservices architecture. - Spring enables easy packaging and configuration of Spring applications into a self-contained executable application which can be easily deployed as a container to OpenShift. - Spring applications can integrate OpenShift capabilities to provide a natural "Spring on OpenShift" developer experience for both existing and net-new Spring applications. For example: - Externalized configuration using Kubernetes ConfigMaps and integration with Spring Cloud Kubernetes - Service discovery using Kubernetes Services - Load balancing with Replication Controllers - Kubernetes health probes and integration with Spring Actuator - Metrics: Prometheus, Grafana, and integration with Spring Cloud Sleuth - Distributed tracing with Istio & Jaeger tracing - Developer tooling through Red Hat OpenShift and Red Hat CodeReady developer tooling to quickly scaffold new Spring projects, gain access to familiar Spring APIs in your favorite IDE, and deploy to Red Hat OpenShift ... 1 The introduction introduces the quick start and lists the tasks within it. Viewing the introduction element in the web console After clicking a quick start card, a side panel slides in that introduces the quick start and lists the tasks within it. 9.4.3. Adding a custom icon to a quick start A default icon is provided for all quick starts. You can provide your own custom icon. Procedure Find the .svg file that you want to use as your custom icon. Use an online tool to convert the text to base64 . In the YAML file, add icon: >- , then on the line include data:image/svg+xml;base64 followed by the output from the base64 conversion. For example: icon: >- data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHJvbGU9ImltZyIgdmlld. 9.4.4. Limiting access to a quick start Not all quick starts should be available for everyone. The accessReviewResources section of the YAML file provides the ability to limit access to the quick start. To only allow the user to access the quick start if they have the ability to create HelmChartRepository resources, use the following configuration: accessReviewResources: - group: helm.openshift.io resource: helmchartrepositories verb: create To only allow the user to access the quick start if they have the ability to list Operator groups and package manifests, thus ability to install Operators, use the following configuration: accessReviewResources: - group: operators.coreos.com resource: operatorgroups verb: list - group: packages.operators.coreos.com resource: packagemanifests verb: list 9.4.5. 
Linking to other quick starts Procedure In the nextQuickStart section of the YAML file, provide the name , not the displayName , of the quick start to which you want to link. For example: nextQuickStart: - add-healthchecks 9.4.6. Supported tags for quick starts Write your quick start content in markdown using these tags. The markdown is converted to HTML. Tag Description 'b', Defines bold text. 'img', Embeds an image. 'i', Defines italic text. 'strike', Defines strike-through text. 's', Defines smaller text 'del', Defines smaller text. 'em', Defines emphasized text. 'strong', Defines important text. 'a', Defines an anchor tag. 'p', Defines paragraph text. 'h1', Defines a level 1 heading. 'h2', Defines a level 2 heading. 'h3', Defines a level 3 heading. 'h4', Defines a level 4 heading. 'ul', Defines an unordered list. 'ol', Defines an ordered list. 'li', Defines a list item. 'code', Defines a text as code. 'pre', Defines a block of preformatted text. 'button', Defines a button in text. 9.4.7. Quick start highlighting markdown reference The highlighting, or hint, feature enables Quick Starts to contain a link that can highlight and animate a component of the web console. The markdown syntax contains: Bracketed link text The highlight keyword, followed by the ID of the element that you want to animate 9.4.7.1. Perspective switcher [Perspective switcher]{{highlight qs-perspective-switcher}} 9.4.7.2. Administrator perspective navigation links [Home]{{highlight qs-nav-home}} [Operators]{{highlight qs-nav-operators}} [Workloads]{{highlight qs-nav-workloads}} [Serverless]{{highlight qs-nav-serverless}} [Networking]{{highlight qs-nav-networking}} [Storage]{{highlight qs-nav-storage}} [Service catalog]{{highlight qs-nav-servicecatalog}} [Compute]{{highlight qs-nav-compute}} [User management]{{highlight qs-nav-usermanagement}} [Administration]{{highlight qs-nav-administration}} 9.4.7.3. Developer perspective navigation links [Add]{{highlight qs-nav-add}} [Topology]{{highlight qs-nav-topology}} [Search]{{highlight qs-nav-search}} [Project]{{highlight qs-nav-project}} [Helm]{{highlight qs-nav-helm}} 9.4.7.4. Common navigation links [Builds]{{highlight qs-nav-builds}} [Pipelines]{{highlight qs-nav-pipelines}} [Monitoring]{{highlight qs-nav-monitoring}} 9.4.7.5. Masthead links [CloudShell]{{highlight qs-masthead-cloudshell}} [Utility Menu]{{highlight qs-masthead-utilitymenu}} [User Menu]{{highlight qs-masthead-usermenu}} [Applications]{{highlight qs-masthead-applications}} [Import]{{highlight qs-masthead-import}} [Help]{{highlight qs-masthead-help}} [Notifications]{{highlight qs-masthead-notifications}} 9.4.8. Code snippet markdown reference You can execute a CLI code snippet when it is included in a quick start from the web console. To use this feature, you must first install the Web Terminal Operator. The web terminal and code snippet actions that execute in the web terminal are not present if you do not install the Web Terminal Operator. Alternatively, you can copy a code snippet to the clipboard regardless of whether you have the Web Terminal Operator installed or not. 9.4.8.1. Syntax for inline code snippets Note If the execute syntax is used, the Copy to clipboard action is present whether you have the Web Terminal Operator installed or not. 9.4.8.2. Syntax for multi-line code snippets 9.5. Quick start content guidelines 9.5.1. Card copy You can customize the title and description on a quick start card, but you cannot customize the status. Keep your description to one to two sentences. 
Start with a verb and communicate the goal of the user. Correct example: 9.5.2. Introduction After clicking a quick start card, a side panel slides in that introduces the quick start and lists the tasks within it. Make your introduction content clear, concise, informative, and friendly. State the outcome of the quick start. A user should understand the purpose of the quick start before they begin. Give action to the user, not the quick start. Correct example : Incorrect example : The introduction should be a maximum of four to five sentences, depending on the complexity of the feature. A long introduction can overwhelm the user. List the quick start tasks after the introduction content, and start each task with a verb. Do not specify the number of tasks because the copy would need to be updated every time a task is added or removed. Correct example : Incorrect example : 9.5.3. Task steps After the user clicks Start , a series of steps appears that they must perform to complete the quick start. Follow these general guidelines when writing task steps: Use "Click" for buttons and labels. Use "Select" for checkboxes, radio buttons, and drop-down menus. Use "Click" instead of "Click on" Correct example : Incorrect example : Tell users how to navigate between Administrator and Developer perspectives. Even if you think a user might already be in the appropriate perspective, give them instructions on how to get there so that they are definitely where they need to be. Examples: Use the "Location, action" structure. Tell a user where to go before telling them what to do. Correct example : Incorrect example : Keep your product terminology capitalization consistent. If you must specify a menu type or list as a dropdown, write "dropdown" as one word without a hyphen. Clearly distinguish between a user action and additional information on product functionality. User action : Additional information : Avoid directional language, like "In the top-right corner, click the icon". Directional language becomes outdated every time UI layouts change. Also, a direction for desktop users might not be accurate for users with a different screen size. Instead, identify something using its name. Correct example : Incorrect example : Do not identify items by color alone, like "Click the gray circle". Color identifiers are not useful for sight-limited users, especially colorblind users. Instead, identify an item using its name or copy, like button copy. Correct example : Incorrect example : Use the second-person point of view, you, consistently: Correct example : Incorrect example : 9.5.4. Check your work module After a user completes a step, a Check your work module appears. This module prompts the user to answer a yes or no question about the step results, which gives them the opportunity to review their work. For this module, you only need to write a single yes or no question. If the user answers Yes , a check mark will appear. If the user answers No , an error message appears with a link to relevant documentation, if necessary. The user then has the opportunity to go back and try again. 9.5.5. Formatting UI elements Format UI elements using these guidelines: Copy for buttons, dropdowns, tabs, fields, and other UI controls: Write the copy as it appears in the UI and bold it. All other UI elements-including page, window, and panel names: Write the copy as it appears in the UI and bold it. Code or user-entered text: Use monospaced font. 
Hints: If a hint to a navigation or masthead element is included, style the text as you would a link. CLI commands: Use monospaced font. In running text, use a bold, monospaced font for a command. If a parameter or option is a variable value, use an italic monospaced font. Use a bold, monospaced font for the parameter and a monospaced font for the option. 9.6. Additional resources For voice and tone requirements, refer to PatternFly's brand voice and tone guidelines . For other UX content guidance, refer to all areas of PatternFly's UX writing style guide .
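Putting the elements described in this chapter together, a minimal ConsoleQuickStart sketch might look like the following. The displayName , durationMinutes , description , introduction , conclusion , and nextQuickStart fields match the excerpts above; the tasks , review , and summary field names and all sample values are illustrative assumptions that you should verify with oc explain consolequickstarts :

apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: sample-quick-start
spec:
  displayName: Get started with a sample application
  durationMinutes: 5
  description: Deploy a sample application.
  introduction: In this quick start, you will deploy a sample application.
  tasks:
  - title: Create the application
    description: In the main navigation, click [Add]{{highlight qs-nav-add}} and follow the on-screen instructions.
    review:
      instructions: Do you see the new application in the Topology view?
      failedTaskHelp: Repeat the task instructions and check your work again.
    summary:
      success: You created the application.
      failed: Try the steps again.
  conclusion: Your sample application is deployed and ready.
  nextQuickStart:
  - add-healthchecks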
|
[
"oc get -o yaml consolequickstart spring-with-s2i > my-quick-start.yaml",
"oc create -f my-quick-start.yaml",
"oc explain consolequickstarts",
"summary: failed: Try the steps again. success: Your Spring application is running. title: Run the Spring application conclusion: >- Your Spring application is deployed and ready. 1",
"apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 1",
"apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring 1 durationMinutes: 10",
"apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 1",
"spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 icon: >- 1 data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGlkPSJMYXllcl8xIiBkYXRhLW5hbWU9IkxheWVyIDEiIHZpZXdCb3g9IjAgMCAxMDI0IDEwMjQiPjxkZWZzPjxzdHlsZT4uY2xzLTF7ZmlsbDojMTUzZDNjO30uY2xzLTJ7ZmlsbDojZDhkYTlkO30uY2xzLTN7ZmlsbDojNThjMGE4O30uY2xzLTR7ZmlsbDojZmZmO30uY2xzLTV7ZmlsbDojM2Q5MTkxO308L3N0eWxlPjwvZGVmcz48dGl0bGU+c25vd2Ryb3BfaWNvbl9yZ2JfZGVmYXVsdDwvdGl0bGU+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMTAxMi42OSw1OTNjLTExLjEyLTM4LjA3LTMxLTczLTU5LjIxLTEwMy44LTkuNS0xMS4zLTIzLjIxLTI4LjI5LTM5LjA2LTQ3Ljk0QzgzMy41MywzNDEsNzQ1LjM3LDIzNC4xOCw2NzQsMTY4Ljk0Yy01LTUuMjYtMTAuMjYtMTAuMzEtMTUuNjUtMTUuMDdhMjQ2LjQ5LDI0Ni40OSwwLDAsMC0zNi41NS0yNi44LDE4Mi41LDE4Mi41LDAsMCwwLTIwLjMtMTEuNzcsMjAxLjUzLDIwMS41MywwLDAsMC00My4xOS0xNUExNTUuMjQsMTU1LjI0LDAsMCwwLDUyOCw5NS4yYy02Ljc2LS42OC0xMS43NC0uODEtMTQuMzktLjgxaDBsLTEuNjIsMC0xLjYyLDBhMTc3LjMsMTc3LjMsMCwwLDAtMzEuNzcsMy4zNSwyMDguMjMsMjA4LjIzLDAsMCwwLTU2LjEyLDE3LjU2LDE4MSwxODEsMCwwLDAtMjAuMjcsMTEuNzUsMjQ3LjQzLDI0Ny40MywwLDAsMC0zNi41NywyNi44MUMzNjAuMjUsMTU4LjYyLDM1NSwxNjMuNjgsMzUwLDE2OWMtNzEuMzUsNjUuMjUtMTU5LjUsMTcyLTI0MC4zOSwyNzIuMjhDOTMuNzMsNDYwLjg4LDgwLDQ3Ny44Nyw3MC41Miw0ODkuMTcsNDIuMzUsNTIwLDIyLjQzLDU1NC45LDExLjMxLDU5MywuNzIsNjI5LjIyLTEuNzMsNjY3LjY5LDQsNzA3LjMxLDE1LDc4Mi40OSw1NS43OCw4NTkuMTIsMTE4LjkzLDkyMy4wOWEyMiwyMiwwLDAsMCwxNS41OSw2LjUyaDEuODNsMS44Ny0uMzJjODEuMDYtMTMuOTEsMTEwLTc5LjU3LDE0My40OC0xNTUuNiwzLjkxLTguODgsNy45NS0xOC4wNSwxMi4yLTI3LjQzcTUuNDIsOC41NCwxMS4zOSwxNi4yM2MzMS44NSw0MC45MSw3NS4xMiw2NC42NywxMzIuMzIsNzIuNjNsMTguOCwyLjYyLDQuOTUtMTguMzNjMTMuMjYtNDkuMDcsMzUuMy05MC44NSw1MC42NC0xMTYuMTksMTUuMzQsMjUuMzQsMzcuMzgsNjcuMTIsNTAuNjQsMTE2LjE5bDUsMTguMzMsMTguOC0yLjYyYzU3LjItOCwxMDAuNDctMzEuNzIsMTMyLjMyLTcyLjYzcTYtNy42OCwxMS4zOS0xNi4yM2M0LjI1LDkuMzgsOC4yOSwxOC41NSwxMi4yLDI3LjQzLDMzLjQ5LDc2LDYyLjQyLDE0MS42OSwxNDMuNDgsMTU1LjZsMS44MS4zMWgxLjg5YTIyLDIyLDAsMCwwLDE1LjU5LTYuNTJjNjMuMTUtNjQsMTAzLjk1LTE0MC42LDExNC44OS0yMTUuNzhDMTAyNS43Myw2NjcuNjksMTAyMy4yOCw2MjkuMjIsMTAxMi42OSw1OTNaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNMzY0LjE1LDE4NS4yM2MxNy44OS0xNi40LDM0LjctMzAuMTUsNDkuNzctNDAuMTFhMjEyLDIxMiwwLDAsMSw2NS45My0yNS43M0ExOTgsMTk4LDAsMCwxLDUxMiwxMTYuMjdhMTk2LjExLDE5Ni4xMSwwLDAsMSwzMiwzLjFjNC41LjkxLDkuMzYsMi4wNiwxNC41MywzLjUyLDYwLjQxLDIwLjQ4LDg0LjkyLDkxLjA1LTQ3LjQ0LDI0OC4wNi0yOC43NSwzNC4xMi0xNDAuNywxOTQuODQtMTg0LjY2LDI2OC40MmE2MzAuODYsNjMwLjg2LDAsMCwwLTMzLjIyLDU4LjMyQzI3Niw2NTUuMzQsMjY1LjQsNTk4LDI2NS40LDUyMC4yOSwyNjUuNCwzNDAuNjEsMzExLjY5LDI0MC43NCwzNjQuMTUsMTg1LjIzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTUyNy41NCwzODQuODNjODQuMDYtOTkuNywxMTYuMDYtMTc3LjI4LDk1LjIyLTIzMC43NCwxMS42Miw4LjY5LDI0LDE5LjIsMzcuMDYsMzEuMTMsNTIuNDgsNTUuNSw5OC43OCwxNTUuMzgsOTguNzgsMzM1LjA3LDAsNzcuNzEtMTAuNiwxMzUuMDUtMjcuNzcsMTc3LjRhNjI4LjczLDYyOC43MywwLDAsMC0zMy4yMy01OC4zMmMtMzktNjUuMjYtMTMxLjQ1LTE5OS0xNzEuOTMtMjUyLjI3QzUyNi4zMywzODYuMjksNTI3LDM4NS41Miw1MjcuNTQsMzg0LjgzWiIvPjxwYXRoIGNsYXNzPSJjbHMtNCIgZD0iTTEzNC41OCw5MDguMDdoLS4wNmEuMzkuMzksMCwwLDEtLjI3LS4xMWMtMTE5LjUyLTEyMS4wNy0xNTUtMjg3LjQtNDcuNTQtNDA0LjU4LDM0LjYzLTQxLjE0LDEyMC0xNTEuNiwyMDIuNzUtMjQyLjE5LTMuMTMsNy02LjEyLDE0LjI1LTguOTIsMjEuNjktMjQuMzQsNjQuNDUtMzYuNjcsMTQ0LjMyLTM2LjY3LDIzNy40MSwwLDU2LjUzLDUuNTgsMTA2LDE2LjU5LDE0Ny4xNEEzMDcuNDksMzA3LjQ5LDAsMCwwLDI4MC45MSw3MjNDMjM3LDgxNi44OCwyMTYuOTMsODkzLjkzLDEzNC41OCw5MDguMDdaIi8+PHBhdGggY2xhc3M9ImNscy01IiBkPSJNNTgzLjQzLDgxMy43OUM1NjAuMTgsNzI3LjcyLDUxMiw2NjQuMTUsNTEyLDY2NC4xNXMtNDguMTcsNjMuNTctNzEuNDMsMTQ5LjY0Yy00OC40NS02Ljc0LTEwMC45MS0yNy41Mi0xMzUu
NjYtOTEuMThhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Ny03MS41NGwuMjEtLjMyLjE5LS4zM2MzOC02My42MywxMjYuNC0xOTEuMzcsMTY3LjEyLTI0NS42Niw0MC43MSw1NC4yOCwxMjkuMSwxODIsMTY3LjEyLDI0NS42NmwuMTkuMzMuMjEuMzJhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Nyw3MS41NEM2ODQuMzQsNzg2LjI3LDYzMS44OCw4MDcuMDUsNTgzLjQzLDgxMy43OVoiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik04ODkuNzUsOTA4YS4zOS4zOSwwLDAsMS0uMjcuMTFoLS4wNkM4MDcuMDcsODkzLjkzLDc4Nyw4MTYuODgsNzQzLjA5LDcyM2EzMDcuNDksMzA3LjQ5LDAsMCwwLDIwLjQ1LTU1LjU0YzExLTQxLjExLDE2LjU5LTkwLjYxLDE2LjU5LTE0Ny4xNCwwLTkzLjA4LTEyLjMzLTE3My0zNi42Ni0yMzcuNHEtNC4yMi0xMS4xNi04LjkzLTIxLjdjODIuNzUsOTAuNTksMTY4LjEyLDIwMS4wNSwyMDIuNzUsMjQyLjE5QzEwNDQuNzksNjIwLjU2LDEwMDkuMjcsNzg2Ljg5LDg4OS43NSw5MDhaIi8+PC9zdmc+Cg==",
"introduction: >- 1 **Spring** is a Java framework for building applications based on a distributed microservices architecture. - Spring enables easy packaging and configuration of Spring applications into a self-contained executable application which can be easily deployed as a container to OpenShift. - Spring applications can integrate OpenShift capabilities to provide a natural \"Spring on OpenShift\" developer experience for both existing and net-new Spring applications. For example: - Externalized configuration using Kubernetes ConfigMaps and integration with Spring Cloud Kubernetes - Service discovery using Kubernetes Services - Load balancing with Replication Controllers - Kubernetes health probes and integration with Spring Actuator - Metrics: Prometheus, Grafana, and integration with Spring Cloud Sleuth - Distributed tracing with Istio & Jaeger tracing - Developer tooling through Red Hat OpenShift and Red Hat CodeReady developer tooling to quickly scaffold new Spring projects, gain access to familiar Spring APIs in your favorite IDE, and deploy to Red Hat OpenShift",
"icon: >- data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHJvbGU9ImltZyIgdmlld.",
"accessReviewResources: - group: helm.openshift.io resource: helmchartrepositories verb: create",
"accessReviewResources: - group: operators.coreos.com resource: operatorgroups verb: list - group: packages.operators.coreos.com resource: packagemanifests verb: list",
"nextQuickStart: - add-healthchecks",
"[Perspective switcher]{{highlight qs-perspective-switcher}}",
"[Home]{{highlight qs-nav-home}} [Operators]{{highlight qs-nav-operators}} [Workloads]{{highlight qs-nav-workloads}} [Serverless]{{highlight qs-nav-serverless}} [Networking]{{highlight qs-nav-networking}} [Storage]{{highlight qs-nav-storage}} [Service catalog]{{highlight qs-nav-servicecatalog}} [Compute]{{highlight qs-nav-compute}} [User management]{{highlight qs-nav-usermanagement}} [Administration]{{highlight qs-nav-administration}}",
"[Add]{{highlight qs-nav-add}} [Topology]{{highlight qs-nav-topology}} [Search]{{highlight qs-nav-search}} [Project]{{highlight qs-nav-project}} [Helm]{{highlight qs-nav-helm}}",
"[Builds]{{highlight qs-nav-builds}} [Pipelines]{{highlight qs-nav-pipelines}} [Monitoring]{{highlight qs-nav-monitoring}}",
"[CloudShell]{{highlight qs-masthead-cloudshell}} [Utility Menu]{{highlight qs-masthead-utilitymenu}} [User Menu]{{highlight qs-masthead-usermenu}} [Applications]{{highlight qs-masthead-applications}} [Import]{{highlight qs-masthead-import}} [Help]{{highlight qs-masthead-help}} [Notifications]{{highlight qs-masthead-notifications}}",
"`code block`{{copy}} `code block`{{execute}}",
"``` multi line code block ```{{copy}} ``` multi line code block ```{{execute}}",
"Create a serverless application.",
"In this quick start, you will deploy a sample application to {product-title}.",
"This quick start shows you how to deploy a sample application to {product-title}.",
"Tasks to complete: Create a serverless application; Connect an event source; Force a new revision",
"You will complete these 3 tasks: Creating a serverless application; Connecting an event source; Forcing a new revision",
"Click OK.",
"Click on the OK button.",
"Enter the Developer perspective: In the main navigation, click the dropdown menu and select Developer. Enter the Administrator perspective: In the main navigation, click the dropdown menu and select Admin.",
"In the node.js deployment, hover over the icon.",
"Hover over the icon in the node.js deployment.",
"Change the time range of the dashboard by clicking the dropdown menu and selecting time range.",
"To look at data in a specific time frame, you can change the time range of the dashboard.",
"In the navigation menu, click Settings.",
"In the left-hand menu, click Settings.",
"The success message indicates a connection.",
"The message with a green icon indicates a connection.",
"Set up your environment.",
"Let's set up our environment."
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/web_console/creating-quick-start-tutorials
|
11.3.2. Removing an LVM2 Logical Volume for Swap
|
11.3.2. Removing an LVM2 Logical Volume for Swap The swap logical volume cannot be in use (no system locks or processes on the volume). The easiest way to achieve this is to boot your system in rescue mode. Refer to Chapter 5, Basic System Recovery for instructions on booting into rescue mode. When prompted to mount the file system, select Skip . To remove a swap logical volume (assuming /dev/VolGroup00/LogVol02 is the swap volume you want to remove): Disable swapping for the associated logical volume: Remove the LVM2 logical volume of size 512 MB: Remove the following entry from the /etc/fstab file: Verify that the swap logical volume has been removed:
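For reference, the complete removal sequence for the example volume looks like the following sketch; the device name comes from the example above, and the verification output varies by system:

swapoff -v /dev/VolGroup00/LogVol02     # disable swapping on the logical volume
lvm lvremove /dev/VolGroup00/LogVol02   # remove the logical volume
# remove the /dev/VolGroup00/LogVol02 line from /etc/fstab, then verify:
cat /proc/swaps                         # the volume should no longer be listed
free                                    # the swap total should be reduced accordingly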
|
[
"swapoff -v /dev/VolGroup00/LogVol02",
"lvm lvremove /dev/VolGroup00/LogVol02",
"/dev/VolGroup00/LogVol02 swap swap defaults 0 0",
"cat /proc/swaps # free"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/removing_swap_space-removing_an_lvm2_logical_volume_for_swap
|
Chapter 10. Managing user-provisioned infrastructure manually
|
Chapter 10. Managing user-provisioned infrastructure manually 10.1. Adding compute machines to clusters with user-provisioned infrastructure manually You can add compute machines to a cluster on user-provisioned infrastructure either as part of the installation process or after installation. The post-installation process requires some of the same configuration files and parameters that were used during installation. 10.1.1. Adding compute machines to Amazon Web Services To add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS), see Adding compute machines to AWS by using CloudFormation templates . 10.1.2. Adding compute machines to Microsoft Azure To add more compute machines to your OpenShift Container Platform cluster on Microsoft Azure, see Creating additional worker machines in Azure . 10.1.3. Adding compute machines to Azure Stack Hub To add more compute machines to your OpenShift Container Platform cluster on Azure Stack Hub, see Creating additional worker machines in Azure Stack Hub . 10.1.4. Adding compute machines to Google Cloud Platform To add more compute machines to your OpenShift Container Platform cluster on Google Cloud Platform (GCP), see Creating additional worker machines in GCP . 10.1.5. Adding compute machines to vSphere You can use compute machine sets to automate the creation of additional compute machines for your OpenShift Container Platform cluster on vSphere. To manually add more compute machines to your cluster, see Adding compute machines to vSphere manually . 10.1.6. Adding compute machines to bare metal To add more compute machines to your OpenShift Container Platform cluster on bare metal, see Adding compute machines to bare metal . 10.2. Adding compute machines to AWS by using CloudFormation templates You can add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS) that you created by using the sample CloudFormation templates. 10.2.1. Prerequisites You installed your cluster on AWS by using the provided AWS CloudFormation templates . You have the JSON file and CloudFormation template that you used to create the compute machines during cluster installation. If you do not have these files, you must recreate them by following the instructions in the installation procedure . 10.2.2. Adding more compute machines to your AWS cluster by using CloudFormation templates You can add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS) that you created by using the sample CloudFormation templates. Important The CloudFormation template creates a stack that represents one compute machine. You must create a stack for each compute machine. Note If you do not use the provided CloudFormation template to create your compute nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You installed an OpenShift Container Platform cluster by using CloudFormation templates and have access to the JSON file and CloudFormation template that you used to create the compute machines during cluster installation. You installed the AWS CLI. Procedure Create another compute stack. 
Launch the template: USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-workers . You must provide the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create compute stacks until you have created enough compute machines for your cluster. 10.2.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. 
To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.3. Adding compute machines to vSphere manually You can add more compute machines to your OpenShift Container Platform cluster on VMware vSphere manually. Note You can also use compute machine sets to automate the creation of additional VMware vSphere compute machines for your cluster. 10.3.1. Prerequisites You installed a cluster on vSphere . You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . Important If you do not have access to the Red Hat Enterprise Linux CoreOS (RHCOS) images that were used to create your cluster, you can add more compute machines to your OpenShift Container Platform cluster with newer versions of Red Hat Enterprise Linux CoreOS (RHCOS) images. For instructions, see Adding new nodes to UPI cluster fails after upgrading to OpenShift 4.6+ . 10.3.2. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . 
On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . From the Latency Sensitivity list, select High . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 10.3.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.4. Adding compute machines to bare metal You can add more compute machines to your OpenShift Container Platform cluster on bare metal. 10.4.1. Prerequisites You installed a cluster on bare metal . You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . If a DHCP server is available for your user-provisioned infrastructure, you have added the details for the additional compute machines to your DHCP server configuration. This includes a persistent IP address, DNS server information, and a hostname for each machine. You have updated your DNS configuration to include the record name and IP address of each compute machine that you are adding. You have validated that DNS lookup and reverse DNS lookup resolve correctly. 
Important If you do not have access to the Red Hat Enterprise Linux CoreOS (RHCOS) images that were used to create your cluster, you can add more compute machines to your OpenShift Container Platform cluster with newer versions of Red Hat Enterprise Linux CoreOS (RHCOS) images. For instructions, see Adding new nodes to UPI cluster fails after upgrading to OpenShift 4.6+ . 10.4.2. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create RHCOS machines for it to use. You can either use an ISO image or network PXE booting to create the machines. Note You must use the same ISO image that you used to install a cluster to deploy all new nodes in a cluster. It is recommended to use the same Ignition config file. The nodes automatically upgrade themselves on the first boot before running the workloads. You can add the nodes before or after the upgrade. 10.4.2.1. Creating more RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Procedure Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. 
The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 10.4.2.2. Creating more RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE: 1 Specify locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. 2 Specify the location of the initramfs file that you uploaded to your HTTP server. This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. 
For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 10.4.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests .
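The automatic approval method that the note above requires is left to you to implement. The following is only a minimal sketch, not a supported tool: a shell loop that periodically approves pending CSRs whose requestor is a node identity. Review what it matches in your environment before relying on anything like it:

#!/bin/bash
# Sketch: periodically approve pending CSRs submitted by system:node:* identities.
# This does not fully verify node identity; add your own checks before production use.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' \
    | awk '$2 ~ /^system:node:/ {print $1}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done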
|
[
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"aws cloudformation describe-stacks --stack-name <name>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/machine_management/managing-user-provisioned-infrastructure-manually
|
Building applications
|
Building applications OpenShift Container Platform 4.14 Creating and managing applications on OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/building_applications/index
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_and_monitoring_security_updates/proc_providing-feedback-on-red-hat-documentation_managing-and-monitoring-security-updates
|
Chapter 5. Scaling a user-provisioned cluster with the Bare Metal Operator
|
Chapter 5. Scaling a user-provisioned cluster with the Bare Metal Operator After deploying a user-provisioned infrastructure cluster, you can use the Bare Metal Operator (BMO) and other metal 3 components to scale bare-metal hosts in the cluster. This approach helps you to scale a user-provisioned cluster in a more automated way. 5.1. About scaling a user-provisioned cluster with the Bare Metal Operator You can scale user-provisioned infrastructure clusters by using the Bare Metal Operator (BMO) and other metal 3 components. User-provisioned infrastructure installations do not feature the Machine API Operator. The Machine API Operator typically manages the lifecycle of bare-metal nodes in a cluster. However, it is possible to use the BMO and other metal 3 components to scale nodes in user-provisioned clusters without requiring the Machine API Operator. 5.1.1. Prerequisites for scaling a user-provisioned cluster You installed a user-provisioned infrastructure cluster on bare metal. You have baseboard management controller (BMC) access to the hosts. 5.1.2. Limitations for scaling a user-provisioned cluster You cannot use a provisioning network to scale user-provisioned infrastructure clusters by using the Bare Metal Operator (BMO). Consequentially, you can only use bare-metal host drivers that support virtual media networking booting, for example redfish-virtualmedia and idrac-virtualmedia . You cannot scale MachineSet objects in user-provisioned infrastructure clusters by using the BMO. 5.2. Configuring a provisioning resource to scale user-provisioned clusters Create a Provisioning custom resource (CR) to enable Metal platform components on a user-provisioned infrastructure cluster. Prerequisites You installed a user-provisioned infrastructure cluster on bare metal. Procedure Create a Provisioning CR. Save the following YAML in the provisioning.yaml file: apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: "Disabled" watchAllNamespaces: false Note OpenShift Container Platform 4.15 does not support enabling a provisioning network when you scale a user-provisioned cluster by using the Bare Metal Operator. Create the Provisioning CR by running the following command: USD oc create -f provisioning.yaml Example output provisioning.metal3.io/provisioning-configuration created Verification Verify that the provisioning service is running by running the following command: USD oc get pods -n openshift-machine-api Example output NAME READY STATUS RESTARTS AGE cluster-autoscaler-operator-678c476f4c-jjdn5 2/2 Running 0 5d21h cluster-baremetal-operator-6866f7b976-gmvgh 2/2 Running 0 5d21h control-plane-machine-set-operator-7d8566696c-bh4jz 1/1 Running 0 5d21h ironic-proxy-64bdw 1/1 Running 0 5d21h ironic-proxy-rbggf 1/1 Running 0 5d21h ironic-proxy-vj54c 1/1 Running 0 5d21h machine-api-controllers-544d6849d5-tgj9l 7/7 Running 1 (5d21h ago) 5d21h machine-api-operator-5c4ff4b86d-6fjmq 2/2 Running 0 5d21h metal3-6d98f84cc8-zn2mx 5/5 Running 0 5d21h metal3-image-customization-59d745768d-bhrp7 1/1 Running 0 5d21h 5.3. Provisioning new hosts in a user-provisioned cluster by using the BMO You can use the Bare Metal Operator (BMO) to provision bare-metal hosts in a user-provisioned cluster by creating a BareMetalHost custom resource (CR). Note Provisioning bare-metal hosts to the cluster by using the BMO sets the spec.externallyProvisioned specification in the BareMetalHost custom resource to false by default. 
Do not set the spec.externallyProvisioned specification to true , because this setting results in unexpected behavior. Prerequisites You created a user-provisioned bare-metal cluster. You have baseboard management controller (BMC) access to the hosts. You deployed a provisioning service in the cluster by creating a Provisioning CR. Procedure Create a configuration file for the bare-metal node. Depending if you use either a static configuration or a DHCP server, choose one of the following example bmh.yaml files and configure it to your needs by replacing values in the YAML to match your environment: To deploy with a static configuration, create the following bmh.yaml file: --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 1 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 2 interfaces: 3 - name: <nic1_name> 4 type: ethernet state: up ipv4: address: - ip: <ip_address> 5 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 6 routes: config: - destination: 0.0.0.0/0 -hop-address: <next_hop_ip_address> 7 -hop-interface: <next_hop_nic1_name> 8 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 9 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 10 bmc: address: <protocol>://<bmc_url> 11 credentialsName: openshift-worker-<num>-bmc-secret disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 12 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 1 Replace all instances of <num> with a unique compute node number for the bare-metal nodes in the name , credentialsName , and preprovisioningNetworkDataName fields. 2 Add the NMState YAML syntax to configure the host interfaces. To configure the network interface for a newly created node, specify the name of the secret that has the network configuration. Follow the nmstate syntax to define the network configuration for your node. See "Preparing the bare-metal node" for details on configuring NMState syntax. 3 Optional: If you have configured the network interface with nmstate , and you want to disable an interface, set state: up with the IP addresses set to enabled: false . 4 Replace <nic1_name> with the name of the bare-metal node's first network interface controller (NIC). 5 Replace <ip_address> with the IP address of the bare-metal node's NIC. 6 Replace <dns_ip_address> with the IP address of the bare-metal node's DNS resolver. 7 Replace <next_hop_ip_address> with the IP address of the bare-metal node's external gateway. 8 Replace <next_hop_nic1_name> with the name of the bare-metal node's external gateway. 9 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 10 Replace <nic1_mac_address> with the MAC address of the bare-metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 11 Replace <protocol> with the BMC protocol, such as IPMI, Redfish, or others. Replace <bmc_url> with the URL of the bare-metal node's baseboard management controller. 12 Optional: Replace <root_device_hint> with a device path when specifying a root device hint. 
See "Root device hints" for additional details. When configuring the network interface with a static configuration by using nmstate , set state: up with the IP addresses set to enabled: false : --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret namespace: openshift-machine-api # ... interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false # ... To deploy with a DHCP configuration, create the following bmh.yaml file: --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 3 bmc: address: <protocol>://<bmc_url> 4 credentialsName: openshift-worker-<num>-bmc disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 5 1 Replace <num> with a unique compute node number for the bare-metal nodes in the name and credentialsName fields. 2 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 3 Replace <nic1_mac_address> with the MAC address of the bare-metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 4 Replace <protocol> with the BMC protocol, such as IPMI, Redfish, or others. Replace <bmc_url> with the URL of the bare-metal node's baseboard management controller. 5 Optional: Replace <root_device_hint> with a device path when specifying a root device hint. See "Root device hints" for additional details. Important If the MAC address of an existing bare-metal node matches the MAC address of the bare-metal host that you are attempting to provision, then the installation will fail. If the host enrollment, inspection, cleaning, or other steps fail, the Bare Metal Operator retries the installation continuously. See "Diagnosing a duplicate MAC address when provisioning a new host in the cluster" for additional details. Create the bare-metal node by running the following command: USD oc create -f bmh.yaml Example output secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created Inspect the bare-metal node by running the following command: USD oc -n openshift-machine-api get bmh openshift-worker-<num> where: <num> Specifies the compute node number. Example output NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned true Approve all certificate signing requests (CSRs). 
Get the list of pending CSRs by running the following command: USD oc get csr Example output NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION csr-gfm9f 33s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-o perator:node-bootstrapper <none> Pending Approve the CSR by running the following command: USD oc adm certificate approve <csr_name> Example output certificatesigningrequest.certificates.k8s.io/<csr_name> approved Verification Verify that the node is ready by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION app1 Ready worker 47s v1.24.0+dc5a2fd controller1 Ready master,worker 2d22h v1.24.0+dc5a2fd Additional resources Preparing the bare-metal node Root device hints Diagnosing a duplicate MAC address when provisioning a new host in the cluster 5.4. Optional: Managing existing hosts in a user-provisioned cluster by using the BMO Optionally, you can use the Bare Metal Operator (BMO) to manage existing bare-metal controller hosts in a user-provisioned cluster by creating a BareMetalHost object for the existing host. It is not a requirement to manage existing user-provisioned hosts; however, you can enroll them as externally-provisioned hosts for inventory purposes. Important To manage existing hosts by using the BMO, you must set the spec.externallyProvisioned specification in the BareMetalHost custom resource to true to prevent the BMO from re-provisioning the host. Prerequisites You created a user-provisioned bare-metal cluster. You have baseboard management controller (BMC) access to the hosts. You deployed a provisioning service in the cluster by creating a Provisioning CR. Procedure Create the Secret CR and the BareMetalHost CR. Save the following YAML in the controller.yaml file: --- apiVersion: v1 kind: Secret metadata: name: controller1-bmc namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: controller1 namespace: openshift-machine-api spec: bmc: address: <protocol>://<bmc_url> 1 credentialsName: "controller1-bmc" bootMACAddress: <nic1_mac_address> customDeploy: method: install_coreos externallyProvisioned: true 2 online: true userData: name: controller-user-data-managed namespace: openshift-machine-api 1 You can only use bare-metal host drivers that support virtual media networking booting, for example redfish-virtualmedia and idrac-virtualmedia . 2 You must set the value to true to prevent the BMO from re-provisioning the bare-metal controller host. Create the bare-metal host object by running the following command: USD oc create -f controller.yaml Example output secret/controller1-bmc created baremetalhost.metal3.io/controller1 created Verification Verify that the BMO created the bare-metal host object by running the following command: USD oc get bmh -A Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 13s 5.5. Removing hosts from a user-provisioned cluster by using the BMO You can use the Bare Metal Operator (BMO) to remove bare-metal hosts from a user-provisioned cluster. Prerequisites You created a user-provisioned bare-metal cluster. You have baseboard management controller (BMC) access to the hosts. You deployed a provisioning service in the cluster by creating a Provisioning CR. 
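If you want to confirm the provisioning prerequisite before you start, you can check that the Provisioning resource and the metal3 pod from earlier in this chapter are still present; provisioning-configuration is the resource name used in the earlier example:

# Confirm the Provisioning resource exists.
oc get provisioning provisioning-configuration

# Confirm the metal3 provisioning service is running.
oc get pods -n openshift-machine-api | grep metal3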
Procedure Cordon and drain the node by running the following command: USD oc adm drain app1 --force --ignore-daemonsets=true Example output node/app1 cordoned WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-tvthg, openshift-dns/dns- default-9q6rz, openshift-dns/node-resolver-zvt42, openshift-image-registry/node-ca-mzxth, openshift-ingress-cana ry/ingress-canary-qq5lf, openshift-machine-config-operator/machine-config-daemon-v79dm, openshift-monitoring/nod e-exporter-2vn59, openshift-multus/multus-additional-cni-plugins-wssvj, openshift-multus/multus-fn8tg, openshift -multus/network-metrics-daemon-5qv55, openshift-network-diagnostics/network-check-target-jqxn2, openshift-ovn-ku bernetes/ovnkube-node-rsvqg evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766965-258vp evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766950-kg5mk evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766935-stf4s pod/collect-profiles-27766965-258vp evicted pod/collect-profiles-27766950-kg5mk evicted pod/collect-profiles-27766935-stf4s evicted node/app1 drained Delete the customDeploy specification from the BareMetalHost CR. Edit the BareMetalHost CR for the host by running the following command: USD oc edit bmh -n openshift-machine-api <host_name> Delete the lines spec.customDeploy and spec.customDeploy.method : ... customDeploy: method: install_coreos Verify that the provisioning state of the host changes to deprovisioning by running the following command: USD oc get bmh -A Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 58m openshift-machine-api worker1 deprovisioning true 57m Delete the host by running the following command when the BareMetalHost state changes to available : USD oc delete bmh -n openshift-machine-api <bmh_name> Note You can run this step without having to edit the BareMetalHost CR. It might take some time for the BareMetalHost state to change from deprovisioning to available . Delete the node by running the following command: USD oc delete node <node_name> Verification Verify that you deleted the node by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION controller1 Ready master,worker 2d23h v1.24.0+dc5a2fd
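Because the BareMetalHost state can take some time to change from deprovisioning to available, you might wrap the wait and the deletion in a small script. This is a sketch only; worker1 is a placeholder for the name of your BareMetalHost resource:

#!/bin/bash
# Sketch: wait for the BareMetalHost to reach the "available" state, then delete it.
host=worker1
until [ "$(oc get bmh -n openshift-machine-api "$host" -o jsonpath='{.status.provisioning.state}')" = "available" ]; do
  echo "Waiting for $host to reach the available state..."
  sleep 30
done
oc delete bmh -n openshift-machine-api "$host"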
|
[
"apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: \"Disabled\" watchAllNamespaces: false",
"oc create -f provisioning.yaml",
"provisioning.metal3.io/provisioning-configuration created",
"oc get pods -n openshift-machine-api",
"NAME READY STATUS RESTARTS AGE cluster-autoscaler-operator-678c476f4c-jjdn5 2/2 Running 0 5d21h cluster-baremetal-operator-6866f7b976-gmvgh 2/2 Running 0 5d21h control-plane-machine-set-operator-7d8566696c-bh4jz 1/1 Running 0 5d21h ironic-proxy-64bdw 1/1 Running 0 5d21h ironic-proxy-rbggf 1/1 Running 0 5d21h ironic-proxy-vj54c 1/1 Running 0 5d21h machine-api-controllers-544d6849d5-tgj9l 7/7 Running 1 (5d21h ago) 5d21h machine-api-operator-5c4ff4b86d-6fjmq 2/2 Running 0 5d21h metal3-6d98f84cc8-zn2mx 5/5 Running 0 5d21h metal3-image-customization-59d745768d-bhrp7 1/1 Running 0 5d21h",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 1 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 2 interfaces: 3 - name: <nic1_name> 4 type: ethernet state: up ipv4: address: - ip: <ip_address> 5 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 6 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 7 next-hop-interface: <next_hop_nic1_name> 8 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 9 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 10 bmc: address: <protocol>://<bmc_url> 11 credentialsName: openshift-worker-<num>-bmc-secret disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 12 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret namespace: openshift-machine-api # interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 3 bmc: address: <protocol>://<bmc_url> 4 credentialsName: openshift-worker-<num>-bmc disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 5",
"oc create -f bmh.yaml",
"secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned true",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION csr-gfm9f 33s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-o perator:node-bootstrapper <none> Pending",
"oc adm certificate approve <csr_name>",
"certificatesigningrequest.certificates.k8s.io/<csr_name> approved",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION app1 Ready worker 47s v1.24.0+dc5a2fd controller1 Ready master,worker 2d22h v1.24.0+dc5a2fd",
"--- apiVersion: v1 kind: Secret metadata: name: controller1-bmc namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: controller1 namespace: openshift-machine-api spec: bmc: address: <protocol>://<bmc_url> 1 credentialsName: \"controller1-bmc\" bootMACAddress: <nic1_mac_address> customDeploy: method: install_coreos externallyProvisioned: true 2 online: true userData: name: controller-user-data-managed namespace: openshift-machine-api",
"oc create -f controller.yaml",
"secret/controller1-bmc created baremetalhost.metal3.io/controller1 created",
"oc get bmh -A",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 13s",
"oc adm drain app1 --force --ignore-daemonsets=true",
"node/app1 cordoned WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-tvthg, openshift-dns/dns- default-9q6rz, openshift-dns/node-resolver-zvt42, openshift-image-registry/node-ca-mzxth, openshift-ingress-cana ry/ingress-canary-qq5lf, openshift-machine-config-operator/machine-config-daemon-v79dm, openshift-monitoring/nod e-exporter-2vn59, openshift-multus/multus-additional-cni-plugins-wssvj, openshift-multus/multus-fn8tg, openshift -multus/network-metrics-daemon-5qv55, openshift-network-diagnostics/network-check-target-jqxn2, openshift-ovn-ku bernetes/ovnkube-node-rsvqg evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766965-258vp evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766950-kg5mk evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766935-stf4s pod/collect-profiles-27766965-258vp evicted pod/collect-profiles-27766950-kg5mk evicted pod/collect-profiles-27766935-stf4s evicted node/app1 drained",
"oc edit bmh -n openshift-machine-api <host_name>",
"customDeploy: method: install_coreos",
"oc get bmh -A",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 58m openshift-machine-api worker1 deprovisioning true 57m",
"oc delete bmh -n openshift-machine-api <bmh_name>",
"oc delete node <node_name>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION controller1 Ready master,worker 2d23h v1.24.0+dc5a2fd"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_bare_metal/scaling-a-user-provisioned-cluster-with-the-bare-metal-operator
|
Management of security keys and certificates with the TLS Registry
|
Management of security keys and certificates with the TLS Registry Red Hat build of Quarkus 3.15 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/management_of_security_keys_and_certificates_with_the_tls_registry/index
|
Chapter 2. Preparing software for RPM packaging
|
Chapter 2. Preparing software for RPM packaging This section explains how to prepare software for RPM packaging. To do so, knowing how to code is not necessary. However, you need to understand the basic concepts, such as What source code is and How programs are made . 2.1. What source code is This part explains what source code is and shows example source codes of a program written in three different programming languages. Source code is human-readable instructions to the computer, which describe how to perform a computation. Source code is expressed using a programming language. 2.1.1. Source code examples This document features three versions of the Hello World program written in three different programming languages: Section 2.1.1.1, "Hello World written in bash" Section 2.1.1.2, "Hello World written in Python" Section 2.1.1.3, "Hello World written in C" Each version is packaged differently. These versions of the Hello World program cover the three major use cases of an RPM packager. 2.1.1.1. Hello World written in bash The bello project implements Hello World in bash . The implementation only contains the bello shell script. The purpose of the program is to output Hello World on the command line. The bello file has the following syntax: 2.1.1.2. Hello World written in Python The pello project implements Hello World in Python . The implementation only contains the pello.py program. The purpose of the program is to output Hello World on the command line. The pello.py file has the following syntax: 2.1.1.3. Hello World written in C The cello project implements Hello World in C. The implementation only contains the cello.c and the Makefile files, so the resulting tar.gz archive will have two files apart from the LICENSE file. The purpose of the program is to output Hello World on the command line. The cello.c file has the following syntax: 2.2. How programs are made Methods of conversion from human-readable source code to machine code (instructions that the computer follows to execute the program) include the following: The program is natively compiled. The program is interpreted by raw interpreting. The program is interpreted by byte compiling. 2.2.1. Natively Compiled Code Natively compiled software is software written in a programming language that compiles to machine code with a resulting binary executable file. Such software can be run stand-alone. RPM packages built this way are architecture-specific. If you compile such software on a computer that uses a 64-bit (x86_64) AMD or Intel processor, it does not execute on a 32-bit (x86) AMD or Intel processor. The resulting package has architecture specified in its name. 2.2.2. Interpreted Code Some programming languages, such as bash or Python , do not compile to machine code. Instead, their programs' source code is executed step by step, without prior transformations, by a Language Interpreter or a Language Virtual Machine. Software written entirely in interpreted programming languages is not architecture-specific. Hence, the resulting RPM Package has the noarch string in its name. Interpreted languages are either Raw-interpreted programs or Byte-compiled programs . These two types differ in program build process and in packaging procedure. 2.2.2.1. Raw-interpreted programs Raw-interpreted language programs do not need to be compiled and are directly executed by the interpreter. 2.2.2.2. Byte-compiled programs Byte-compiled languages need to be compiled into byte code, which is then executed by the language virtual machine. 
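One informal way to see this distinction on your own system is the file utility. Assuming you have already built the cello binary and have the bello script from the examples in this chapter, the output (abbreviated here) shows an architecture-specific ELF executable versus a plain text script:

file cello
# cello: ELF 64-bit LSB executable, x86-64, ... (architecture-specific)

file bello
# bello: Bourne-Again shell script, ASCII text executable
# The script is not tied to an architecture, so its package would be noarch.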
Note Some languages offer a choice: they can be raw-interpreted or byte-compiled. 2.3. Building software from source This part describes how to build software from source code. For software written in compiled languages, the source code goes through a build process, producing machine code. This process, commonly called compiling or translating, varies for different languages. The resulting built software can be run, which makes the computer perform the task specified by the programmer. For software written in raw interpreted languages, the source code is not built, but executed directly. For software written in byte-compiled interpreted languages, the source code is compiled into byte code, which is then executed by the language virtual machine. 2.3.1. Natively Compiled Code This section shows how to build the cello.c program written in the C language into an executable. cello.c 2.3.1.1. Manual building If you want to build the cello.c program manually, use this procedure: Procedure Invoke the C compiler from the GNU Compiler Collection to compile the source code into binary: Execute the resulting output binary cello : 2.3.1.2. Automated building Large-scale software commonly uses automated building that is done by creating the Makefile file and then running the GNU make utility. If you want to use the automated building to build the cello.c program, use this procedure: Procedure To set up automated building, create the Makefile file with the following content in the same directory as cello.c . Makefile Note that the lines under cello: and clean: must begin with a tab space. To build the software, run the make command: Since there is already a build available, run the make clean command, and after run the make command again: Note Trying to build the program after another build has no effect. Execute the program: You have now compiled a program both manually and using a build tool. 2.3.2. Interpreting code This section shows how to byte-compile a program written in Python and raw-interpret a program written in bash . Note In the two examples below, the #! line at the top of the file is known as a shebang , and is not part of the programming language source code. The shebang enables using a text file as an executable: the system program loader parses the line containing the shebang to get a path to the binary executable, which is then used as the programming language interpreter. The functionality requires the text file to be marked as executable. 2.3.2.1. Byte-compiling code This section shows how to compile the pello.py program written in Python into byte code, which is then executed by the Python language virtual machine. Python source code can also be raw-interpreted, but the byte-compiled version is faster. Hence, RPM Packagers prefer to package the byte-compiled version for distribution to end users. pello.py Procedure for byte-compiling programs varies depending on the following factors: Programming language Language's virtual machine Tools and processes used with that language Note Python is often byte-compiled, but not in the way described here. The following procedure aims not to conform to the community standards, but to be simple. For real-world Python guidelines, see Software Packaging and Distribution . Use this procedure to compile pello.py into byte code: Procedure Byte-compile the pello.py file: Execute the byte code in pello.pyc : 2.3.2.2. Raw-interpreting code This section shows how to raw-interpret the bello program written in the bash shell built-in language. 
bello Programs written in shell scripting languages, like bash , are raw-interpreted. Procedure Make the file with source code executable and run it: 2.4. Patching software This section explains how to patch the software. In RPM packaging, instead of modifying the original source code, we keep it, and use patches on it. A patch is a source code that updates other source code. It is formatted as a diff , because it represents what is different between two versions of the text. A diff is created using the diff utility, which is then applied to the source code using the patch utility. Note Software developers often use Version Control Systems such as git to manage their code base. Such tools provide their own methods of creating diffs or patching software. The following example shows how to create a patch from the original source code using diff , and how to apply the patch using patch . Patching is used in a later section when creating an RPM; see Section 3.2, "Working with SPEC files" . This procedure shows how to create a patch from the original source code for cello.c . Procedure Preserve the original source code: The -p option is used to preserve mode, ownership, and timestamps. Modify cello.c as needed: Generate a patch using the diff utility: Lines starting with a - are removed from the original source code and replaced with the lines that start with + . Using the Naur options with the diff command is recommended because it fits the majority of usual use cases. However, in this particular case, only the -u option is necessary. Particular options ensure the following: -N (or --new-file ) - Handles absent files as if they were empty files. -a (or --text ) - Treats all files as text. As a result, the files that diff classifies as binaries are not ignored. -u (or -U NUM or --unified[=NUM] ) - Returns output in the form of output NUM (default 3) lines of unified context. This is an easily readable format that allows fuzzy matching when applying the patch to a changed source tree. -r (or --recursive ) - Recursively compares any subdirectories that are found. For more information on common arguments for the diff utility, see the diff manual page. Save the patch to a file: Restore the original cello.c : The original cello.c must be retained, because when an RPM is built, the original file is used, not the modified one. For more information, see Section 3.2, "Working with SPEC files" . The following procedure shows how to patch cello.c using cello-output-first-patch.patch , built the patched program, and run it. Redirect the patch file to the patch command: Check that the contents of cello.c now reflect the patch: Build and run the patched cello.c : 2.5. Installing arbitrary artifacts Unix-like systems use the Filesystem Hierarchy Standard (FHS) to specify a directory suitable for a particular file. Files installed from the RPM packages are placed according to FHS. For example, an executable file should go into a directory that is in the system USDPATH variable. In the context of this documentation, an Arbitrary Artifact is anything installed from an RPM to the system. For RPM and for the system it can be a script, a binary compiled from the package's source code, a pre-compiled binary, or any other file. This section describes two common ways of placing Arbitrary Artifacts in the system: Section 2.5.1, "Using the install command" Section 2.5.2, "Using the make install command" 2.5.1. 
Using the install command Packagers often use the install command in cases when build automation tooling such as GNU make is not optimal; for example if the packaged program does not need extra overhead. The install command is provided to the system by coreutils , which places the artifact to the specified directory in the file system with a specified set of permissions. The following procedure uses the bello file that was previously created as the arbitrary artifact as a subject to this installation method. Procedure Run the install command to place the bello file into the /usr/bin directory with permissions common for executable scripts: As a result, bello is now located in the directory that is listed in the USDPATH variable. Execute bello from any directory without specifying its full path: 2.5.2. Using the make install command Using the make install command is an automated way to install built software to the system. In this case, you need to specify how to install the arbitrary artifacts to the system in the Makefile that is usually written by the developer. This procedure shows how to install a build artifact into a chosen location on the system. Procedure Add the install section to the Makefile : Makefile Note that the lines under cello: , clean: , and install: must begin with a tab space. Note The USD(DESTDIR) variable is a GNU make built-in and is commonly used to specify installation to a directory different than the root directory. Now you can use Makefile not only to build software, but also to install it to the target system. Build and install the cello.c program: As a result, cello is now located in the directory that is listed in the USDPATH variable. Execute cello from any directory without specifying its full path: 2.6. Preparing source code for packaging Developers often distribute software as compressed archives of source code, which are then used to create packages. RPM packagers work with a ready source code archive. Software should be distributed with a software license. This procedure uses the GPLv3 license text as an example content of the LICENSE file. Procedure Create a LICENSE file, and make sure that it includes the following content: Additional resources The code created in this section can be found here . 2.7. Putting source code into tarball This section describes how to put each of the three Hello World programs introduced in Section 2.1.1, "Source code examples" into a gzip -compressed tarball, which is a common way to release the software to be later packaged for distribution. 2.7.1. Putting the bello project into tarball The bello project implements Hello World in bash . The implementation only contains the bello shell script, so the resulting tar.gz archive will have only one file apart from the LICENSE file. This procedure shows how to prepare the bello project for distribution. Prerequisites Considering that this is version 0.1 of the program. Procedure Put all required files into a single directory: Create the archive for distribution and move it to the ~/rpmbuild/SOURCES/ directory, which is the default directory where the rpmbuild command stores the files for building packages: For more information about the example source code written in bash, see Section 2.1.1.1, "Hello World written in bash" . 2.7.2. Putting the pello project into tarball The pello project implements Hello World in Python . The implementation only contains the pello.py program, so the resulting tar.gz archive will have only one file apart from the LICENSE file. 
This procedure shows how to prepare the pello project for distribution. Prerequisites Considering that this is version 0.1.1 of the program. Procedure Put all required files into a single directory: Create the archive for distribution and move it to the ~/rpmbuild/SOURCES/ directory, which is the default directory where the rpmbuild command stores the files for building packages: For more information about the example source code written in Python, see Section 2.1.1.2, "Hello World written in Python" . 2.7.3. Putting the cello project into tarball The cello project implements Hello World in C. The implementation only contains the cello.c and the Makefile files, so the resulting tar.gz archive will have two files apart from the LICENSE file. Note The patch file is not distributed in the archive with the program. The RPM Packager applies the patch when the RPM is built. The patch will be placed into the ~/rpmbuild/SOURCES/ directory alongside the .tar.gz archive. This procedure shows how to prepare the cello project for distribution. Prerequisites Considering that this is version 1.0 of the program. Procedure Put all required files into a single directory: Create the archive for distribution and move it to the ~/rpmbuild/SOURCES/ directory, which is the default directory where the rpmbuild command stores the files for building packages: Add the patch: For more information about the example source code written in C, see Section 2.1.1.3, "Hello World written in C" .
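As a quick sanity check before moving on to the SPEC file, you can list the contents of an archive you just created; the expected output below assumes the cello layout used in this section:

tar -tzf ~/rpmbuild/SOURCES/cello-1.0.tar.gz
# cello-1.0/
# cello-1.0/Makefile
# cello-1.0/cello.c
# cello-1.0/LICENSE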
|
[
"#!/bin/bash printf \"Hello World\\n\"",
"#!/usr/bin/python3 print(\"Hello World\")",
"#include <stdio.h> int main(void) { printf(\"Hello World\\n\"); return 0; }",
"#include <stdio.h> int main(void) { printf(\"Hello World\\n\"); return 0; }",
"gcc -g -o cello cello.c",
"./cello Hello World",
"cello: gcc -g -o cello cello.c clean: rm cello",
"make make: 'cello' is up to date.",
"make clean rm cello make gcc -g -o cello cello.c",
"make make: 'cello' is up to date.",
"./cello Hello World",
"#!/usr/bin/python3 print(\"Hello World\")",
"python -m compileall pello.py file pello.pyc pello.pyc: python 2.7 byte-compiled",
"python pello.pyc Hello World",
"#!/bin/bash printf \"Hello World\\n\"",
"chmod +x bello ./bello Hello World",
"cp -p cello.c cello.c.orig",
"#include <stdio.h> int main(void) { printf(\"Hello World from my very first patch!\\n\"); return 0; }",
"diff -Naur cello.c.orig cello.c --- cello.c.orig 2016-05-26 17:21:30.478523360 -0500 + cello.c 2016-05-27 14:53:20.668588245 -0500 @@ -1,6 +1,6 @@ #include<stdio.h> int main(void){ - printf(\"Hello World!\\n\"); + printf(\"Hello World from my very first patch!\\n\"); return 0; } \\ No newline at end of file",
"diff -Naur cello.c.orig cello.c > cello-output-first-patch.patch",
"cp cello.c.orig cello.c",
"patch < cello-output-first-patch.patch patching file cello.c",
"cat cello.c #include<stdio.h> int main(void){ printf(\"Hello World from my very first patch!\\n\"); return 1; }",
"make clean rm cello make gcc -g -o cello cello.c ./cello Hello World from my very first patch!",
"sudo install -m 0755 bello /usr/bin/bello",
"cd ~ bello Hello World",
"cello: gcc -g -o cello cello.c clean: rm cello install: mkdir -p USD(DESTDIR)/usr/bin install -m 0755 cello USD(DESTDIR)/usr/bin/cello",
"make gcc -g -o cello cello.c sudo make install install -m 0755 cello /usr/bin/cello",
"cd ~ cello Hello World",
"cat /tmp/LICENSE This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/ .",
"mkdir /tmp/bello-0.1 mv ~/bello /tmp/bello-0.1/ cp /tmp/LICENSE /tmp/bello-0.1/",
"cd /tmp/ tar -cvzf bello-0.1.tar.gz bello-0.1 bello-0.1/ bello-0.1/LICENSE bello-0.1/bello mv /tmp/bello-0.1.tar.gz ~/rpmbuild/SOURCES/",
"mkdir /tmp/pello-0.1.2 mv ~/pello.py /tmp/pello-0.1.2/ cp /tmp/LICENSE /tmp/pello-0.1.2/",
"cd /tmp/ tar -cvzf pello-0.1.2.tar.gz pello-0.1.2 pello-0.1.2/ pello-0.1.2/LICENSE pello-0.1.2/pello.py mv /tmp/pello-0.1.2.tar.gz ~/rpmbuild/SOURCES/",
"mkdir /tmp/cello-1.0 mv ~/cello.c /tmp/cello-1.0/ mv ~/Makefile /tmp/cello-1.0/ cp /tmp/LICENSE /tmp/cello-1.0/",
"cd /tmp/ tar -cvzf cello-1.0.tar.gz cello-1.0 cello-1.0/ cello-1.0/Makefile cello-1.0/cello.c cello-1.0/LICENSE mv /tmp/cello-1.0.tar.gz ~/rpmbuild/SOURCES/",
"mv ~/cello-output-first-patch.patch ~/rpmbuild/SOURCES/"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/rpm_packaging_guide/preparing-software-for-rpm-packaging
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/package_manifest/making-open-source-more-inclusive
|
Chapter 4. Using AMQ Management Console
|
Chapter 4. Using AMQ Management Console AMQ Management Console is a web console included in the AMQ Broker installation that enables you to use a web browser to manage AMQ Broker. AMQ Management Console is based on hawtio . 4.1. Overview AMQ Broker is a full-featured, message-oriented middleware broker. It offers specialized queueing behaviors, message persistence, and manageability. It supports multiple protocols and client languages, freeing you to use many of your application assets. AMQ Broker's key features allow you to: monitor your AMQ brokers and clients view the topology view network health at a glance manage AMQ brokers using: AMQ Management Console Command-line Interface (CLI) Management API The supported web browsers for AMQ Management Console are Firefox and Chrome. For more information on supported browser versions, see AMQ 7 Supported Configurations . 4.2. Configuring local and remote access to AMQ Management Console The procedure in this section shows how to configure local and remote access to AMQ Management Console. Remote access to the console can take one of two forms: Within a console session on a local broker, you use the Connect tab to connect to another, remote broker From a remote host, you connect to the console for the local broker, using an externally-reachable IP address for the local broker Prerequisites You must upgrade to at least AMQ Broker 7.1.0. As part of this upgrade, an access-management configuration file named jolokia-access.xml is added to the broker instance. For more information about upgrading, see Upgrading a Broker instance from 7.0.x to 7.1.0 . Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. Within the web element, observe that the web port is bound only to localhost by default. <web bind="http://localhost:8161" path="web"> <app url="redhat-branding" war="redhat-branding.war"/> <app url="artemis-plugin" war="artemis-plugin.war"/> <app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/> <app url="console" war="console.war"/> </web> To enable connection to the console for the local broker from a remote host, change the web port binding to a network-reachable interface. For example: <web bind="http://0.0.0.0:8161" path="web"> In the preceding example, by specifying 0.0.0.0 , you bind the web port to all interfaces on the local broker. Save the bootstrap.xml file. Open the <broker_instance_dir> /etc/jolokia-access.xml file. Within the <cors> (that is, Cross-Origin Resource Sharing ) element, add an allow-origin entry for each HTTP origin request header that you want to allow to access the console. For example: <cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors> In the preceding configuration, you specify that the following connections are allowed: Connection from the local host (that is, the host machine for your local broker instance) to the console. The first asterisk ( * ) wildcard character allows either the http or https scheme to be specified in the connection request, based on whether you have configured the console for secure connections. The second asterisk wildcard character allows any port on the host machine to be used for the connection. Connection from a remote host to the console for the local broker, using the externally-reachable IP address of the local broker. 
In this case, the externally-reachable IP address of the local broker is 192.168.0.49 . Connection from within a console session opened on another, remote broker to the local broker. In this case, the IP address of the remote broker is 192.168.0.51 . Save the jolokia-access.xml file. Open the <broker_instance_dir> /etc/artemis.profile file. To enable the Connect tab in the console, set the value of the Dhawtio.disableProxy argument to false . -Dhawtio.disableProxy=false Important It is recommended that you enable remote connections from the console (that is, set the value of the Dhawtio.disableProxy argument to false ) only if the console is exposed to a secure network. Add a new argument, Dhawtio.proxyWhitelist , to the JAVA_ARGS list of Java system arguments. As a comma-separated list, specify IP addresses for any remote brokers that you want to connect to from the local broker (that is, by using the Connect tab within a console session running on the local broker). For example: -Dhawtio.proxyWhitelist=192.168.0.51 Based on the preceding configuration, you can use the Connect tab within a console session on the local broker to connect to another, remote broker with an IP address of 192.168.0.51 . Save the artemis.profile file. Additional resources To learn how to access the console, see Section 4.3, "Accessing AMQ Management Console" . For more information about: Cross-Origin Resource Sharing, see W3C Recommendations . Jolokia security, see Jolokia Protocols . Securing connections to the console, see Section 4.4.3, "Securing network access to AMQ Management Console" . 4.3. Accessing AMQ Management Console The procedure in this section shows how to: Open AMQ Management Console from the local broker Connect to other brokers from within a console session on the local broker Open a console instance for the local broker from a remote host using the externally-reachable IP address of the local broker Prerequisites You must have already configured local and remote access to the console. For more information, see Section 4.2, "Configuring local and remote access to AMQ Management Console" . Procedure In your web browser, navigate to the console address for the local broker. The console address is http:// <host:port> /console/login . If you are using the default address, navigate to http://localhost:8161/console/login . Otherwise, use the values of host and port that are defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. Figure 4.1. Console login page Log in to AMQ Management Console using the default user name and password that you created when you created the broker. To connect to another, remote broker from the console session of the local broker: In the left menu, click the Connect tab. In the main pane, on the Remote tab, click the Add connection button. In the Add Connection dialog box, specify the following details: Name Name for the remote connection, for example, my_other_broker . Scheme Protocol to use for the remote connection. Select http for a non-secured connection, or https for a secured connection. Host IP address of a remote broker. You must have already configured console access for this remote broker. Port Port on the local broker to use for the remote connection. Specify the port value that is defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. The default value is 8161 . Path Path to use for console access. Specify console/jolokia .
To test the connection, click the Test Connection button. If the connection test is successful, click the Add button. If the connection test fails, review and modify the connection details as needed. Test the connection again. On the Remote page, for a connection that you have added, click the Connect button. A new web browser tab opens for the console instance on the remote broker. In the Log In dialog box, enter the user name and password for the remote broker. Click Log In . The console instance for the remote broker opens. To connect to the console for the local broker from a remote host, specify the Jolokia endpoint for the local broker in a web browser. This endpoint includes the externally-reachable IP address that you specified for the local broker when configuring remote console access. For example: 4.4. Configuring AMQ Management Console Configure user access and request access to resources on the broker. 4.4.1. Securing AMQ Management Console using Red Hat Single Sign-On Prerequisites Red Hat Single Sign-On 7.4 Procedure Configure Red Hat Single Sign-On: Navigate to the realm in Red Hat Single Sign-On that you want to use for securing AMQ Management Console. Each realm in Red Hat Single Sign-On includes a client named Broker . This client is not related to AMQ. Create a new client in Red Hat Single Sign-On, for example artemis-console . Navigate to the client settings page and set: Valid Redirect URIs to the AMQ Management Console URL followed by * , for example: Web Origins to the same value as Valid Redirect URIs . Red Hat Single Sign-On allows you to enter + , indicating that allowed CORS origins includes the value for Valid Redirect URIs . Create a role for the client, for example guest . Make sure all users who require access to AMQ Management Console are assigned the above role, for example, using Red Hat Single Sign-On groups. Configure the AMQ Broker instance: Add the following to your <broker-instance-dir> /instances/broker0/etc/login.config file to configure AMQ Management Console to use Red Hat Single Sign-On: Adding this configuration sets up a JAAS principal and a requirement for a bearer token from Red Hat Single Sign-On. The connection to Red Hat Single Sign-On is defined in the keycloak-bearer-token.json file, as described in the next step. Create a file <broker-instance-dir> /etc/keycloak-bearer-token.json with the following contents to specify the connection to Red Hat Single Sign-On used for the bearer token exchange: { "realm": " <realm-name> ", "resource": " <client-name> ", "auth-server-url": " <RHSSO-URL> /auth", "principal-attribute": "preferred_username", "use-resource-role-mappings": true, "ssl-required": "external", "confidential-port": 0 } <realm-name> the name of the realm in Red Hat Single Sign-On <client-name> the name of the client in Red Hat Single Sign-On <RHSSO-URL> the URL of Red Hat Single Sign-On Create a file <broker-instance-dir> /etc/keycloak-js-token.json with the following contents to specify the Red Hat Single Sign-On authentication endpoint: { "realm": "<realm-name>", "clientId": "<client-name>", "url": " <RHSSO-URL> /auth" } Configure the security settings by editing the <broker-instance-dir> /etc/bootstrap.xml file.
For example, to allow users with the amq role to consume messages and allow users with the guest role to send messages, add the following: <security-setting match="Info"> <permission roles="amq" type="createDurableQueue"/> <permission roles="amq" type="deleteDurableQueue"/> <permission roles="amq" type="createNonDurableQueue"/> <permission roles="amq" type="deleteNonDurableQueue"/> <permission roles="guest" type="send"/> <permission roles="amq" type="consume"/> </security-setting> Run the AMQ Broker instance and validate the AMQ Management Console configuration. 4.4.2. Setting up user access to AMQ Management Console You can access AMQ Management Console using the broker login credentials. The following table provides information about different methods to add additional broker users to access AMQ Management Console: Authentication Method Description Guest authentication Enables anonymous access. In this configuration, any user who connects without credentials or with the wrong credentials will be authenticated automatically and assigned a specific user and role. For more information, see Configuring guest access in Configuring AMQ Broker . Basic user and password authentication For each user, you must define a username and password and assign a security role. Users can only log into AMQ Management Console using these credentials. For more information, see Configuring basic user and password authentication in Configuring AMQ Broker . LDAP authentication Users are authenticated and authorized by checking the credentials against user data stored in a central X.500 directory server. For more information, see Configuring LDAP to authenticate clients in Configuring AMQ Broker . 4.4.3. Securing network access to AMQ Management Console To secure AMQ Management Console when the console is being accessed over a WAN or the internet, use SSL to specify that network access uses https instead of http . Prerequisites The following should be located in the <broker_instance_dir> /etc/ directory: Java key store Java trust store (needed only if you require client authentication) Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. In the <web> element, add the following attributes: <web bind="https://0.0.0.0:8161" path="web" keyStorePath="<path_to_keystore>" keyStorePassword="<password>" clientAuth="<true/false>" trustStorePath="<path_to_truststore>" trustStorePassword="<password>"> ... </web> bind For secure connections to the console, change the URI scheme to https . keyStorePath Path of the keystore file. For example: keyStorePath=" <broker_instance_dir> /etc/keystore.jks" keyStorePassword Key store password. This password can be encrypted. clientAuth Specifies whether client authentication is required. The default value is false . trustStorePath Path of the trust store file. You need to define this attribute only if clientAuth is set to true . trustStorePassword Trust store password. This password can be encrypted. Additional resources For more information about encrypting passwords in broker configuration files, including bootstrap.xml , see Encrypting Passwords in Configuration Files . 4.5. Managing brokers using AMQ Management Console You can use AMQ Management Console to view information about a running broker and manage the following resources: Incoming network connections (acceptors) Addresses Queues 4.5.1. Viewing details about the broker To see how the broker is configured, in the left menu, click Artemis . In the folder tree, the local broker is selected by default.
In the main pane, the following tabs are available: Status Displays information about the current status of the broker, such as uptime and cluster information. Also displays the amount of address memory that the broker is currently using. The graph shows this value as a proportion of the global-max-size configuration parameter. Figure 4.2. Status tab Connections Displays information about broker connections, including client, cluster, and bridge connections. Sessions Displays information about all sessions currently open on the broker. Consumers Displays information about all consumers currently open on the broker. Producers Displays information about producers currently open on the broker. Addresses Displays information about addresses on the broker. This includes internal addresses, such as store-and-forward addresses. Queues Displays information about queues on the broker. This includes internal queues, such as store-and-forward queues. Attributes Displays detailed information about attributes configured on the broker. Operations Displays JMX operations that you can execute on the broker from the console. When you click an operation, a dialog box opens that enables you to specify parameter values for the operation. Chart Displays real-time data for attributes configured on the broker. You can edit the chart to specify the attributes that are included in the chart. Broker diagram Displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker. 4.5.2. Viewing the broker diagram You can view a diagram of all AMQ Broker resources in your topology, including brokers (live and backup brokers), producers and consumers, addresses, and queues. Procedure In the left menu, click Artemis . In the main pane, click the Broker diagram tab. The console displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker, as shown in the figure. Figure 4.3. Broker diagram tab To change what items are displayed on the diagram, use the check boxes at the top of the diagram. Click Refresh . To show attributes for the local broker or an address or queue that is connected to it, click that node in the diagram. For example, the following figure shows a diagram that also includes attributes for the local broker. Figure 4.4. Broker diagram tab, including attributes 4.5.3. Viewing acceptors You can view details about the acceptors configured for the broker. Procedure In the left menu, click Artemis . In the folder tree, click acceptors . To view details about how an acceptor is configured, click the acceptor. The console shows the corresponding attributes on the Attributes tab, as shown in the figure. Figure 4.5. AMQP acceptor attributes To see complete details for an attribute, click the attribute. An additional window opens to show the details. 4.5.4. Managing addresses and queues An address represents a messaging endpoint. Within the configuration, a typical address is given a unique name. A queue is associated with an address. There can be multiple queues per address. Once an incoming message is matched to an address, the message is sent on to one or more of its queues, depending on the routing type configured. Queues can be configured to be automatically created and deleted. 4.5.4.1. Creating addresses A typical address is given a unique name, zero or more queues, and a routing type. A routing type determines how messages are sent to the queues associated with an address. 
Addresses can be configured with two different routing types. If you want your messages routed to... Use this routing type... A single queue within the matching address, in a point-to-point manner. Anycast Every queue within the matching address, in a publish-subscribe manner. Multicast You can create and configure addresses and queues, and then delete them when they are no longer in use. Procedure In the left menu, click Artemis . In the folder tree, click addresses . In the main pane, click the Create address tab. A page appears for you to create an address, as shown in the figure. Figure 4.6. Create Address page Complete the following fields: Address name The routing name of the address. Routing type Select one of the following options: Multicast : Messages sent to the address will be distributed to all subscribers in a publish-subscribe manner. Anycast : Messages sent to this address will be distributed to only one subscriber in a point-to-point manner. Both : Enables you to define more than one routing type per address. This typically results in an anti-pattern and is not recommended. Note If an address does use both routing types, and the client does not show a preference for either one, the broker defaults to the anycast routing type. The one exception is when the client uses the MQTT protocol. In that case, the default routing type is multicast . Click Create Address . 4.5.4.2. Sending messages to an address The following procedure shows how to use the console to send a message to an address. Procedure In the left menu, click Artemis . In the folder tree, select an address. On the navigation bar in the main pane, click More Send message . A page appears for you to create a message, as shown in the figure. Figure 4.7. Send Message page If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and then click Send message . 4.5.4.3. Creating queues Queues provide a channel between a producer and a consumer. Prerequisites The address to which you want to bind the queue must exist. To learn how to use the console to create an address, see Section 4.5.4.1, "Creating addresses" . Procedure In the left menu, click Artemis . In the folder tree, select the address to which you want to bind the queue. In the main pane, click the Create queue tab. A page appears for you to create a queue, as shown in the figure. Figure 4.8. Create Queue page Complete the following fields: Queue name A unique name for the queue. Routing type Select one of the following options: Multicast : Messages sent to the parent address will be distributed to all queues bound to the address. Anycast : Only one queue bound to the parent address will receive a copy of the message. Messages will be distributed evenly among all of the queues bound to the address. Durable If you select this option, the queue and its messages will be persistent. Filter An optional filter expression. If you specify a filter, only messages that match the expression are routed to the queue. Max Consumers The maximum number of consumers that can access the queue at a given time. Purge when no consumers If selected, the queue will be purged when no consumers are connected. Click Create Queue . 4.5.4.4.
Checking the status of a queue Charts provide a real-time view of the status of a queue on a broker. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Chart tab. The console displays a chart that shows real-time data for all of the queue attributes. Figure 4.9. Chart tab for a queue Note To view a chart for multiple queues on an address, select the anycast or multicast folder that contains the queues. If necessary, select different criteria for the chart: In the main pane, click Edit . On the Attributes list, select one or more attributes that you want to include in the chart. To select multiple attributes, press and hold the Ctrl key and select each attribute. Click the View Chart button. The chart is updated based on the attributes that you selected. 4.5.4.5. Browsing queues Browsing a queue displays all of the messages in the queue. You can also filter and sort the list to find specific messages. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. Queues are located within the addresses to which they are bound. On the navigation bar in the main pane, click More Browse queue . The messages in the queue are displayed. By default, the first 200 messages are displayed. Figure 4.10. Browse Queue page To browse for a specific message or group of messages, do one of the following: To... Do this... Filter the list of messages In the Filter... text field, enter filter criteria. Click the search (that is, magnifying glass) icon. Sort the list of messages In the list of messages, click a column header. To sort the messages in descending order, click the header a second time. To view the content of a message, click the Show button. You can view the message header, properties, and body. 4.5.4.6. Sending messages to a queue After creating a queue, you can send a message to it. The following procedure outlines the steps required to send a message to an existing queue. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Send message tab. A page appears for you to compose the message. Figure 4.11. Send Message page for a queue If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and click Send message . 4.5.4.7. Resending messages to a queue You can resend previously sent messages. Procedure Browse for the message you want to resend . Click the check box next to the message that you want to resend. Click the Resend button. The message is displayed. Update the message header and body as needed, and then click Send message . 4.5.4.8. Moving messages to a different queue You can move one or more messages in a queue to a different queue. Procedure Browse for the messages you want to move . Click the check box next to each message that you want to move. In the navigation bar, click Move Messages . A confirmation dialog box appears. From the drop-down menu, select the name of the queue to which you want to move the messages. Click Move . 4.5.4.9. Deleting messages or queues You can delete a queue or purge all of the messages from a queue. Procedure Browse for the queue you want to delete or purge .
Do one of the following: To... Do this... Delete a message from the queue Click the check box next to each message that you want to delete. Click the Delete button. Purge all messages from the queue On the navigation bar in the main pane, click Delete queue . Click the Purge Queue button. Delete the queue On the navigation bar in the main pane, click Delete queue . Click the Delete Queue button.
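As a quick way to confirm the remote-access configuration described in Section 4.2 and Section 4.3, you can also request the broker's Jolokia endpoint programmatically instead of through a browser. The following Java sketch is illustrative only and is not part of the product documentation: it assumes the example address 192.168.0.49 , the default web port 8161 , admin/admin console credentials, and that the CORS rules in jolokia-access.xml permit the request; replace these assumed values with your own, and note that the /version path is the standard Jolokia version check rather than an AMQ-specific endpoint.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class JolokiaCheck {
    public static void main(String[] args) throws Exception {
        // Assumed values: replace with your broker's externally-reachable IP address,
        // the web port from the bind attribute in bootstrap.xml, and your console credentials.
        String endpoint = "http://192.168.0.49:8161/console/jolokia/version";
        String credentials = Base64.getEncoder().encodeToString("admin:admin".getBytes());

        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                .header("Authorization", "Basic " + credentials)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A 200 response with a JSON body indicates that the console's Jolokia
        // endpoint is reachable from this host with the configured access rules.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}

If the request is rejected, revisit the allow-origin entries in jolokia-access.xml and the web bind address in bootstrap.xml before testing again from a browser.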
|
[
"<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"redhat-branding\" war=\"redhat-branding.war\"/> <app url=\"artemis-plugin\" war=\"artemis-plugin.war\"/> <app url=\"dispatch-hawtio-console\" war=\"dispatch-hawtio-console.war\"/> <app url=\"console\" war=\"console.war\"/> </web>",
"<web bind=\"http://0.0.0.0:8161\" path=\"web\">",
"<cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors>",
"-Dhawtio.disableProxy=false",
"-Dhawtio.proxyWhitelist=192.168.0.51",
"http://192.168.0.49/console/jolokia",
"https://broker.example.com:8161/console/*",
"console { org.keycloak.adapters.jaas.BearerTokenLoginModule required keycloak-config-file=\"USD{artemis.instance}/etc/keycloak-bearer-token.json\" role-principal-class=org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal ; };",
"{ \"realm\": \" <realm-name> \", \"resource\": \" <client-name> \", \"auth-server-url\": \" <RHSSO-URL> /auth\", \"principal-attribute\": \"preferred_username\", \"use-resource-role-mappings\": true, \"ssl-required\": \"external\", \"confidential-port\": 0 }",
"{ \"realm\": \"<realm-name>\", \"clientId\": \"<client-name>\", \"url\": \" <RHSSO-URL> /auth\" }",
"<security-setting match=\"Info\"> <permission roles=\"amq\" type=\"createDurableQueue\"/> <permission roles=\"amq\" type=\"deleteDurableQueue\"/> <permission roles=\"amq\" type=\"createNonDurableQueue\"/> <permission roles=\"amq\" type=\"deleteNonDurableQueue\"/> <permission roles=\"guest\" type=\"send\"/> <permission roles=\"amq\" type=\"consume\"/> </security-setting>",
"<web bind=\"https://0.0.0.0:8161\" path=\"web\" keyStorePath=\"<path_to_keystore>\" keyStorePassword=\"<password>\" clientAuth=\"<true/false>\" trustStorePath=\"<path_to_truststore>\" trustStorePassword=\"<password>\"> </web>",
"keyStorePath=\" <broker_instance_dir> /etc/keystore.jks\""
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/managing_amq_broker/assembly-using-amq-console-managing
|
Chapter 28. Data Format
|
Chapter 28. Data Format Only producer is supported The Dataformat component allows you to use a Data Format as a Camel component. 28.1. Dependencies When using dataformat with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-dataformat-starter</artifactId> </dependency> 28.2. URI format Where name is the name of the Data Format, followed by the operation, which must be either marshal or unmarshal . The options are used for configuring the Data Format in use. See the Data Format documentation for the options it supports. 28.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 28.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 28.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 28.4. Component Options The Data Format component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 28.5. Endpoint Options The Data Format endpoint is configured using URI syntax: with the following path and query parameters: 28.5.1. Path Parameters (2 parameters) Name Description Default Type name (producer) Required Name of data format. String operation (producer) Required Operation to use, either marshal or unmarshal.
Enum values: marshal unmarshal String 28.5.2. Query Parameters (1 parameters) Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 28.6. Samples For example to use the JAXB Data Format we can do as follows: from("activemq:My.Queue"). to("dataformat:jaxb:unmarshal?contextPath=com.acme.model"). to("mqseries:Another.Queue"); And in XML DSL you do: <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="activemq:My.Queue"/> <to uri="dataformat:jaxb:unmarshal?contextPath=com.acme.model"/> <to uri="mqseries:Another.Queue"/> </route> </camelContext> 28.7. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.dataformat.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.dataformat.enabled Whether to enable auto configuration of the dataformat component. This is enabled by default. Boolean camel.component.dataformat.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
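To complement the unmarshal sample in Section 28.6, the following is a minimal sketch of the opposite direction, marshalling with the same JAXB Data Format. It assumes the same com.acme.model JAXB context path used in the sample above; the direct:marshal and log:marshalled endpoint names are illustrative only and are not taken from the product documentation.

import org.apache.camel.builder.RouteBuilder;

public class MarshalRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Marshal the incoming message body to XML using the JAXB Data Format,
        // then log the marshalled result. The contextPath matches the unmarshal sample.
        from("direct:marshal")
            .to("dataformat:jaxb:marshal?contextPath=com.acme.model")
            .to("log:marshalled");
    }
}

In a Camel Spring Boot application, registering a RouteBuilder like this as a bean is enough for the route to be picked up and started with the auto-configured CamelContext.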
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-dataformat-starter</artifactId> </dependency>",
"dataformat:name:(marshal|unmarshal)[?options]",
"dataformat:name:operation",
"from(\"activemq:My.Queue\"). to(\"dataformat:jaxb:unmarshal?contextPath=com.acme.model\"). to(\"mqseries:Another.Queue\");",
"<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"activemq:My.Queue\"/> <to uri=\"dataformat:jaxb:unmarshal?contextPath=com.acme.model\"/> <to uri=\"mqseries:Another.Queue\"/> </route> </camelContext>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-dataformat-component-starter
|
Chapter 68. Kubernetes Pods
|
Chapter 68. Kubernetes Pods Since Camel 2.17 Both producer and consumer are supported The Kubernetes Pods component is one of the Kubernetes Components. It provides a producer to execute Kubernetes Pods operations and a consumer to consume events related to Pod Objects. 68.1. Dependencies When using kubernetes-pods with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 68.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 68.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 68.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows you to avoid hardcoding urls, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, giving you more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 68.3. Component Options The Kubernetes Pods component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 68.4. Endpoint Options The Kubernetes Pods endpoint is configured using URI syntax: with the following path and query parameters: 68.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 68.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 68.5. Message Headers The Kubernetes Pods component supports 7 message headers, which are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesPodsLabels (producer) Constant: KUBERNETES_PODS_LABELS The pod labels. Map CamelKubernetesPodName (producer) Constant: KUBERNETES_POD_NAME The pod name. String CamelKubernetesPodSpec (producer) Constant: KUBERNETES_POD_SPEC The spec for a pod. PodSpec CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 68.6. Supported producer operations listPods listPodsByLabels getPod createPod updatePod deletePod 68.7. Kubernetes Pods Producer Examples listPods: this operation lists the pods on a Kubernetes cluster. from("direct:list"). toF("kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPods"). to("mock:result"); This operation returns a List of Pods from your cluster. listPodsByLabels: this operation lists the pods by labels on a Kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PODS_LABELS, labels); } }). toF("kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPodsByLabels"). to("mock:result"); This operation returns a List of Pods from your cluster, using a label selector (keys key1 and key2 with values value1 and value2). 68.8. Kubernetes Pods Consumer Example fromF("kubernetes-pods://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result"); public class KubernetesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Pod pod = exchange.getIn().getBody(Pod.class); log.info("Got event with pod name: " + pod.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events on the namespace default for the pod test. 68.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below.
Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
Type: KubernetesClient.
camel.component.kubernetes-hpa.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.kubernetes-job.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.kubernetes-job.bridge-error-handler: Allows for bridging the consumer to the Camel routing Error Handler, so that any exception that occurs while the consumer is trying to pick up incoming messages is processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and then ignored. Default: false. Type: Boolean.
camel.component.kubernetes-job.enabled: Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-job.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-job.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.kubernetes-namespaces.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.kubernetes-namespaces.bridge-error-handler: Allows for bridging the consumer to the Camel routing Error Handler, so that any exception that occurs while the consumer is trying to pick up incoming messages is processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and then ignored. Default: false. Type: Boolean.
camel.component.kubernetes-namespaces.enabled: Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-namespaces.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-namespaces.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.kubernetes-nodes.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.kubernetes-nodes.bridge-error-handler: Allows for bridging the consumer to the Camel routing Error Handler, so that any exception that occurs while the consumer is trying to pick up incoming messages is processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and then ignored. Default: false. Type: Boolean.
camel.component.kubernetes-nodes.enabled: Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-nodes.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-nodes.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.kubernetes-persistent-volumes-claims.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.kubernetes-persistent-volumes-claims.enabled: Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-persistent-volumes-claims.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.kubernetes-persistent-volumes.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.kubernetes-persistent-volumes.enabled: Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-persistent-volumes.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-persistent-volumes.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.kubernetes-pods.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.kubernetes-pods.bridge-error-handler: Allows for bridging the consumer to the Camel routing Error Handler, so that any exception that occurs while the consumer is trying to pick up incoming messages is processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and then ignored. Default: false. Type: Boolean.
camel.component.kubernetes-pods.enabled: Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-pods.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-pods.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.kubernetes-replication-controllers.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.kubernetes-replication-controllers.bridge-error-handler: Allows for bridging the consumer to the Camel routing Error Handler, so that any exception that occurs while the consumer is trying to pick up incoming messages is processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and then ignored. Default: false. Type: Boolean.
camel.component.kubernetes-replication-controllers.enabled: Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-replication-controllers.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-replication-controllers.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.kubernetes-resources-quota.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.kubernetes-resources-quota.enabled: Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-resources-quota.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-resources-quota.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.kubernetes-secrets.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.kubernetes-secrets.enabled: Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-secrets.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-secrets.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.kubernetes-service-accounts.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.kubernetes-service-accounts.enabled: Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-service-accounts.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-service-accounts.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.kubernetes-services.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.kubernetes-services.bridge-error-handler: Allows for bridging the consumer to the Camel routing Error Handler, so that any exception that occurs while the consumer is trying to pick up incoming messages is processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and then ignored. Default: false. Type: Boolean.
camel.component.kubernetes-services.enabled: Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-services.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-services.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.openshift-build-configs.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.openshift-build-configs.enabled: Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Type: Boolean.
camel.component.openshift-build-configs.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.openshift-build-configs.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.openshift-builds.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.openshift-builds.enabled: Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Type: Boolean.
camel.component.openshift-builds.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.openshift-builds.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
camel.component.openshift-deploymentconfigs.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry for a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.
camel.component.openshift-deploymentconfigs.bridge-error-handler: Allows for bridging the consumer to the Camel routing Error Handler, so that any exception that occurs while the consumer is trying to pick up incoming messages is processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and then ignored. Default: false. Type: Boolean.
camel.component.openshift-deploymentconfigs.enabled: Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Type: Boolean.
camel.component.openshift-deploymentconfigs.kubernetes-client: To use an existing Kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.openshift-deploymentconfigs.lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where the producer might otherwise fail during startup and cause the route startup to fail; the startup failure can then be handled by Camel's routing error handlers while messages are routed. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.
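For illustration, these starter options can be set like any other Spring Boot configuration property, for example in an application.yaml file. The following is a minimal sketch only; the selected components, the flag values, and the #kubernetesClient registry reference are assumptions for the example, not recommended settings:

camel:
  component:
    kubernetes-pods:
      # Reuse an existing io.fabric8 KubernetesClient registered in the Camel registry
      # under the (assumed) bean name "kubernetesClient".
      kubernetes-client: "#kubernetesClient"
      lazy-start-producer: false
    kubernetes-job:
      enabled: true
      autowired-enabled: true
      bridge-error-handler: false

Options of object type, such as kubernetes-client, can usually be bound to an existing bean in the registry by referencing its name with the # prefix, as shown in the sketch.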
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>",
"kubernetes-pods:masterUrl",
"from(\"direct:list\").toF(\"kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPods\").to(\"mock:result\");",
"from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PODS_LABELS, labels); } }).toF(\"kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPodsByLabels\").to(\"mock:result\");",
"fromF(\"kubernetes-pods://%s?oauthToken=%s&namespace=default&resourceName=test\", host, authToken).process(new KubernetesProcessor()).to(\"mock:result\"); public class KubernetesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Pod pod = exchange.getIn().getBody(Pod.class); log.info(\"Got event with pod name: \" + pod.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-pods-component-starter
|
Chapter 1. Integrating an overcloud with Ceph Storage
|
Chapter 1. Integrating an overcloud with Ceph Storage

Red Hat OpenStack Platform director creates a cloud environment called the overcloud. You can use director to configure extra features for an overcloud, such as integration with Red Hat Ceph Storage. You can integrate your overcloud with Ceph Storage clusters created with director or with existing Ceph Storage clusters. The default integration with Ceph configures the Image service (glance), the Block Storage service (cinder), and the Compute service (nova) to use block storage over the Rados Block Device (RBD) protocol. Additional integration options for File and Object storage might also be included. For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.

1.1. Deploying the Shared File Systems service with external CephFS

You can deploy the Shared File Systems service (manila) with CephFS by using Red Hat OpenStack Platform (RHOSP) director. You can use the Shared File Systems service with the NFS protocol or the native CephFS protocol.

Important: You cannot use the Shared File Systems service with the CephFS native driver to serve shares to Red Hat OpenShift Container Platform through Manila CSI. Red Hat does not support this type of deployment. For more information, contact Red Hat Support. The Shared File Systems service with CephFS through NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large scale deployments. For more information about CSI workload recommendations, see https://access.redhat.com/articles/6667651 .

To use native CephFS shared file systems, clients require access to the Ceph public network. When you integrate an overcloud with an existing Ceph Storage cluster, director does not create an isolated storage network to designate as the Ceph public network. This network is assumed to already exist. Do not provide direct access to the Ceph public network; instead, allow tenants to create a router to connect to the Ceph public network.

NFS-Ganesha gateway

When you use CephFS through the NFS protocol, director deploys the NFS-Ganesha gateway on Controller nodes managed by Pacemaker (PCS). PCS manages cluster availability by using an active-passive configuration. The NFS-Ganesha gateway is supported with Red Hat Ceph Storage 4.x (Ceph package 14.x) and Red Hat Ceph Storage 5.x (Ceph package 16.x). For information about how to determine the Ceph Storage release installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions.

Prerequisites

Before you configure the Shared File Systems service with an external Ceph Storage cluster, complete the following prerequisites:

Verify that your external Ceph Storage cluster has an active Metadata Server (MDS).

The external Ceph Storage cluster must have a CephFS file system that is supported by CephFS data and metadata pools. Verify the pools in the CephFS file system, and note the names of these pools to configure the director parameters ManilaCephFSDataPoolName and ManilaCephFSMetadataPoolName. For more information about this configuration, see Creating a custom environment file.

The external Ceph Storage cluster must have a cephx client name and key for the Shared File Systems service. Verify the keyring. Replace <client name> with your cephx client name.

1.2. Configuring Ceph Object Store to use external Ceph Object Gateway

Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone). For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway to use Keystone authentication in the Using Keystone with the Ceph Object Gateway Guide.
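Returning to the CephFS prerequisites in section 1.1: for illustration, the pool names noted from the ceph fs ls output are passed to director through a custom environment file. The following is a sketch only; the pool names shown are placeholder assumptions, not values taken from any particular cluster:

parameter_defaults:
  # Placeholder pool names - replace with the data and metadata pools
  # reported by `ceph fs ls` on your external Ceph Storage cluster.
  ManilaCephFSDataPoolName: manila_data
  ManilaCephFSMetadataPoolName: manila_metadata

Such a file is typically included with the -e option of the openstack overcloud deploy command, together with the other environment files for the deployment.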
|
[
"ceph -s",
"ceph fs ls",
"ceph auth get client.<client name>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/integrating_an_overcloud_with_an_existing_red_hat_ceph_storage_cluster/assembly-integrating-with-ceph-storage_existing-ceph
|
2.2.3.2. Qt Creator
|
2.2.3.2. Qt Creator

Qt Creator is a cross-platform IDE tailored to the requirements of Qt developers. It includes the following graphical tools:

An advanced C++ code editor
Integrated GUI layout and forms designer
Project and build management tools
Integrated, context-sensitive help system
Visual debugger
Rapid code navigation tools
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/qt-creator
|