2.5. NetworkManager Tools
2.5. NetworkManager Tools Table 2.1. A Summary of NetworkManager Tools and Applications
nmcli: A command-line tool that enables users and scripts to interact with NetworkManager. Note that nmcli can be used on systems without a GUI, such as servers, to control all aspects of NetworkManager. It has the same functionality as the GUI tools.
nmtui: A simple curses-based text user interface (TUI) for NetworkManager.
nm-connection-editor: A graphical user interface tool for certain tasks not yet handled by the control-center utility, such as configuring bonds and teaming connections. You can add, remove, and modify network connections stored by NetworkManager. To start it, enter nm-connection-editor in a terminal:
control-center: A graphical user interface tool provided by the GNOME Shell, available for desktop users. It incorporates a Network settings tool. To start it, press the Super key to enter the Activities Overview, type Network, and then press Enter. The Network settings tool appears.
network connection icon: A graphical user interface tool provided by the GNOME Shell that represents network connection states as reported by NetworkManager. The icon has multiple states that serve as visual indicators for the type of connection you are currently using.
[ "~]USD nm-connection-editor" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-NetworkManager_Tools
1.2. Virtualization Solutions
1.2. Virtualization Solutions Red Hat offers the following major virtualization solutions, each with a different user focus and features:
Red Hat Enterprise Linux The ability to create, run, and manage virtual machines, as well as a number of virtualization tools and features, is included in Red Hat Enterprise Linux 7. This solution supports a limited number of running guests per host, as well as a limited range of guest types. As such, virtualization on Red Hat Enterprise Linux can be useful, for example, to developers who require testing in multiple environments, or to small businesses running several servers that do not have strict uptime requirements or service-level agreements (SLAs). Important This guide provides information about virtualization on Red Hat Enterprise Linux and does not go into detail about other virtualization solutions.
Red Hat Virtualization Red Hat Virtualization (RHV) is based on the same Kernel-based Virtual Machine (KVM) technology as virtualization on Red Hat Enterprise Linux, but offers an enhanced array of features. Designed for enterprise-class scalability and performance, it enables management of your entire virtual infrastructure, including hosts, virtual machines, networks, storage, and users, from a centralized graphical interface. Note For more information about the differences between virtualization in Red Hat Enterprise Linux and Red Hat Virtualization, see the Red Hat Customer Portal. Red Hat Virtualization can be used by enterprises running larger deployments or mission-critical applications. Examples of large deployments suited to Red Hat Virtualization include databases, trading platforms, and messaging systems that must run continuously without any downtime. Note For more information about Red Hat Virtualization, or to download a fully supported 60-day evaluation version, see http://www.redhat.com/en/technologies/virtualization/enterprise-virtualization . Alternatively, see the Red Hat Virtualization documentation suite.
Red Hat OpenStack Platform Red Hat OpenStack Platform offers an integrated foundation to create, deploy, and scale a secure and reliable public or private OpenStack cloud. Note For more information about Red Hat OpenStack Platform, or to download a 60-day evaluation version, see https://www.redhat.com/en/technologies/linux-platforms/openstack-platform . Alternatively, see the Red Hat OpenStack Platform documentation suite.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/virtualization_solutions
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/making-open-source-more-inclusive
14.3.3.3. Backup Domain Controller (BDC) using LDAP
14.3.3.3. Backup Domain Controller (BDC) using LDAP A BDC is an integral part of any enterprise Samba/LDAP solution. The smb.conf files between the PDC and BDC are virtually identical except for the domain master directive. Make sure the PDC has a value of Yes and the BDC has a value of No. If you have multiple BDCs for a PDC, the os level directive is useful in setting the BDC election priority. The higher the value, the higher the server priority for connecting clients. Note A BDC can either use the LDAP database of the PDC or have its own LDAP database. This example uses the LDAP database of the PDC, as seen in the passdb backend directive.
[ "[global] workgroup = DOCS netbios name = DOCS_SRV2 passdb backend = ldapsam:ldap://ldap.example.com username map = /etc/samba/smbusers security = user add user script = /usr/sbin/useradd -m %u delete user script = /usr/sbin/userdel -r %u add group script = /usr/sbin/groupadd %g delete group script = /usr/sbin/groupdel %g add user to group script = /usr/sbin/usermod -G %g %u add machine script = /usr/sbin/useradd -s /bin/false -d /dev/null -g machines %u The following specifies the default logon script Per user logon scripts can be specified in the user account using pdbedit logon script = scripts\\logon.bat This sets the default profile path. Set per user paths with pdbedit logon path = \\\\%L\\Profiles\\%U logon drive = H: logon home = \\\\%L\\%U domain logons = Yes os level = 35 preferred master = Yes domain master = No ldap suffix = dc=example,dc=com ldap machine suffix = ou=People ldap user suffix = ou=People ldap group suffix = ou=Group ldap idmap suffix = ou=People ldap admin dn = cn=Manager ldap ssl = no ldap passwd sync = yes idmap uid = 15000-20000 idmap gid = 15000-20000 Other resource shares" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/samba-BDC-LDAP
Chapter 23. Load balancing with MetalLB
Chapter 23. Load balancing with MetalLB 23.1. About MetalLB and the MetalLB Operator As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add a fault-tolerant external IP address for the service. The external IP address is added to the host network for your cluster. 23.1.1. When to use MetalLB Using MetalLB is valuable when you have a bare-metal cluster, or an infrastructure that is like bare metal, and you want fault-tolerant access to an application through an external IP address. You must configure your networking infrastructure to ensure that network traffic for the external IP address is routed from clients to the host network for the cluster. After deploying MetalLB with the MetalLB Operator, when you add a service of type LoadBalancer , MetalLB provides a platform-native load balancer. 23.1.2. MetalLB Operator custom resources The MetalLB Operator monitors its own namespace for two custom resources: MetalLB When you add a MetalLB custom resource to the cluster, the MetalLB Operator deploys MetalLB on the cluster. The Operator only supports a single instance of the custom resource. If the instance is deleted, the Operator removes MetalLB from the cluster. AddressPool MetalLB requires one or more pools of IP addresses that it can assign to a service when you add a service of type LoadBalancer . When you add an AddressPool custom resource to the cluster, the MetalLB Operator configures MetalLB so that it can assign IP addresses from the pool. An address pool includes a list of IP addresses. The list can be a single IP address that is set using a range, such as 1.1.1.1-1.1.1.1, a range specified in CIDR notation, a range specified as a starting and ending address separated by a hyphen, or a combination of the three. An address pool requires a name. The documentation uses names like doc-example , doc-example-reserved , and doc-example-ipv6 . An address pool specifies whether MetalLB can automatically assign IP addresses from the pool or whether the IP addresses are reserved for services that explicitly specify the pool by name. After you add the MetalLB custom resource to the cluster and the Operator deploys MetalLB, the MetalLB software components, controller and speaker , begin running. 23.1.3. MetalLB software components When you install the MetalLB Operator, the metallb-operator-controller-manager deployment starts a pod. The pod is the implementation of the Operator. The pod monitors for changes to the MetalLB custom resource and AddressPool custom resources. When the Operator starts an instance of MetalLB, it starts a controller deployment and a speaker daemon set. controller The Operator starts the deployment and a single pod. When you add a service of type LoadBalancer , Kubernetes uses the controller to allocate an IP address from an address pool. In case of a service failure, verify you have the following entry in your controller pod logs: Example output "event":"ipAllocated","ip":"172.22.0.201","msg":"IP address assigned by controller speaker The Operator starts a daemon set with one speaker pod for each node in your cluster. If the controller allocated the IP address to the service and service is still unavailable, read the speaker pod logs. If the speaker pod is unavailable, run the oc describe pod -n command. 
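A minimal sketch of the log checks described above, assuming the metallb-system namespace used later in this chapter (the speaker pod name is a placeholder):
$ oc logs -n metallb-system deployment/controller
$ oc logs -n metallb-system <speaker_pod_name>
$ oc describe pod -n metallb-system <speaker_pod_name>
The controller log should contain the ipAllocated entry shown above when an address is assigned; the speaker log and pod description help identify why an allocated address is not being announced.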
For layer 2 mode, after the controller allocates an IP address for the service, each speaker pod determines if it is on the same node as an endpoint for the service. An algorithm that involves hashing the node name and the service name is used to select a single speaker pod to announce the load balancer IP address. The speaker uses Address Resolution Protocol (ARP) to announce IPv4 addresses and Neighbor Discovery Protocol (NDP) to announce IPv6 addresses. Requests for the load balancer IP address are routed to the node with the speaker that announces the IP address. After the node receives the packets, the service proxy routes the packets to an endpoint for the service. The endpoint can be on the same node in the optimal case, or it can be on another node. The service proxy chooses an endpoint each time a connection is established. 23.1.4. MetalLB concepts for layer 2 mode In layer 2 mode, the speaker pod on one node announces the external IP address for a service to the host network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface. Note Since layer 2 mode relies on ARP and NDP, the client must be on the same subnet of the nodes announcing the service in order for MetalLB to work. Additionally, the IP address assigned to the service must be on the same subnet of the network used by the client to reach the service. The speaker pod responds to ARP requests for IPv4 services and NDP requests for IPv6. In layer 2 mode, all traffic for a service IP address is routed through one node. After traffic enters the node, the service proxy for the CNI network provider distributes the traffic to all the pods for the service. Because all traffic for a service enters through a single node in layer 2 mode, in a strict sense, MetalLB does not implement a load balancer for layer 2. Rather, MetalLB implements a failover mechanism for layer 2 so that when a speaker pod becomes unavailable, a speaker pod on a different node can announce the service IP address. When a node becomes unavailable, failover is automatic. The speaker pods on the other nodes detect that a node is unavailable and a new speaker pod and node take ownership of the service IP address from the failed node. The preceding graphic shows the following concepts related to MetalLB: An application is available through a service that has a cluster IP on the 172.130.0.0/16 subnet. That IP address is accessible from inside the cluster. The service also has an external IP address that MetalLB assigned to the service, 192.168.100.200 . Nodes 1 and 3 have a pod for the application. The speaker daemon set runs a pod on each node. The MetalLB Operator starts these pods. Each speaker pod is a host-networked pod. The IP address for the pod is identical to the IP address for the node on the host network. The speaker pod on node 1 uses ARP to announce the external IP address for the service, 192.168.100.200 . The speaker pod that announces the external IP address must be on the same node as an endpoint for the service and the endpoint must be in the Ready condition. Client traffic is routed to the host network and connects to the 192.168.100.200 IP address. After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service. If node 1 becomes unavailable, the external IP address fails over to another node. 
On another node that has an instance of the application pod and service endpoint, the speaker pod begins to announce the external IP address, 192.168.100.200 and the new node receives the client traffic. In the diagram, the only candidate is node 3. 23.1.4.1. Layer 2 and external traffic policy With layer 2 mode, one node in your cluster receives all the traffic for the service IP address. How your cluster handles the traffic after it enters the node is affected by the external traffic policy. cluster This is the default value for spec.externalTrafficPolicy . With the cluster traffic policy, after the node receives the traffic, the service proxy distributes the traffic to all the pods in your service. This policy provides uniform traffic distribution across the pods, but it obscures the client IP address and it can appear to the application in your pods that the traffic originates from the node rather than the client. local With the local traffic policy, after the node receives the traffic, the service proxy only sends traffic to the pods on the same node. For example, if the speaker pod on node A announces the external service IP, then all traffic is sent to node A. After the traffic enters node A, the service proxy only sends traffic to pods for the service that are also on node A. Pods for the service that are on additional nodes do not receive any traffic from node A. Pods for the service on additional nodes act as replicas in case failover is needed. This policy does not affect the client IP address. Application pods can determine the client IP address from the incoming connections. 23.1.5. Limitations and restrictions 23.1.5.1. Support for layer 2 only When you install and configure MetalLB on OpenShift Container Platform 4.9 with the MetalLB Operator, support is restricted to layer 2 mode only. In comparison, the open source MetalLB project offers load balancing for layer 2 mode and a mode for layer 3 that uses border gateway protocol (BGP). 23.1.5.2. Support for single stack networking Although you can specify IPv4 addresses and IPv6 addresses in the same address pool, MetalLB only assigns one IP address for the load balancer. When MetalLB is deployed on a cluster that is configured for dual-stack networking, MetalLB assigns one IPv4 or IPv6 address for the load balancer, depending on the IP address family of the cluster IP for the service. For example, if the cluster IP of the service is IPv4, then MetalLB assigns an IPv4 address for the load balancer. MetalLB does not assign an IPv4 and an IPv6 address simultaneously. IPv6 is only supported for clusters that use the OVN-Kubernetes network provider. 23.1.5.3. Infrastructure considerations for MetalLB MetalLB is primarily useful for on-premise, bare metal installations because these installations do not include a native load-balancer capability. In addition to bare metal installations, installations of OpenShift Container Platform on some infrastructures might not include a native load-balancer capability. For example, the following infrastructures can benefit from adding the MetalLB Operator: Bare metal VMware vSphere MetalLB Operator and MetalLB are supported with the OpenShift SDN and OVN-Kubernetes network providers. 23.1.5.4. Limitations for layer 2 mode 23.1.5.4.1. Single-node bottleneck MetalLB routes all traffic for a service through a single node, the node can become a bottleneck and limit performance. Layer 2 mode limits the ingress bandwidth for your service to the bandwidth of a single node. 
This is a fundamental limitation of using ARP and NDP to direct traffic. 23.1.5.4.2. Slow failover performance Failover between nodes depends on cooperation from the clients. When a failover occurs, MetalLB sends gratuitous ARP packets to notify clients that the MAC address associated with the service IP has changed. Most client operating systems handle gratuitous ARP packets correctly and update their neighbor caches promptly. When clients update their caches quickly, failover completes within a few seconds. Clients typically fail over to a new node within 10 seconds. However, some client operating systems either do not handle gratuitous ARP packets at all or have outdated implementations that delay the cache update. Recent versions of common operating systems such as Windows, macOS, and Linux implement layer 2 failover correctly. Issues with slow failover are not expected except for older and less common client operating systems. To minimize the impact from a planned failover on outdated clients, keep the old node running for a few minutes after flipping leadership. The old node can continue to forward traffic for outdated clients until their caches refresh. During an unplanned failover, the service IPs are unreachable until the outdated clients refresh their cache entries. 23.1.5.5. Incompatibility with IP failover MetalLB is incompatible with the IP failover feature. Before you install the MetalLB Operator, remove IP failover. 23.1.6. Additional resources Comparison: Fault tolerant access to external IP addresses Removing IP failover 23.2. Installing the MetalLB Operator As a cluster administrator, you can add the MetallB Operator so that the Operator can manage the lifecycle for an instance of MetalLB on your cluster. The installation procedures use the metallb-system namespace. You can install the Operator and configure custom resources in a different namespace. The Operator starts MetalLB in the same namespace that the Operator is installed in. MetalLB and IP failover are incompatible. If you configured IP failover for your cluster, perform the steps to remove IP failover before you install the Operator. 23.2.1. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type metallb to find the MetalLB Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Select an Update Channel (if more than one is available). Select Automatic or Manual approval strategy, as described earlier. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . 
If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date, select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 23.2.2. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Install the OpenShift CLI (oc). Log in as a user with cluster-admin privileges. Procedure Confirm that the MetalLB Operator is available: $ oc get packagemanifests -n openshift-marketplace metallb-operator Example output NAME CATALOG AGE metallb-operator Red Hat Operators 9h Create the metallb-system namespace: $ cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF Create an Operator group custom resource in the namespace: $ cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system spec: targetNamespaces: - metallb-system EOF Confirm the Operator group is installed in the namespace: $ oc get operatorgroup -n metallb-system Example output NAME AGE metallb-operator 14m Subscribe to the MetalLB Operator. Run the following command to get the OpenShift Container Platform major and minor version. You use the values to set the channel value in the next step. $ OC_VERSION=$(oc version -o yaml | grep openshiftVersion | \ grep -o '[0-9]*[.][0-9]*' | head -1) To create a subscription custom resource for the Operator, enter the following command: $ cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: "${OC_VERSION}" name: metallb-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Confirm the install plan is in the namespace: $ oc get installplan -n metallb-system Example output NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.9.0-nnnnnnnnnnnn Automatic true To verify that the Operator is installed, enter the following command: $ oc get clusterserviceversion -n metallb-system \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase metallb-operator.4.9.0-nnnnnnnnnnnn Succeeded 23.2.3. Starting MetalLB on your cluster After you install the Operator, you need to configure a single instance of a MetalLB custom resource. After you configure the custom resource, the Operator starts MetalLB on your cluster. Prerequisites Install the OpenShift CLI (oc). Log in as a user with cluster-admin privileges. Install the MetalLB Operator.
Procedure Create a single instance of a MetalLB custom resource: $ cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF Verification Confirm that the deployment for the MetalLB controller and the daemon set for the MetalLB speaker are running. Check that the deployment for the controller is running: $ oc get deployment -n metallb-system controller Example output NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m Check that the daemon set for the speaker is running: $ oc get daemonset -n metallb-system speaker Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m The example output indicates 6 speaker pods. The number of speaker pods in your cluster might differ from the example output. Make sure the output indicates one pod for each node in your cluster. 23.2.4. Next steps Configuring MetalLB address pools 23.3. Configuring MetalLB address pools As a cluster administrator, you can add, modify, and delete address pools. The MetalLB Operator uses the address pool custom resources to set the IP addresses that MetalLB can assign to services. 23.3.1. About the address pool custom resource The fields for the address pool custom resource are described in the following table. Table 23.1. MetalLB address pool custom resource
metadata.name (string): Specifies the name for the address pool. When you add a service, you can specify this pool name in the metallb.universe.tf/address-pool annotation to select an IP address from a specific pool. The names doc-example, silver, and gold are used throughout the documentation.
metadata.namespace (string): Specifies the namespace for the address pool. Specify the same namespace that the MetalLB Operator uses.
spec.protocol (string): Specifies the protocol for announcing the load balancer IP address to peer nodes. The only supported value is layer2.
spec.autoAssign (boolean): Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool. Specify false if you want to explicitly request an IP address from this pool with the metallb.universe.tf/address-pool annotation. The default value is true.
spec.addresses (array): Specifies a list of IP addresses for MetalLB to assign to services. You can specify multiple ranges in a single pool. Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen.
23.3.2. Configuring an address pool As a cluster administrator, you can add address pools to your cluster to control the IP addresses that MetalLB can assign to load-balancer services. Prerequisites Install the OpenShift CLI (oc). Log in as a user with cluster-admin privileges. Procedure Create a file, such as addresspool.yaml, with content like the following example: apiVersion: metallb.io/v1alpha1 kind: AddressPool metadata: namespace: metallb-system name: doc-example spec: protocol: layer2 addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75 Apply the configuration for the address pool: $ oc apply -f addresspool.yaml Verification View the address pool: $ oc describe -n metallb-system addresspool doc-example Example output Name: doc-example Namespace: metallb-system Labels: <none> Annotations: <none> API Version: metallb.io/v1alpha1 Kind: AddressPool Metadata: ...
Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Protocol: layer2 Events: <none> Confirm that the address pool name, such as doc-example , and the IP address ranges appear in the output. 23.3.3. Example address pool configurations 23.3.3.1. Example: IPv4 and CIDR ranges You can specify a range of IP addresses in CIDR notation. You can combine CIDR notation with the notation that uses a hyphen to separate lower and upper bounds. apiVersion: metallb.io/v1beta1 kind: AddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: protocol: layer2 addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5 23.3.3.2. Example: Reserve IP addresses You can set the autoAssign field to false to prevent MetalLB from automatically assigning the IP addresses from the pool. When you add a service, you can request a specific IP address from the pool or you can specify the pool name in an annotation to request any IP address from the pool. apiVersion: metallb.io/v1beta1 kind: AddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: protocol: layer2 addresses: - 10.0.100.0/28 autoAssign: false 23.3.3.3. Example: IPv6 address pool You can add address pools that use IPv6. The following example shows a single IPv6 range. However, you can specify multiple ranges in the addresses list, just like several IPv4 examples. apiVersion: metallb.io/v1beta1 kind: AddressPool metadata: name: doc-example-ipv6 namespace: metallb-system spec: protocol: layer2 addresses: - 2002:2:2::1-2002:2:2::100 23.3.4. steps Configuring services to use MetalLB 23.4. Configuring services to use MetalLB As a cluster administrator, when you add a service of type LoadBalancer , you can control how MetalLB assigns an IP address. 23.4.1. Request a specific IP address Like some other load-balancer implementations, MetalLB accepts the spec.loadBalancerIP field in the service specification. If the requested IP address is within a range from any address pool, MetalLB assigns the requested IP address. If the requested IP address is not within any range, MetalLB reports a warning. Example service YAML for a specific IP address apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address> If MetalLB cannot assign the requested IP address, the EXTERNAL-IP for the service reports <pending> and running oc describe service <service_name> includes an event like the following example. Example event when MetalLB cannot assign a requested IP address ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config 23.4.2. Request an IP address from a specific pool To assign an IP address from a specific range, but you are not concerned with the specific IP address, then you can use the metallb.universe.tf/address-pool annotation to request an IP address from the specified address pool. 
Example service YAML for an IP address from a specific pool apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer If the address pool that you specify for <address_pool_name> does not exist, MetalLB attempts to assign an IP address from any pool that permits automatic assignment. 23.4.3. Accept any IP address By default, address pools are configured to permit automatic assignment. MetalLB assigns an IP address from these address pools. To accept any IP address from any pool that is configured for automatic assignment, no special annotation or configuration is required. Example service YAML for accepting any IP address apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer 23.4.4. Share a specific IP address By default, services do not share IP addresses. However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the metallb.universe.tf/allow-shared-ip annotation to the services. apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" 5 spec: ports: - name: https port: 443 6 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 7 type: LoadBalancer loadBalancerIP: 172.31.249.7 8 1 5 Specify the same value for the metallb.universe.tf/allow-shared-ip annotation. This value is referred to as the sharing key . 2 6 Specify different port numbers for the services. 3 7 Specify identical pod selectors if you must specify externalTrafficPolicy: local so the services send traffic to the same set of pods. If you use the cluster external traffic policy, then the pod selectors do not need to be identical. 4 8 Optional: If you specify the three preceding items, MetalLB might colocate the services on the same IP address. To ensure that services share an IP address, specify the IP address to share. By default, Kubernetes does not allow multiprotocol load balancer services. This limitation would normally make it impossible to run a service like DNS that needs to listen on both TCP and UDP. To work around this limitation of Kubernetes with MetalLB, create two services: For one service, specify TCP and for the second service, specify UDP. In both services, specify the same pod selector. Specify the same sharing key and spec.loadBalancerIP value to colocate the TCP and UDP services on the same IP address. 23.4.5. Configuring a service with MetalLB You can configure a load-balancing service to use an external IP address from an address pool. Prerequisites Install the OpenShift CLI ( oc ). Install the MetalLB Operator and start MetalLB. Configure at least one address pool. Configure your network to route traffic from the clients to the host network for the cluster. Procedure Create a <service_name>.yaml file. In the file, ensure that the spec.type field is set to LoadBalancer . 
Refer to the examples for information about how to request the external IP address that MetalLB assigns to the service. Create the service: $ oc apply -f <service_name>.yaml Example output service/<service_name> created Verification Describe the service: $ oc describe service <service_name> Example output <.> The annotation is present if you request an IP address from a specific pool. <.> The service type must indicate LoadBalancer. <.> The load-balancer ingress field indicates the external IP address if the service is assigned correctly. <.> The events field indicates the node name that is assigned to announce the external IP address. If you experience an error, the events field indicates the reason for the error.
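As a sketch of the multiprotocol workaround described in section 23.4.4, the following two hypothetical services share one IP address so that a DNS workload can serve both TCP and UDP; the service names, selector label, and the shared address (taken from the doc-example pool range) are illustrative assumptions:
apiVersion: v1
kind: Service
metadata:
  name: dns-tcp
  annotations:
    metallb.universe.tf/address-pool: doc-example
    metallb.universe.tf/allow-shared-ip: "dns-svc"
spec:
  selector:
    app: dns
  ports:
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  type: LoadBalancer
  loadBalancerIP: 203.0.113.5
---
apiVersion: v1
kind: Service
metadata:
  name: dns-udp
  annotations:
    metallb.universe.tf/address-pool: doc-example
    metallb.universe.tf/allow-shared-ip: "dns-svc"
spec:
  selector:
    app: dns
  ports:
  - name: dns-udp
    port: 53
    protocol: UDP
    targetPort: 53
  type: LoadBalancer
  loadBalancerIP: 203.0.113.5
Because both services carry the same sharing key, use the same pod selector, and request the same spec.loadBalancerIP, MetalLB can place them on a single address, which is otherwise not possible for a multiprotocol service in Kubernetes.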
[ "\"event\":\"ipAllocated\",\"ip\":\"172.22.0.201\",\"msg\":\"IP address assigned by controller", "oc get packagemanifests -n openshift-marketplace metallb-operator", "NAME CATALOG AGE metallb-operator Red Hat Operators 9h", "cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system spec: targetNamespaces: - metallb-system EOF", "oc get operatorgroup -n metallb-system", "NAME AGE metallb-operator 14m", "OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)", "cat << EOF| oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: \"USD{OC_VERSION}\" name: metallb-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get installplan -n metallb-system", "NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.9.0-nnnnnnnnnnnn Automatic true", "oc get clusterserviceversion -n metallb-system -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase metallb-operator.4.9.0-nnnnnnnnnnnn Succeeded", "cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF", "oc get deployment -n metallb-system controller", "NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m", "oc get daemonset -n metallb-system speaker", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m", "apiVersion: metallb.io/v1alpha1 kind: AddressPool metadata: namespace: metallb-system name: doc-example spec: protocol: layer2 addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75", "oc apply -f addresspool.yaml", "oc describe -n metallb-system addresspool doc-example", "Name: doc-example Namespace: metallb-system Labels: <none> Annotations: <none> API Version: metallb.io/v1alpha1 Kind: AddressPool Metadata: Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Protocol: layer2 Events: <none>", "apiVersion: metallb.io/v1beta1 kind: AddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: protocol: layer2 addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5", "apiVersion: metallb.io/v1beta1 kind: AddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: protocol: layer2 addresses: - 10.0.100.0/28 autoAssign: false", "apiVersion: metallb.io/v1beta1 kind: AddressPool metadata: name: doc-example-ipv6 namespace: metallb-system spec: protocol: layer2 addresses: - 2002:2:2::1-2002:2:2::100", "apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address>", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for \"default/invalid-request\": \"4.3.2.1\" is not allowed in config", "apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer", "apiVersion: v1 kind: Service metadata: 
name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer", "apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 5 spec: ports: - name: https port: 443 6 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 7 type: LoadBalancer loadBalancerIP: 172.31.249.7 8", "oc apply -f <service_name>.yaml", "service/<service_name> created", "oc describe service <service_name>", "Name: <service_name> Namespace: default Labels: <none> Annotations: metallb.universe.tf/address-pool: doc-example <.> Selector: app=service_name Type: LoadBalancer <.> IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.237.254 IPs: 10.105.237.254 LoadBalancer Ingress: 192.168.100.5 <.> Port: <unset> 80/TCP TargetPort: 8080/TCP NodePort: <unset> 30550/TCP Endpoints: 10.244.0.50:8080 Session Affinity: None External Traffic Policy: Cluster Events: <.> Type Reason Age From Message ---- ------ ---- ---- ------- Normal nodeAssigned 32m (x2 over 32m) metallb-speaker announcing from node \"<node_name>\"" ]
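In addition to oc describe, a quick convenience check is to list the service and read its EXTERNAL-IP column; as noted in section 23.4.1, the column shows <pending> if the requested address could not be allocated. This is only a sketch of a shortcut, and the oc describe output above remains the place to read allocation events:
$ oc get service <service_name>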
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/networking/load-balancing-with-metallb
14.5.7. Modifying the Link State of a Domain's Virtual Interface
14.5.7. Modifying the Link State of a Domain's Virtual Interface The following command configures a specified interface as either up or down: Running this command modifies the status of the specified interface for the specified domain. Note that if you only want the persistent configuration of the domain to be modified, you need to use the --config option. It should also be noted that, for compatibility reasons, --persistent is an alias of --config. The "interface-device" can be the interface's target name or the MAC address.
[ "domif-setlink [domain] [interface-device] [state] { --config }", "domif-setlink rhel6 eth0 up" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-modifying_the_link_state_of_a_domains_virtual_interface
33.10. Pre-Installation Script
33.10. Pre-Installation Script Figure 33.13. Pre-Installation Script You can add commands to run on the system immediately after the kickstart file has been parsed and before the installation begins. If you have configured the network in the kickstart file, the network is enabled before this section is processed. To include a pre-installation script, type it in the text area. Important The version of anaconda in previous releases of Red Hat Enterprise Linux included a version of busybox that provided shell commands in the pre-installation and post-installation environments. The version of anaconda in Red Hat Enterprise Linux 6 no longer includes busybox, and uses GNU bash commands instead. Refer to Appendix G, Alternatives to busybox commands for more information. To specify a scripting language to use to execute the script, select the Use an interpreter option and enter the interpreter in the text box beside it. For example, /usr/bin/python2.6 can be specified for a Python script. This option corresponds to using %pre --interpreter /usr/bin/python2.6 in your kickstart file. Only the most commonly used commands are available in the pre-installation environment. See Section 32.6, "Pre-installation Script" for a complete list. Important Do not include the %pre command. It is added for you. Note The pre-installation script is run after the source media is mounted and stage 2 of the bootloader has been loaded. For this reason it is not possible to change the source media in the pre-installation script.
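For reference, the text entered in this dialog ends up inside a %pre section of the generated kickstart file; a minimal bash sketch of such a section is shown below, with an illustrative log file path:
%pre
# Runs after the kickstart file has been parsed and before installation begins.
# Record a marker that the script ran; only commonly available commands are used.
echo "pre-installation script completed" > /tmp/pre-script.log
%end
If the Use an interpreter option is selected, the opening line instead carries the interpreter, for example %pre --interpreter /usr/bin/python2.6, and the script body must be written in that language.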
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-redhat-config-kickstart-prescript
Appendix A. Glossary
Appendix A. Glossary A.1. A access control The process of controlling what particular users are allowed to do. For example, access control to servers is typically based on an identity, established by a password or a certificate, and on rules regarding what that entity can do. See also ???TITLE??? . access control instructions (ACI) An access rule that specifies how subjects requesting access are to be identified or what rights are allowed or denied for a particular subject. See ???TITLE??? . access control list (ACL) A collection of access control entries that define a hierarchy of access rules to be evaluated when a server receives a request for access to a particular resource. See ???TITLE??? . administrator The person who installs and configures one or more Certificate System managers and sets up privileged users, or agents, for them. See also ???TITLE??? . Advanced Encryption Standard (AES) The Advanced Encryption Standard (AES), like its predecessor Data Encryption Standard (DES), is a FIPS-approved symmetric-key encryption standard. AES was adopted by the US government in 2002. It defines three block ciphers, AES-128, AES-192 and AES-256. The National Institute of Standards and Technology (NIST) defined the AES standard in U.S. FIPS PUB 197. For more information, see http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf . agent A user who belongs to a group authorized to manage ???TITLE??? for a Certificate System manager. See also ???TITLE??? , ???TITLE??? . agent-approved enrollment An enrollment that requires an agent to approve the request before the certificate is issued. agent services Services that can be administered by a Certificate System ???TITLE??? through HTML pages served by the Certificate System subsystem for which the agent has been assigned the necessary privileges. The HTML pages for administering such services. APDU Application protocol data unit. A communication unit (analogous to a byte) that is used in communications between a smart card and a smart card reader. attribute value assertion (AVA) An assertion of the form attribute = value , where attribute is a tag, such as o (organization) or uid (user ID), and value is a value such as "Red Hat, Inc." or a login name. AVAs are used to form the ???TITLE??? that identifies the subject of a certificate, called the ???TITLE??? of the certificate. audit log A log that records various system events. This log can be signed, providing proof that it was not tampered with, and can only be read by an auditor user. auditor A privileged user who can view the signed audit logs. authentication Confident identification; assurance that a party to some computerized transaction is not an impostor. Authentication typically involves the use of a password, certificate, PIN, or other information to validate identity over a computer network. See also ???TITLE??? , ???TITLE??? , ???TITLE??? , ???TITLE??? . authentication module A set of rules (implemented as a JavaTM class) for authenticating an end entity, agent, administrator, or any other entity that needs to interact with a Certificate System subsystem. In the case of typical end-user enrollment, after the user has supplied the information requested by the enrollment form, the enrollment servlet uses an authentication module associated with that form to validate the information and authenticate the user's identity. See ???TITLE??? . authorization Permission to access a resource controlled by a server. 
Authorization typically takes place after the ACLs associated with a resource have been evaluated by a server. See ???TITLE??? . automated enrollment A way of configuring a Certificate System subsystem that allows automatic authentication for end-entity enrollment, without human intervention. With this form of authentication, a certificate request that completes authentication module processing successfully is automatically approved for profile processing and certificate issuance. A.2. B bind DN A user ID, in the form of a distinguished name (DN), used with a password to authenticate to Red Hat Directory Server. A.3. C CA certificate A certificate that identifies a certificate authority. See also ???TITLE??? , ???TITLE??? , ???TITLE??? . CA hierarchy A hierarchy of CAs in which a root CA delegates the authority to issue certificates to subordinate CAs. Subordinate CAs can also expand the hierarchy by delegating issuing status to other CAs. See also ???TITLE??? , ???TITLE??? , ???TITLE??? . CA server key The SSL server key of the server providing a CA service. CA signing key The private key that corresponds to the public key in the CA certificate. A CA uses its signing key to sign certificates and CRLs. certificate Digital data, formatted according to the X.509 standard, that specifies the name of an individual, company, or other entity (the ???TITLE??? of the certificate) and certifies that a ???TITLE??? , which is also included in the certificate, belongs to that entity. A certificate is issued and digitally signed by a ???TITLE??? . A certificate's validity can be verified by checking the CA's ???TITLE??? through ???TITLE??? techniques. To be trusted within a ???TITLE??? , a certificate must be issued and signed by a CA that is trusted by other entities enrolled in the PKI. certificate authority (CA) A trusted entity that issues a ???TITLE??? after verifying the identity of the person or entity the certificate is intended to identify. A CA also renews and revokes certificates and generates CRLs. The entity named in the issuer field of a certificate is always a CA. Certificate authorities can be independent third parties or a person or organization using certificate-issuing server software, such as Red Hat Certificate System. certificate-based authentication Authentication based on certificates and public-key cryptography. See also ???TITLE??? . certificate chain A hierarchical series of certificates signed by successive certificate authorities. A CA certificate identifies a ???TITLE??? and is used to sign certificates issued by that authority. A CA certificate can in turn be signed by the CA certificate of a parent CA, and so on up to a ???TITLE??? . Certificate System allows any end entity to retrieve all the certificates in a certificate chain. certificate extensions An X.509 v3 certificate contains an extensions field that permits any number of additional fields to be added to the certificate. Certificate extensions provide a way of adding information such as alternative subject names and usage restrictions to certificates. A number of standard extensions have been defined by the PKIX working group. certificate fingerprint A ???TITLE??? associated with a certificate. The number is not part of the certificate itself, but is produced by applying a hash function to the contents of the certificate. If the contents of the certificate changes, even by a single character, the same function produces a different number. 
Certificate fingerprints can therefore be used to verify that certificates have not been tampered with. Certificate Management Messages over Cryptographic Message Syntax (CMC) Message format used to convey a request for a certificate to a Certificate Manager. A proposed standard from the Internet Engineering Task Force (IETF) PKIX working group. For detailed information, see https://tools.ietf.org/html/draft-ietf-pkix-cmc-02 . Certificate Management Message Formats (CMMF) Message formats used to convey certificate requests and revocation requests from end entities to a Certificate Manager and to send a variety of information to end entities. A proposed standard from the Internet Engineering Task Force (IETF) PKIX working group. CMMF is subsumed by another proposed standard, ???TITLE??? . For detailed information, see https://tools.ietf.org/html/draft-ietf-pkix-cmmf-02 . Certificate Manager An independent Certificate System subsystem that acts as a certificate authority. A Certificate Manager instance issues, renews, and revokes certificates, which it can publish along with CRLs to an LDAP directory. It accepts requests from end entities. See ???TITLE??? . Certificate Manager agent A user who belongs to a group authorized to manage agent services for a Certificate Manager. These services include the ability to access and modify (approve and reject) certificate requests and issue certificates. certificate profile A set of configuration settings that defines a certain type of enrollment. The certificate profile sets policies for a particular type of enrollment along with an authentication method in a certificate profile. Certificate Request Message Format (CRMF) Format used for messages related to management of X.509 certificates. This format is a subset of CMMF. See also ???TITLE??? . For detailed information, see https://tools.ietf.org/html/rfc2511 . certificate revocation list (CRL) As defined by the X.509 standard, a list of revoked certificates by serial number, generated and signed by a ???TITLE??? . Certificate System See ???TITLE??? , ???TITLE??? . Certificate System subsystem One of the five Certificate System managers: ???TITLE??? , Online Certificate Status Manager, ???TITLE??? , Token Key Service, or Token Processing System. Certificate System console A console that can be opened for any single Certificate System instance. A Certificate System console allows the Certificate System administrator to control configuration settings for the corresponding Certificate System instance. chain of trust See ???TITLE??? . chained CA See ???TITLE??? . cipher See ???TITLE??? . client authentication The process of identifying a client to a server, such as with a name and password or with a certificate and some digitally signed data. See ???TITLE??? , ???TITLE??? , ???TITLE??? . client SSL certificate A certificate used to identify a client to a server using the SSL protocol. See ???TITLE??? . CMC See ???TITLE??? . CMC Enrollment Features that allow either signed enrollment or signed revocation requests to be sent to a Certificate Manager using an agent's signing certificate. These requests are then automatically processed by the Certificate Manager. CMMF See ???TITLE??? . CRL See ???TITLE??? . cross-pair certificate A certificate issued by one CA to another CA which is then stored by both CAs to form a circle of trust. The two CAs issue certificates to each other, and then store both cross-pair certificates as a certificate pair. CRMF See ???TITLE??? . 
cross-certification The exchange of certificates by two CAs in different certification hierarchies, or chains. Cross-certification extends the chain of trust so that it encompasses both hierarchies. See also ???TITLE??? . cryptographic algorithm A set of rules or directions used to perform cryptographic operations such as ???TITLE??? and ???TITLE??? . Cryptographic Message Syntax (CS) The syntax used to digitally sign, digest, authenticate, or encrypt arbitrary messages, such as CMMF. cryptographic module See ???TITLE??? . cryptographic service provider (CSP) A cryptographic module that performs cryptographic services, such as key generation, key storage, and encryption, on behalf of software that uses a standard interface such as that defined by PKCS #11 to request such services. CSP See ???TITLE??? . A.4. D Key Recovery Authority An optional, independent Certificate System subsystem that manages the long-term archival and recovery of RSA encryption keys for end entities. A Certificate Manager can be configured to archive end entities' encryption keys with a Key Recovery Authority before issuing new certificates. The Key Recovery Authority is useful only if end entities are encrypting data, such as sensitive email, that the organization may need to recover someday. It can be used only with end entities that support dual key pairs: two separate key pairs, one for encryption and one for digital signatures. Key Recovery Authority agent A user who belongs to a group authorized to manage agent services for a Key Recovery Authority, including managing the request queue and authorizing recovery operation using HTML-based administration pages. Key Recovery Authority recovery agent One of the m of n people who own portions of the storage key for the ???TITLE??? . Key Recovery Authority storage key Special key used by the Key Recovery Authority to encrypt the end entity's encryption key after it has been decrypted with the Key Recovery Authority's private transport key. The storage key never leaves the Key Recovery Authority. Key Recovery Authority transport certificate Certifies the public key used by an end entity to encrypt the entity's encryption key for transport to the Key Recovery Authority. The Key Recovery Authority uses the private key corresponding to the certified public key to decrypt the end entity's key before encrypting it with the storage key. decryption Unscrambling data that has been encrypted. See ???TITLE??? . delta CRL A CRL containing a list of those certificates that have been revoked since the last full CRL was issued. digital ID See ???TITLE??? . digital signature To create a digital signature, the signing software first creates a ???TITLE??? from the data to be signed, such as a newly issued certificate. The one-way hash is then encrypted with the private key of the signer. The resulting digital signature is unique for each piece of data signed. Even a single comma added to a message changes the digital signature for that message. Successful decryption of the digital signature with the signer's public key and comparison with another hash of the same data provides ???TITLE??? . Verification of the ???TITLE??? for the certificate containing the public key provides authentication of the signer. See also ???TITLE??? , ???TITLE??? . distribution points Used for CRLs to define a set of certificates. Each distribution point is defined by a set of certificates that are issued. A CRL can be created for a particular distribution point. 
distinguished name (DN) A series of AVAs that identify the subject of a certificate. See ???TITLE??? . dual key pair Two public-private key pairs, four keys altogether, corresponding to two separate certificates. The private key of one pair is used for signing operations, and the public and private keys of the other pair are used for encryption and decryption operations. Each pair corresponds to a separate ???TITLE??? . See also ???TITLE??? , ???TITLE??? , ???TITLE??? . A.5. E eavesdropping Surreptitious interception of information sent over a network by an entity for which the information is not intended. Elliptic Curve Cryptography (ECC) A cryptographic algorithm which uses elliptic curves to create additive logarithms for the mathematical problems which are the basis of the cryptographic keys. ECC ciphers are more efficient to use than RSA ciphers and, because of their intrinsic complexity, are stronger at smaller bits than RSA ciphers. encryption Scrambling information in a way that disguises its meaning. See ???TITLE??? . encryption key A private key used for encryption only. An encryption key and its equivalent public key, plus a ???TITLE??? and its equivalent public key, constitute a ???TITLE??? . enrollment The process of requesting and receiving an X.509 certificate for use in a ???TITLE??? . Also known as registration . end entity In a ???TITLE??? , a person, router, server, or other entity that uses a ???TITLE??? to identify itself. extensions field See ???TITLE??? . A.6. F Federal Bridge Certificate Authority (FBCA) A configuration where two CAs form a circle of trust by issuing cross-pair certificates to each other and storing the two cross-pair certificates as a single certificate pair. fingerprint See ???TITLE??? . FIPS PUBS 140 Federal Information Standards Publications (FIPS PUBS) 140 is a US government standard for implementations of cryptographic modules, hardware or software that encrypts and decrypts data or performs other cryptographic operations, such as creating or verifying digital signatures. Many products sold to the US government must comply with one or more of the FIPS standards. See http://www.nist.gov/itl/fipscurrent.cfm . firewall A system or combination of systems that enforces a boundary between two or more networks. A.7. H Hypertext Transport Protocol (HTTP) and Hypertext Transport Protocol Secure (HTTPS) Protocols used to communicate with web servers. HTTPS consists of communication over HTTP (Hypertext Transfer Protocol) within a connection encrypted by Transport Layer Security (TLS). The main purpose of HTTPS is authentication of the visited website and protection of privacy and integrity of the exchanged data. A.8. I impersonation The act of posing as the intended recipient of information sent over a network. Impersonation can take two forms: ???TITLE??? and ???TITLE??? . input In the context of the certificate profile feature, it defines the enrollment form for a particular certificate profile. Each input is set, which then dynamically creates the enrollment form from all inputs configured for this enrollment. intermediate CA A CA whose certificate is located between the root CA and the issued certificate in a ???TITLE??? . IP spoofing The forgery of client IP addresses. IPv4 and IPv6 Certificate System supports both IPv4 and IPv6 address namespaces for communications and operations with all subsystems and tools, as well as for clients, subsystem creation, and token and certificate enrollment. A.9. 
J JAR file A digital envelope for a compressed collection of files organized according to the ???TITLE??? . Java™ archive (JAR) format A set of conventions for associating digital signatures, installer scripts, and other information with files in a directory. Java™ Cryptography Architecture (JCA) The API specification and reference developed by Sun Microsystems for cryptographic services. See http://java.sun.com/products/jdk/1.2/docs/guide/security/CryptoSpec.Introduction . Java™ Development Kit (JDK) Software development kit provided by Sun Microsystems for developing applications and applets using the Java™ programming language. Java™ Native Interface (JNI) A standard programming interface that provides binary compatibility across different implementations of the Java™ Virtual Machine (JVM) on a given platform, allowing existing code written in a language such as C or C++ for a single platform to bind to Java™. See http://java.sun.com/products/jdk/1.2/docs/guide/jni/index.html . Java™ Security Services (JSS) A Java™ interface for controlling security operations performed by Network Security Services (NSS). A.10. K KEA See ???TITLE??? . key A large number used by a ???TITLE??? to encrypt or decrypt data. A person's ???TITLE??? , for example, allows other people to encrypt messages intended for that person. The messages must then be decrypted by using the corresponding ???TITLE??? . key exchange A procedure followed by a client and server to determine the symmetric keys they will both use during an SSL session. Key Exchange Algorithm (KEA) An algorithm used for key exchange by the US Government. KEYGEN tag An HTML tag that generates a key pair for use with a certificate. A.11. L Lightweight Directory Access Protocol (LDAP) A directory service protocol designed to run over TCP/IP and across multiple platforms. LDAP is a simplified version of Directory Access Protocol (DAP), used to access X.500 directories. LDAP is under IETF change control and has evolved to meet Internet requirements. linked CA An internally deployed ???TITLE??? whose certificate is signed by a public, third-party CA. The internal CA acts as the root CA for certificates it issues, and the third-party CA acts as the root CA for certificates issued by other CAs that are linked to the same third-party root CA. Also known as "chained CA" and by other terms used by different public CAs. A.12. M manual authentication A way of configuring a Certificate System subsystem that requires human approval of each certificate request. With this form of authentication, a servlet forwards a certificate request to a request queue after successful authentication module processing. An agent with appropriate privileges must then approve each request individually before profile processing and certificate issuance can proceed. MD5 A message digest algorithm that was developed by Ronald Rivest. See also ???TITLE??? . message digest See ???TITLE??? . misrepresentation The presentation of an entity as a person or organization that it is not. For example, a website might pretend to be a furniture store when it is really a site that takes credit-card payments but never sends any goods. Misrepresentation is one form of ???TITLE??? . See also ???TITLE??? . A.13. N Network Security Services (NSS) A set of libraries designed to support cross-platform development of security-enabled communications applications. Applications built using the NSS libraries support the ???TITLE???
protocol for authentication, tamper detection, and encryption, and the PKCS #11 protocol for cryptographic token interfaces. NSS is also available separately as a software development kit. nonrepudiation The inability by the sender of a message to deny having sent the message. A ???TITLE??? provides one form of nonrepudiation. non-TMS Non-token management system. Refers to a configuration of subsystems (the CA and, optionally, KRA and OCSP) which do not handle smart cards directly. See also ???TITLE??? . A.14. O object signing A method of file signing that allows software developers to sign Java code, JavaScript scripts, or any kind of file and allows users to identify the signers and control access by signed code to local system resources. object-signing certificate A certificate whose associated private key is used to sign objects; related to ???TITLE??? . OCSP Online Certificate Status Protocol. one-way hash A number of fixed length generated from data of arbitrary length with the aid of a hashing algorithm. The number, also called a message digest, is unique to the hashed data. Any change in the data, even deleting or altering a single character, results in a different value. The content of the hashed data cannot be deduced from the hash. operation The specific operation, such as read or write, that is being allowed or denied in an access control instruction. output In the context of the certificate profile feature, it defines the resulting form from a successful certificate enrollment for a particular certificate profile. Each output is set, which then dynamically creates the form from all outputs configured for this enrollment. A.15. P password-based authentication Confident identification by means of a name and password. See also ???TITLE??? , ???TITLE??? . PKCS #7 The public-key cryptography standard that governs signing and encryption. PKCS #10 The public-key cryptography standard that governs certificate requests. PKCS #11 The public-key cryptography standard that governs cryptographic tokens such as smart cards. PKCS #11 module A driver for a cryptographic device that provides cryptographic services, such as encryption and decryption, through the PKCS #11 interface. A PKCS #11 module, also called a cryptographic module or cryptographic service provider , can be implemented in either hardware or software. A PKCS #11 module always has one or more slots, which may be implemented as physical hardware slots in some form of physical reader, such as for smart cards, or as conceptual slots in software. Each slot for a PKCS #11 module can in turn contain a token, which is the hardware or software device that actually provides cryptographic services and optionally stores certificates and keys. Red Hat provides a built-in PKCS #11 module with Certificate System. PKCS #12 The public-key cryptography standard that governs key portability. private key One of a pair of keys used in public-key cryptography. The private key is kept secret and is used to decrypt data encrypted with the corresponding ???TITLE??? . proof-of-archival (POA) Data signed with the private Key Recovery Authority transport key that contains information about an archived end-entity key, including key serial number, name of the Key Recovery Authority, ???TITLE??? of the corresponding certificate, and date of archival. The signed proof-of-archival data are the response returned by the Key Recovery Authority to the Certificate Manager after a successful key archival operation. See also ???TITLE??? .
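To make the one-way hash entry above concrete, here is a minimal, hedged Java sketch using java.security.MessageDigest; the SHA-256 algorithm and the sample input string are assumptions chosen purely for illustration.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestSketch {
    public static void main(String[] args) throws Exception {
        // Hash arbitrary-length input down to a fixed-length message digest.
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest("arbitrary input data".getBytes(StandardCharsets.UTF_8));

        // Changing even one character of the input yields a completely
        // different digest, and the input cannot be recovered from it.
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        System.out.println("SHA-256 digest: " + hex);
    }
}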
public key One of a pair of keys used in public-key cryptography. The public key is distributed freely and published as part of a ???TITLE??? . It is typically used to encrypt data sent to the public key's owner, who then decrypts the data with the corresponding ???TITLE??? . public-key cryptography A set of well-established techniques and standards that allow an entity to verify its identity electronically or to sign and encrypt electronic data. Two keys are involved, a public key and a private key. A ???TITLE??? is published as part of a certificate, which associates that key with a particular identity. The corresponding private key is kept secret. Data encrypted with the public key can be decrypted only with the private key. public-key infrastructure (PKI) The standards and services that facilitate the use of public-key cryptography and X.509 v3 certificates in a networked environment. A.16. R RC2, RC4 Cryptographic algorithms developed for RSA Data Security by Rivest. See also ???TITLE??? . Red Hat Certificate System A highly configurable set of software components and tools for creating, deploying, and managing certificates. Certificate System is comprised of five major subsystems that can be installed in different Certificate System instances in different physical locations: ???TITLE??? , Online Certificate Status Manager, ???TITLE??? , Token Key Service, and Token Processing System. registration See ???TITLE??? . root CA The ???TITLE??? with a self-signed certificate at the top of a certificate chain. See also ???TITLE??? , ???TITLE??? . RSA algorithm Short for Rivest-Shamir-Adleman, a public-key algorithm for both encryption and authentication. It was developed by Ronald Rivest, Adi Shamir, and Leonard Adleman and introduced in 1978. RSA key exchange A key-exchange algorithm for SSL based on the RSA algorithm. A.17. S sandbox A Java™ term for the carefully defined limits within which Java™ code must operate. Simple Certificate Enrollment Protocol (SCEP) A protocol designed by Cisco to specify a way for a router to communicate with a CA for router certificate enrollment. Certificate System supports SCEP's CA mode of operation, where the request is encrypted with the CA signing certificate. secure channel A security association between the TPS and the smart card which allows encrypted communication based on a shared master key generated by the TKS and the smart card APDUs. Secure Sockets Layer (SSL) A protocol that allows mutual authentication between a client and server and the establishment of an authenticated and encrypted connection. SSL runs above TCP/IP and below HTTP, LDAP, IMAP, NNTP, and other high-level network protocols. security domain A centralized repository or inventory of PKI subsystems. Its primary purpose is to facilitate the installation and configuration of new PKI services by automatically establishing trusted relationships between subsystems. Security-Enhanced Linux (SELinux) Security-enhanced Linux (SELinux) is a set of security protocols enforcing mandatory access control on Linux system kernels. SELinux was developed by the United States National Security Agency to keep applications from accessing confidential or protected files through lenient or flawed access controls. self tests A feature that tests a Certificate System instance both when the instance starts up and on-demand. server authentication The process of identifying a server to a client. See also ???TITLE??? .
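As a hedged sketch of the public-key encryption flow described in the public-key cryptography entry above (encrypt with the recipient's public key, decrypt with the matching private key), the following Java fragment uses the standard JCA/JCE API; the freshly generated RSA key pair and the OAEP padding choice are assumptions made for the example and are not Certificate System settings.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class PublicKeySketch {
    public static void main(String[] args) throws Exception {
        // Throwaway key pair for illustration only.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair pair = kpg.generateKeyPair();

        // Anyone holding the public key can encrypt data for the key's owner.
        Cipher encrypt = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        encrypt.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] ciphertext = encrypt.doFinal("secret message".getBytes(StandardCharsets.UTF_8));

        // Only the holder of the private key can decrypt it.
        Cipher decrypt = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        decrypt.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        System.out.println(new String(decrypt.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}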
server SSL certificate A certificate used to identify a server to a client using the ???TITLE??? protocol. servlet Java™ code that handles a particular kind of interaction with end entities on behalf of a Certificate System subsystem. For example, certificate enrollment, revocation, and key recovery requests are each handled by separate servlets. SHA Secure Hash Algorithm, a hash function used by the US government. signature algorithm A cryptographic algorithm used to create digital signatures. Certificate System supports the MD5 and ???TITLE??? signing algorithms. See also ???TITLE??? , ???TITLE??? . signed audit log See ???TITLE??? . signing certificate A certificate whose public key corresponds to a private key used to create digital signatures. For example, a Certificate Manager must have a signing certificate whose public key corresponds to the private key it uses to sign the certificates it issues. signing key A private key used for signing only. A signing key and its equivalent public key, plus an ???TITLE??? and its equivalent public key, constitute a ???TITLE??? . single sign-on In Certificate System, a password that simplifies the way to sign on to Red Hat Certificate System by storing the passwords for the internal database and tokens. Each time a user logs on, he is required to enter this single password. The ability for a user to log in once to a single computer and be authenticated automatically by a variety of servers within a network. Partial single sign-on solutions can take many forms, including mechanisms for automatically tracking passwords used with different servers. Certificates support single sign-on within a ???TITLE??? . A user can log in once to a local client's private-key database and, as long as the client software is running, rely on ???TITLE??? to access each server within an organization that the user is allowed to access. slot The portion of a ???TITLE??? , implemented in either hardware or software, that contains a ???TITLE??? . smart card A small device that contains a microprocessor and stores cryptographic information, such as keys and certificates, and performs cryptographic operations. Smart cards implement some or all of the ???TITLE??? interface. spoofing Pretending to be someone else. For example, a person can pretend to have the email address [email protected] or a computer can identify itself as a site called www.redhat.com when it is not. Spoofing is one form of ???TITLE??? . See also ???TITLE??? . SSL See ???TITLE??? . subject The entity identified by a ???TITLE??? . In particular, the subject field of a certificate contains a ???TITLE??? that uniquely describes the certified entity. subject name A ???TITLE??? that uniquely describes the ???TITLE??? of a ???TITLE??? . subordinate CA A certificate authority whose certificate is signed by another subordinate CA or by the root CA. See ???TITLE??? , ???TITLE??? . symmetric encryption An encryption method that uses the same cryptographic key to encrypt and decrypt a given message. A.18. T tamper detection A mechanism ensuring that data received in electronic form entirely corresponds with the original version of the same data. token A hardware or software device that is associated with a ???TITLE??? in a ???TITLE??? . It provides cryptographic services and optionally stores certificates and keys.
token key service (TKS) A subsystem in the token management system which derives specific, separate keys for every smart card based on the smart card APDUs and other shared information, like the token CUID. token management system (TMS) The interrelated subsystems - CA, TKS, TPS, and, optionally, the KRA - which are used to manage certificates on smart cards (tokens). transport layer security (TLS) A set of rules governing server authentication, client authentication, and encrypted communication between servers and clients. token processing system (TPS) A subsystem which interacts directly with the Enterprise Security Client and smart cards to manage the keys and certificates on those smart cards. tree hierarchy The hierarchical structure of an LDAP directory. trust Confident reliance on a person or other entity. In a ???TITLE??? , trust refers to the relationship between the user of a certificate and the ???TITLE??? that issued the certificate. If a CA is trusted, then valid certificates issued by that CA can be trusted. A.19. U UTF-8 The certificate enrollment pages support all UTF-8 characters for specific fields (common name, organizational unit, requester name, and additional notes). The UTF-8 strings are searchable and correctly display in the CA, OCSP, and KRA end user and agents services pages. However, the UTF-8 support does not extend to internationalized domain names, such as those used in email addresses. A.20. V virtual private network (VPN) A way of connecting geographically distant divisions of an enterprise. The VPN allows the divisions to communicate over an encrypted channel, allowing authenticated, confidential transactions that would normally be restricted to a private network. A.21. X X.509 version 1 and version 3 Digital certificate formats recommended by the International Telecommunications Union (ITU).
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/glossary
Chapter 10. Kafka Streams API overview
Chapter 10. Kafka Streams API overview The Kafka Streams API allows applications to receive data from one or more input streams, execute complex operations like mapping, filtering or joining, and write the results into one or more output streams. It is part of the kafka-streams JAR package that is available in the Red Hat Maven repository. 10.1. Adding the Kafka Streams API as a dependency to your Maven project This procedure shows you how to add the Kafka Streams API as a dependency to your Maven project. Prerequisites A Maven project with an existing pom.xml . Procedure Add the Red Hat Maven repository to the <repositories> section of your pom.xml file. Add kafka-streams to the <dependencies> section of your pom.xml file. Build your Maven project.
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <!-- ... --> <repositories> <repository> <id>redhat-maven</id> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories> <!-- ... --> </project>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <!-- ... --> <dependencies> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka-streams</artifactId> <version>3.1.0.redhat-00004</version> </dependency> </dependencies> <!-- ... --> </project>" ]
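With the dependency in place, a minimal Kafka Streams topology can be defined and started. The following Java sketch is a hedged illustration only; the application id, the bootstrap server address, and the topic names "input-topic" and "output-topic" are assumptions rather than values taken from this guide. It reads records from one input stream, applies a simple mapping, and writes the results to an output stream.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-sketch");        // assumed id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Read from an input stream, map the values, write to an output stream.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-topic");
        source.mapValues(value -> value.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}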
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_amq_streams_on_rhel/assembly-kafka-streams-str
Using the AMQ JMS Client
Using the AMQ JMS Client Red Hat AMQ 2021.Q3 For Use with AMQ Clients 2.10
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_client/index
Chapter 2. Additional requirements for single node deployments
Chapter 2. Additional requirements for single node deployments Red Hat Hyperconverged Infrastructure for Virtualization is supported for deployment on a single node provided that all Support Requirements are met, with the following additions and exceptions. A single node deployment requires a physical machine with: 1 Network Interface Controller, at least 12 cores, and at least 64GB RAM. Single node deployments cannot be scaled, and are not highly available. This deployment type is lower cost, but removes the option of availability.
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/ref-requirements-single-node
Migrating from version 3 to 4
Migrating from version 3 to 4 OpenShift Container Platform 4.14 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/migrating_from_version_3_to_4/index
Workloads APIs
Workloads APIs OpenShift Container Platform 4.13 Reference guide for workloads APIs Red Hat OpenShift Documentation Team
[ "\"postCommit\": { \"script\": \"rake test --verbose\", }", "The above is a convenient form which is equivalent to:", "\"postCommit\": { \"command\": [\"/bin/sh\", \"-ic\"], \"args\": [\"rake test --verbose\"] }", "\"postCommit\": { \"command\": [\"rake\", \"test\", \"--verbose\"] }", "Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint.", "\"postCommit\": { \"args\": [\"rake\", \"test\", \"--verbose\"] }", "This form is only useful if the image entrypoint can handle arguments.", "\"postCommit\": { \"script\": \"rake test USD1\", \"args\": [\"--verbose\"] }", "This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be \"/bin/sh\" and USD1, USD2, etc, are the positional arguments from Args.", "\"postCommit\": { \"command\": [\"rake\", \"test\"], \"args\": [\"--verbose\"] }", "This form is equivalent to appending the arguments to the Command slice.", "IP: An IP address allocated to the pod. Routable at least within the cluster." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/workloads_apis/index
3.3. Example: List Host Cluster Collection
3.3. Example: List Host Cluster Collection Red Hat Virtualization Manager creates a Default host cluster on installation. This example uses the Default cluster to group resources in your Red Hat Virtualization environment. The following request retrieves a representation of the cluster collection: Example 3.3. List host clusters collection Request: cURL command: Result: Note the id code of your Default host cluster. This code identifies this host cluster in relation to other resources of your virtual environment. The Default cluster is associated with the Default data center through a relationship using the id and href attributes of the data_center element. The networks sub-collection contains a list of associated network resources for this cluster. The next section examines the networks collection in more detail.
[ "GET /ovirt-engine/api/clusters HTTP/1.1 Accept: application/xml", "curl -X GET -H \"Accept: application/xml\" -u [USER:PASS] --cacert [CERT] https:// [RHEVM Host] :443/ovirt-engine/api/clusters", "HTTP/1.1 200 OK Content-Type: application/xml <clusters> <cluster id=\"99408929-82cf-4dc7-a532-9d998063fa95\" href=\"/ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95\"> <name>Default</name> <description>The default server cluster</description> <link rel=\"networks\" href=\"/ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/networks\"/> <link rel=\"permissions\" href=\"/ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/permissions\"/> <cpu id=\"Intel Penryn Family\"/> <data_center id=\"01a45ff0-915a-11e0-8b87-5254004ac988\" href=\"/ovirt-engine/api/datacenters/01a45ff0-915a-11e0-8b87-5254004ac988\"/> <memory_policy> <overcommit percent=\"100\"/> <transparent_hugepages> <enabled>false</enabled> </transparent_hugepages> </memory_policy> <scheduling_policy/> <version minor=\"0\" major=\"4\"/> <error_handling> <on_error>migrate</on_error> </error_handling> </cluster> </clusters>" ]
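The same request can also be issued from application code. The following Java sketch is a hedged illustration using only the standard java.net API; the Manager host name, the credentials, and the decision to print the raw XML rather than parse it are assumptions made for brevity, and a real client would also need the Manager's CA certificate in its trust store for the HTTPS connection to succeed.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ListClustersSketch {
    public static void main(String[] args) throws Exception {
        // Assumed Manager host and credentials; replace with real values.
        URL url = new URL("https://rhevm.example.com:443/ovirt-engine/api/clusters");
        String auth = Base64.getEncoder()
                .encodeToString("admin@internal:password".getBytes(StandardCharsets.UTF_8));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/xml");
        conn.setRequestProperty("Authorization", "Basic " + auth);

        // Print the raw XML response; a real client would parse out the
        // cluster id attributes instead of printing the whole document.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}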
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/example_list_host_cluster_collection
Chapter 47. Configuring 802.3 link settings
Chapter 47. Configuring 802.3 link settings Auto-negotiation is a feature of the IEEE 802.3u Fast Ethernet protocol. It enables connected device ports to negotiate the optimal speed, duplex mode, and flow control settings for information exchange over a link. Using the auto-negotiation protocol, you get optimal data transfer performance over Ethernet. Note To utilize maximum performance of auto-negotiation, use the same configuration on both sides of a link. 47.1. Configuring 802.3 link settings using the nmcli utility To configure the 802.3 link settings of an Ethernet connection, modify the following configuration parameters: 802-3-ethernet.auto-negotiate 802-3-ethernet.speed 802-3-ethernet.duplex Procedure Display the current settings of the connection: You can use these values if you need to reset the parameters in case of any problems. Set the speed and duplex link settings: This command enables auto-negotiation and sets the speed of the connection to 10000 Mbit full duplex. Reactivate the connection: Verification Use the ethtool utility to verify the values of Ethernet interface enp1s0 : Additional resources nm-settings(5) man page on your system
[ "nmcli connection show Example-connection 802-3-ethernet.speed: 0 802-3-ethernet.duplex: -- 802-3-ethernet.auto-negotiate: no", "nmcli connection modify Example-connection 802-3-ethernet.auto-negotiate yes 802-3-ethernet.speed 10000 802-3-ethernet.duplex full", "nmcli connection up Example-connection", "ethtool enp1s0 Settings for enp1s0: Speed: 10000 Mb/s Duplex: Full Auto-negotiation: on Link detected: yes" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/configuring-802-3-link-settings_configuring-and-managing-networking
3.14. Partial Results Warnings
3.14. Partial Results Warnings For each source that is excluded from the query, a warning will be generated describing the source and the failure. These warnings can be obtained from the Statement.getWarnings() method. This method returns a SQLWarning object, but in the case of partial results warnings this object will be an instance of the org.teiid.jdbc.PartialResultsWarning class. This class can be used to obtain a list of all the failed sources by name and to obtain the specific exception thrown by each resource adaptor. Note Since JBoss Data Virtualization supports cursoring before the entire result is formed, it is possible that a data source failure will not be determined until after the first batch of results has been returned to the client. This can happen in the case of unions, but not joins. To ensure that all warnings have been accumulated, the statement should be checked after the entire result set has been read. The following is an example of how to obtain partial results warnings:
[ "statement.execute(\"set partialResultsMode true\"); ResultSet results = statement.executeQuery(\"SELECT Name FROM Accounts\"); while (results.next()) { //process the result set } SQLWarning warning = statement.getWarnings(); if(warning instanceof PartialResultsWarning) { PartialResultsWarning partialWarning = (PartialResultsWarning)warning; Collection failedConnectors = partialWarning.getFailedConnectors(); Iterator iter = failedConnectors.iterator(); while(iter.hasNext()) { String connectorName = (String) iter.next(); SQLException connectorException = partialWarning.getConnectorException(connectorName); System.out.println(connectorName + \": \" + connectorException.getMessage()); } }" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/partial_results_warnings1
Chapter 9. Configuring Secure Connections
Chapter 9. Configuring Secure Connections By default, clients and users connect to the Red Hat Directory Server over a standard connection. Standard connections do not use any encryption, so information is sent back and forth between the server and client in the clear. Directory Server supports TLS connections, STARTTLS connections, and SASL authentication, which provide layers of encryption and security that protect directory data from being read even if it is intercepted. 9.1. Requiring Secure Connections Directory Server provides the following ways of using encrypted connections: LDAPS When you use the LDAPS protocol, the connection starts using encryption and either succeeds or fails. However, no unencrypted data is ever sent over the network. For this reason, prefer LDAPS instead of using STARTTLS over unencrypted LDAP. STARTTLS over LDAP Clients establish an unencrypted connection over the LDAP protocol and then send the STARTTLS command. If the command succeeds, all further communication is encrypted. Warning If the STARTTLS command fails and the client does not cancel the connection, all further data, including authentication information, is sent unencrypted over the network. SASL Simple Authentication and Security Layer (SASL) enables you to authenticate a user using external authentication methods, such as Kerberos. For details, see Section 9.10, "Setting up SASL Identity Mapping" .
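As a hedged illustration of how a client application might open an LDAPS connection, the following Java sketch uses the standard JNDI LDAP provider; the host name, port 636, the bind DN, and the password are assumptions for the example, and the Directory Server's CA certificate is assumed to already be trusted by the JVM.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapsSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // An ldaps:// URL starts the connection encrypted, so no data is
        // ever sent in the clear (assumed host and port).
        env.put(Context.PROVIDER_URL, "ldaps://ds.example.com:636");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "uid=jsmith,ou=People,dc=example,dc=com"); // assumed DN
        env.put(Context.SECURITY_CREDENTIALS, "password");                              // assumed password

        // The bind itself happens when the context is created.
        DirContext ctx = new InitialDirContext(env);
        System.out.println("Authenticated bind over LDAPS succeeded");
        ctx.close();
    }
}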
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/secureconnections
Installing Red Hat Virtualization as a standalone Manager with local databases
Installing Red Hat Virtualization as a standalone Manager with local databases Red Hat Virtualization 4.3 ALTERNATIVE method - Installing the Red Hat Virtualization Manager and its databases on the same server Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document describes how to install a standalone Manager environment - where the Red Hat Virtualization Manager is installed on either a physical server or a virtual machine hosted in another environment - with the Manager database and the Data Warehouse service and database installed on the same machine as the Manager. If this is not the configuration you want to use, see the other Installation Options in the Product Guide .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/index
Chapter 4. Fencing
Chapter 4. Fencing Fencing is the disconnection of a node from the cluster's shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity. The cluster infrastructure performs fencing through the fence daemon, fenced . When CMAN determines that a node has failed, it communicates to other cluster-infrastructure components that the node has failed. fenced , when notified of the failure, fences the failed node. Other cluster-infrastructure components determine what actions to take - that is, they perform any recovery that needs to be done. For example, DLM and GFS2, when notified of a node failure, suspend activity until they detect that fenced has completed fencing the failed node. Upon confirmation that the failed node is fenced, DLM and GFS2 perform recovery. DLM releases locks of the failed node; GFS2 recovers the journal of the failed node. The fencing program determines from the cluster configuration file which fencing method to use. Two key elements in the cluster configuration file define a fencing method: fencing agent and fencing device. The fencing program makes a call to a fencing agent specified in the cluster configuration file. The fencing agent, in turn, fences the node by means of a fencing device. When fencing is complete, the fencing program notifies the cluster manager. The High Availability Add-On provides a variety of fencing methods: Power fencing - A fencing method that uses a power controller to power off an inoperable node. Storage fencing - A fencing method that disables the Fibre Channel port that connects storage to an inoperable node. Other fencing - Several other fencing methods that disable I/O or power of an inoperable node, including IBM Bladecenters, PAP, DRAC/MC, HP ILO, IPMI, IBM RSA II, and others. Figure 4.1, "Power Fencing Example" shows an example of power fencing. In the example, the fencing program in node A causes the power controller to power off node D. Figure 4.2, "Storage Fencing Example" shows an example of storage fencing. In the example, the fencing program in node A causes the Fibre Channel switch to disable the port for node D, disconnecting node D from storage. Figure 4.1. Power Fencing Example Figure 4.2. Storage Fencing Example Specifying a fencing method consists of editing a cluster configuration file to assign a fencing-method name, the fencing agent, and the fencing device for each node in the cluster. The way in which a fencing method is specified depends on whether a node has dual power supplies or multiple paths to storage. If a node has dual power supplies, then the fencing method for the node must specify at least two fencing devices - one fencing device for each power supply (see Figure 4.3, "Fencing a Node with Dual Power Supplies" ). Similarly, if a node has multiple paths to Fibre Channel storage, then the fencing method for the node must specify one fencing device for each path to Fibre Channel storage. For example, if a node has two paths to Fibre Channel storage, the fencing method should specify two fencing devices - one for each path to Fibre Channel storage (see Figure 4.4, "Fencing a Node with Dual Fibre Channel Connections" ). Figure 4.3. Fencing a Node with Dual Power Supplies Figure 4.4. Fencing a Node with Dual Fibre Channel Connections You can configure a node with one fencing method or multiple fencing methods. When you configure a node for one fencing method, that is the only fencing method available for fencing that node.
When you configure a node for multiple fencing methods, the fencing methods are cascaded from one fencing method to another according to the order of the fencing methods specified in the cluster configuration file. If a node fails, it is fenced using the first fencing method specified in the cluster configuration file for that node. If the first fencing method is not successful, the next fencing method specified for that node is used. If none of the fencing methods is successful, then fencing starts again with the first fencing method specified, and continues looping through the fencing methods in the order specified in the cluster configuration file until the node has been fenced. For detailed information on configuring fence devices, see the corresponding chapter in the Cluster Administration manual.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/high_availability_add-on_overview/ch-fencing
5.10. Configuring Fencing for Redundant Power Supplies
5.10. Configuring Fencing for Redundant Power Supplies When configuring fencing for redundant power supplies, the cluster must ensure that when attempting to reboot a host, both power supplies are turned off before either power supply is turned back on. If the node never completely loses power, the node may not release its resources. This opens up the possibility of nodes accessing these resources simultaneously and corrupting them. Prior to Red Hat Enterprise Linux 7.2, you needed to explicitly configure different versions of the devices which used either the 'on' or 'off' actions. Since Red Hat Enterprise Linux 7.2, it is now only required to define each device once and to specify that both are required to fence the node, as in the following example.
[ "pcs stonith create apc1 fence_apc_snmp ipaddr=apc1.example.com login=user passwd='7a4D#1j!pz864' pcmk_host_map=\"node1.example.com:1;node2.example.com:2\" pcs stonith create apc2 fence_apc_snmp ipaddr=apc2.example.com login=user passwd='7a4D#1j!pz864' pcmk_host_map=\"node1.example.com:1;node2.example.com:2\" pcs stonith level add 1 node1.example.com apc1,apc2 pcs stonith level add 1 node2.example.com apc1,apc2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-redundantfence-HAAR
Chapter 8. Running the certification tests
Chapter 8. Running the certification tests RHCert CLI is the supported method to run tests. Procedure Run tests The non-interactive tag is an RHCert flag used to run all certification-related mandatory tests. Save the test result file By default, the result file is saved as /var/rhcert/save/rhcert-results-<host-name>-<timestamp>.xml .
[ "rhcert run", "rhcert-save" ]
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_application_and_vnf_workflow_guide/proc_rhosp-vnf-wf-running-the-certification-tests_rhosp-vnf-wf-setting-up-test-env
6.4. Token Operation and Policy Processing
6.4. Token Operation and Policy Processing This section discusses major operations (both explicit and implicit) that involve a token. The list below will discuss each feature and its configuration. Note See the Token Policies section in the Red Hat Certificate System Planning, Installation and Deployment Guide for general information. Format The Format operation (user-initiated) takes a token in a completely blank state as supplied by the manufacturer, and loads a Coolkey applet on it. Configuration example: Enrollment The basic enrollment operation takes a formatted token and places certs and keys onto the token in an effort to personalize the token. The following configuration example will explain how this can be controlled. The example shows basic enrollment which does not deal with renewal and internal recovery. Settings not discussed here are either covered in the Format section, or not crucial. Pin Reset The configuration for pin reset is discussed in Section 6.3, "Token Policies" , because pin reset relies on a policy to determine if it is to be legally performed or not. Renewal The configuration for renewal is discussed in Section 6.3, "Token Policies" , since renewal relies on a policy to determine if it is legal to perform or not upon an already enrolled token. Recovery Recovery is implicitly set into motion when the user of the TPS user interface marks a previously active token into an unfavorable state such as "lost" or "destroyed". Once this happens, the enrollment of a new token by the same user will adhere to the following configuration to recover the certificates from the user's old token, to this new token. The end result of this operation is that the user will have a new physical token that may contain the encryption certificates recovered from the old token, so that the user can continue to encrypt and decrypt data as needed. A new signing certificate is also usually placed on this token as shown in the sample config examples below. The following is a list of supported states into which a token can be placed manually in the TPS user interface, as seen in the configuration: tokendb._069=# - DAMAGED (1) : Corresponds to destroyed in the recovery configuration. Used when a token has been physically damaged. tokendb._070=# - PERM_LOST (2) : Corresponds to keyCompromise in the recovery configuration. Used when a token has been lost permanently. tokendb._071=# - SUSPENDED (3) : Corresponds to onHold in the recovery configuration. Used when a token has been temporarily misplaced, but the user expects to find it again. tokendb._072=# - TERMINATED (6) : Corresponds to terminated in the recovery configuration. Used to take a token out of service forever for internal reasons. Example recovery configuration: Additional settings are used to specify what kind of supported static recovery should be used when performing a recovery operation to a new token (when the original token has been marked destroyed). The following schemes are supported: Recover Last ( RecoverLast ): Recover the latest encryption certificate to be placed on the token. Generate New Key and Recover Last ( GenerateNewKeyAndRecoverLast ): Same as Recover Last, but also generate a new encryption certificate and upload it to the token as well. The new token will then have two certificates. Generate New Key ( GenerateNewKey ): Generate a new encryption certificate and place it on the token. Do not recover any old certificates. 
For example: The following configuration example determines how to recover tokens marked as permanently lost: Finally, the following example determines what the system will do about the signing certificate that was on the old token. In most cases, the GenerateNewKey recovery scheme should be used in order to avoid potentially having multiple copies of a signing private key available (for example, one that is recovered on a new token, and one on an old token that was permanently lost but found by somebody else). Applet Update The following example shows how to configure a Coolkey applet update operation. This operation can be performed during format, enrollment, and PIN reset operations: Some of these options have already been demonstrated in the Format section. They provide information needed to determine if applet upgrade should be allowed, where to find the applet files, and the applet version to upgrade the token to. The version in the requiredVersion maps to a file name inside the directory . Key Update This operation, which can take place during format, enrollment, and PIN reset operations, allows the user to have their Global Platform key set version upgraded from the default supplied by the manufacturer. TPS The following options will instruct the TPS to upgrade the keyset from 1 to 2 during the format operation requested on behalf of a given token. After this is done, the TKS must derive the three new keys that will be written to the token. Afterwards, the token must be used with the same TPS and TKS installation, otherwise it will become locked. You can also specify a version lower than the current one to downgrade the keyset instead. TKS As mentioned above, the TKS must be configured to generate the new keys to write to the token. First, the new master key identifier, 02 , must be mapped to its PKCS #11 object nickname in the TKS CS.cfg , as shown in the following example: The above will map a key set number to an actual master key which exists in the TKS NSS database. Master keys are identified by IDs such as 01 . The TKS maps these IDs to PKCS #11 object nicknames specified in the masterKeyId part of the mapping. Therefore, the first number is updated as the master key version is updated, and the second number stays consistent. When attempting to upgrade from version 1 to version 2, the mapping determines how to find the master key nickname which will be used to derive the three parts of the new key set. The setting of internal in the above example references the name of the token where the master key resides. It could also be an external HSM module with a name such as nethsm . The string new_master is an example of the master key nickname itself.
[ "#specify that we want authentication for format. We almost always want this at true: op.format.userKey.auth.enable=true #specify the ldap authentication configuration, so TPS knows where to validate credentials: op.format.userKey.auth.id=ldap1 #specify the connection the the CA op.format.userKey.ca.conn=ca1 #specify id of the card manager applet on given token op.format.userKey.cardmgr_instance=A0000000030000 #specify if we need to match the visa cuid to the nist sp800sp derivation algorithm KDD value. Mostly will be false: op.format.userKey.cuidMustMatchKDD=false #enable ability to restrict key changoever to a specific range of key set: op.format.userKey.enableBoundedGPKeyVersion=true #enable the phone home url to write to the token: op.format.userKey.issuerinfo.enable=true #actual home url to write to token: op.format.userKey.issuerinfo.value=http://server.example.com:8080/tps/phoneHome #specify whether to request a login from the client. Mostly true, external reg may want this to be false: op.format.userKey.loginRequest.enable=true #Actual range of desired keyset numbers: op.format.userKey.maximumGPKeyVersion=FF op.format.userKey.minimumGPKeyVersion=01 #Whether or not to revoke certs on the token after a format, and what the reason will be if so: op.format.userKey.revokeCert=true op.format.userKey.revokeCert.reason=0 #This will roll back the reflected keyyset version of the token in the tokendb. After a failed key changeover operation. This is to keep the value in sync with reality in the tokendb. Always false, since this version of TPS avoids this situation now: op.format.userKey.rollbackKeyVersionOnPutKeyFailure=false #specify connection to the TKS: op.format.userKey.tks.conn=tks1 #where to get the actual applet file to write to the token: op.format.userKey.update.applet.directory=/usr/share/pki/tps/applets #Allows a completely blank token to be recognized by TPS. Mostly should be true: op.format.userKey.update.applet.emptyToken.enable=true #Always should be true, not supported: op.format.userKey.update.applet.encryption=true #Actual version of the applet file we want to upgrade to. This file will have a name something like: 1.4.54de7a99.ijc: op.format.userKey.update.applet.requiredVersion=1.4.54de790f #Symm key changeover: op.format.userKey.update.symmetricKeys.enable=false op.format.userKey.update.symmetricKeys.requiredVersion=1 #Make sure the token db is in sync with reality. Should always be true: op.format.userKey.validateCardKeyInfoAgainstTokenDB=true", "op.enroll.userKey.auth.enable=true op.enroll.userKey.auth.id=ldap1 op.enroll.userKey.cardmgr_instance=A0000000030000 op.enroll.userKey.cuidMustMatchKDD=false op.enroll.userKey.enableBoundedGPKeyVersion=true op.enroll.userKey.issuerinfo.enable=true op.enroll.userKey.issuerinfo.value=http://server.example.com:8080/tps/phoneHome #configure the encryption cert and keys we want on the token: #connection the the CA, which issues the certs: op.enroll.userKey.keyGen.encryption.ca.conn=ca1 #Profile id we want the CA to use to issue our encrytion cert: op.enroll.userKey.keyGen.encryption.ca.profileId=caTokenUserEncryptionKeyEnrollment #These two cover the indexes of the certs written to the token. Each cert needs a unique index or \"slot\". In our sample the enc cert will occupy slot 2 and the signing cert, shown later, will occupy slot 1. 
Avoid overlap with these numbers: op.enroll.userKey.keyGen.encryption.certAttrId=c2 op.enroll.userKey.keyGen.encryption.certId=C2 op.enroll.userKey.keyGen.encryption.cuid_label=USDcuidUSD #specify size of generated private key: op.enroll.userKey.keyGen.encryption.keySize=1024 op.enroll.userKey.keyGen.encryption.keyUsage=0 op.enroll.userKey.keyGen.encryption.keyUser=0 #specify pattern for what the label of the cert will look like when the cert nickname is displayed in browsers and mail clients: op.enroll.userKey.keyGen.encryption.label=encryption key for USDuseridUSD #specify if we want to overwrite certs on a re-enrollment operation. This is almost always the case: op.enroll.userKey.keyGen.encryption.overwrite=true #The next several settings specify the capabilities that the private key on the final token will inherit. For instance this will determine if the cert can be used for encryption or digital signatures. There are settings for both the private and public key. op.enroll.userKey.keyGen.encryption.private.keyCapabilities.decrypt=true op.enroll.userKey.keyGen.encryption.private.keyCapabilities.derive=false op.enroll.userKey.keyGen.encryption.private.keyCapabilities.encrypt=false op.enroll.userKey.keyGen.encryption.private.keyCapabilities.private=true op.enroll.userKey.keyGen.encryption.private.keyCapabilities.sensitive=true op.enroll.userKey.keyGen.encryption.private.keyCapabilities.sign=false op.enroll.userKey.keyGen.encryption.private.keyCapabilities.signRecover=false op.enroll.userKey.keyGen.encryption.private.keyCapabilities.token=true op.enroll.userKey.keyGen.encryption.private.keyCapabilities.unwrap=true op.enroll.userKey.keyGen.encryption.private.keyCapabilities.verify=false op.enroll.userKey.keyGen.encryption.private.keyCapabilities.verifyRecover=false op.enroll.userKey.keyGen.encryption.private.keyCapabilities.wrap=false op.enroll.userKey.keyGen.encryption.privateKeyAttrId=k4 op.enroll.userKey.keyGen.encryption.privateKeyNumber=4 op.enroll.userKey.keyGen.encryption.public.keyCapabilities.decrypt=false op.enroll.userKey.keyGen.encryption.public.keyCapabilities.derive=false op.enroll.userKey.keyGen.encryption.public.keyCapabilities.encrypt=true op.enroll.userKey.keyGen.encryption.public.keyCapabilities.private=false op.enroll.userKey.keyGen.encryption.public.keyCapabilities.sensitive=false op.enroll.userKey.keyGen.encryption.public.keyCapabilities.sign=false op.enroll.userKey.keyGen.encryption.public.keyCapabilities.signRecover=false op.enroll.userKey.keyGen.encryption.public.keyCapabilities.token=true op.enroll.userKey.keyGen.encryption.public.keyCapabilities.unwrap=false op.enroll.userKey.keyGen.encryption.public.keyCapabilities.verify=false op.enroll.userKey.keyGen.encryption.public.keyCapabilities.verifyRecover=false op.enroll.userKey.keyGen.encryption.public.keyCapabilities.wrap=true #The following index numbers correspond to the index or slot that the private and public keys occupy. The common formula we use is that the public key index will be 2 * cert id + 1, and the private key index, shown above will be 2 * cert id. In this example the cert id is 2, so the key ids will be 4 and 5 respectively. When composing these, be careful not to create conflicts. This applies to the signing key section below. op.enroll.userKey.keyGen.encryption.publicKeyAttrId=k5 op.enroll.userKey.keyGen.encryption.publicKeyNumber=5 #specify if, when a certificate is slated for revocation, based on other rules, we want to check to see if some other token is using this cert in a shared situation. 
If this is set to true, and this situation is found the cert will not be revoked until the last token wants to revoke this cert: op.enroll.userKey.keyGen.encryption.recovery.destroyed.holdRevocationUntilLastCredential=false #specify, if we want server side keygen, if we want to have that generated key archived to the drm. This is almost always the case, since we want the ability to later recover a cert and its encryption private key back to a new token: op.enroll.userKey.keyGen.encryption.serverKeygen.archive=true #connection to drm to generate the key for us: op.enroll.userKey.keyGen.encryption.serverKeygen.drm.conn=kra1 #specify server side keygen of the encryption private key. This most often will be desired: op.enroll.userKey.keyGen.encryption.serverKeygen.enable=true #This setting tells us how many certs we want to enroll for this TPS profile, in the case \"userKey\". Here we want 2 total certs. The next values then go on to index into the config what two types of certs we want, signing and encryption: op.enroll.userKey.keyGen.keyType.num=2 op.enroll.userKey.keyGen.keyType.value.0=signing op.enroll.userKey.keyGen.keyType.value.1=encryption #configure the signing cert and keys we want on the token the settings for these are similar to the encryption settings already discussed, except the capability flags presented below, since this is a signing key. op.enroll.userKey.keyGen.signing.ca.conn=ca1 op.enroll.userKey.keyGen.signing.ca.profileId=caTokenUserSigningKeyEnrollment op.enroll.userKey.keyGen.signing.certAttrId=c1 op.enroll.userKey.keyGen.signing.certId=C1 op.enroll.userKey.keyGen.signing.cuid_label=USDcuidUSD op.enroll.userKey.keyGen.signing.keySize=1024 op.enroll.userKey.keyGen.signing.keyUsage=0 op.enroll.userKey.keyGen.signing.keyUser=0 op.enroll.userKey.keyGen.signing.label=signing key for USDuseridUSD op.enroll.userKey.keyGen.signing.overwrite=true op.enroll.userKey.keyGen.signing.private.keyCapabilities.decrypt=false op.enroll.userKey.keyGen.signing.private.keyCapabilities.derive=false op.enroll.userKey.keyGen.signing.private.keyCapabilities.encrypt=false op.enroll.userKey.keyGen.signing.private.keyCapabilities.private=true op.enroll.userKey.keyGen.signing.private.keyCapabilities.sensitive=true op.enroll.userKey.keyGen.signing.private.keyCapabilities.sign=true op.enroll.userKey.keyGen.signing.private.keyCapabilities.signRecover=true op.enroll.userKey.keyGen.signing.private.keyCapabilities.token=true op.enroll.userKey.keyGen.signing.private.keyCapabilities.unwrap=false op.enroll.userKey.keyGen.signing.private.keyCapabilities.verify=false op.enroll.userKey.keyGen.signing.private.keyCapabilities.verifyRecover=false op.enroll.userKey.keyGen.signing.private.keyCapabilities.wrap=false op.enroll.userKey.keyGen.signing.privateKeyAttrId=k2 op.enroll.userKey.keyGen.signing.privateKeyNumber=2 op.enroll.userKey.keyGen.signing.public.keyCapabilities.decrypt=false op.enroll.userKey.keyGen.signing.public.keyCapabilities.derive=false op.enroll.userKey.keyGen.signing.public.keyCapabilities.encrypt=false op.enroll.userKey.keyGen.signing.public.keyCapabilities.private=false op.enroll.userKey.keyGen.signing.public.keyCapabilities.sensitive=false op.enroll.userKey.keyGen.signing.public.keyCapabilities.sign=false op.enroll.userKey.keyGen.signing.public.keyCapabilities.signRecover=false op.enroll.userKey.keyGen.signing.public.keyCapabilities.token=true op.enroll.userKey.keyGen.signing.public.keyCapabilities.unwrap=false op.enroll.userKey.keyGen.signing.public.keyCapabilities.verify=true 
op.enroll.userKey.keyGen.signing.public.keyCapabilities.verifyRecover=true op.enroll.userKey.keyGen.signing.public.keyCapabilities.wrap=false op.enroll.userKey.keyGen.signing.publicKeyAttrId=k3 op.enroll.userKey.keyGen.signing.publicKeyNumber=3", "#When a token is marked destroyed, don't revoke the certs on the token unless all other tokens do not have the certs included: op.enroll.userKey.keyGen.encryption.recovery.destroyed.holdRevocationUntilLastCredential=false #specify if we even want to revoke certs a token is marked destroyed: op.enroll.userKey.keyGen.encryption.recovery.destroyed.revokeCert=false #if we want to revoke any certs here, specify the reason for revocation that will be sent to the CA: op.enroll.userKey.keyGen.encryption.recovery.destroyed.revokeCert.reason=0 #speficy if we want to revoke expired certs when marking the token destroyed: op.enroll.userKey.keyGen.encryption.recovery.destroyed.revokeExpiredCerts=false", "op.enroll.userKey.keyGen.encryption.recovery.destroyed.scheme=RecoverLast", "op.enroll.userKey.keyGen.encryption.recovery.keyCompromise.holdRevocationUntilLastCredential=false op.enroll.userKey.keyGen.encryption.recovery.keyCompromise.revokeCert=true op.enroll.userKey.keyGen.encryption.recovery.keyCompromise.revokeCert.reason=1 op.enroll.userKey.keyGen.encryption.recovery.keyCompromise.revokeExpiredCerts=false op.enroll.userKey.keyGen.encryption.recovery.keyCompromise.scheme=GenerateNewKey Section when a token is marked terminated. op.enroll.userKey.keyGen.encryption.recovery.terminated.holdRevocationUntilLastCredential=false op.enroll.userKey.keyGen.encryption.recovery.terminated.revokeCert=true op.enroll.userKey.keyGen.encryption.recovery.terminated.revokeCert.reason=1 op.enroll.userKey.keyGen.encryption.recovery.terminated.revokeExpiredCerts=false op.enroll.userKey.keyGen.encryption.recovery.terminated.scheme=GenerateNewKey This section details the recovery profile with respect to which certs and of what kind get recovered on the token. 
op.enroll.userKey.keyGen.recovery.destroyed.keyType.num=2 op.enroll.userKey.keyGen.recovery.destroyed.keyType.value.0=signing op.enroll.userKey.keyGen.recovery.destroyed.keyType.value.1=encryption", "op.enroll.userKey.keyGen.recovery.keyCompromise.keyType.value.0=signing op.enroll.userKey.keyGen.recovery.keyCompromise.keyType.value.1=encryption op.enroll.userKey.keyGen.recovery.onHold.keyType.num=2 op.enroll.userKey.keyGen.recovery.onHold.keyType.value.0=signing op.enroll.userKey.keyGen.recovery.onHold.keyType.value.1=encryption op.enroll.userKey.keyGen.signing.recovery.destroyed.holdRevocationUntilLastCredential=false op.enroll.userKey.keyGen.signing.recovery.destroyed.revokeCert=true op.enroll.userKey.keyGen.signing.recovery.destroyed.revokeCert.reason=0 op.enroll.userKey.keyGen.signing.recovery.destroyed.revokeExpiredCerts=false op.enroll.userKey.keyGen.signing.recovery.destroyed.scheme=GenerateNewKey op.enroll.userKey.keyGen.signing.recovery.keyCompromise.holdRevocationUntilLastCredential=false op.enroll.userKey.keyGen.signing.recovery.keyCompromise.revokeCert=true op.enroll.userKey.keyGen.signing.recovery.keyCompromise.revokeCert.reason=1 op.enroll.userKey.keyGen.signing.recovery.keyCompromise.revokeExpiredCerts=false op.enroll.userKey.keyGen.signing.recovery.keyCompromise.scheme=GenerateNewKey op.enroll.userKey.keyGen.signing.recovery.onHold.holdRevocationUntilLastCredential=false op.enroll.userKey.keyGen.signing.recovery.onHold.revokeCert=true op.enroll.userKey.keyGen.signing.recovery.onHold.revokeCert.reason=6 op.enroll.userKey.keyGen.signing.recovery.onHold.revokeExpiredCerts=false op.enroll.userKey.keyGen.signing.recovery.onHold.scheme=GenerateNewKey op.enroll.userKey.keyGen.signing.recovery.terminated.holdRevocationUntilLastCredential=false op.enroll.userKey.keyGen.signing.recovery.terminated.revokeCert=true op.enroll.userKey.keyGen.signing.recovery.terminated.revokeCert.reason=1 op.enroll.userKey.keyGen.signing.recovery.terminated.revokeExpiredCerts=false op.enroll.userKey.keyGen.signing.recovery.terminated.scheme=GenerateNewKey Configuration for the case when we mark a token \"onHold\" or temporarily lost op.enroll.userKeyTemporary.keyGen.encryption.recovery.onHold.revokeCert=true op.enroll.userKeyTemporary.keyGen.encryption.recovery.onHold.revokeCert.reason=0 op.enroll.userKeyTemporary.keyGen.encryption.recovery.onHold.scheme=RecoverLast op.enroll.userKeyTemporary.keyGen.recovery.onHold.keyType.num=2 op.enroll.userKeyTemporary.keyGen.recovery.onHold.keyType.value.0=signing op.enroll.userKeyTemporary.keyGen.recovery.onHold.keyType.value.1=encryption op.enroll.userKeyTemporary.keyGen.signing.recovery.onHold.revokeCert=true op.enroll.userKeyTemporary.keyGen.signing.recovery.onHold.revokeCert.reason=0 op.enroll.userKeyTemporary.keyGen.signing.recovery.onHold.scheme=GenerateNewKey", "op.format.userKey.update.applet.directory=/usr/share/pki/tps/applets op.format.userKey.update.applet.emptyToken.enable=true op.format.userKey.update.applet.encryption=true op.format.userKey.update.applet.requiredVersion=1.4.54de790f", "op.format.userKey.update.symmetricKeys.enable=true op.format.userKey.update.symmetricKeys.requiredVersion=2", "tks.mk_mappings.#02#01=internal:new_master tks.defKeySet.mk_mappings.#02#01=internal:new_master" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/sect-token-operation-and-policy-processing
18.3. Cache Store Configuration Details (Remote Client-Server Mode)
18.3. Cache Store Configuration Details (Remote Client-Server Mode) The following tables contain details about the configuration elements and parameters for cache store elements in JBoss Data Grid's Remote Client-Server mode: The local-cache Element The name parameter of the local-cache attribute is used to specify a name for the cache. The statistics parameter specifies whether statistics are enabled at the container level. Enable or disable statistics on a per-cache basis by setting the statistics attribute to false . The file-store Element The name parameter of the file-store element is used to specify a name for the file store. The passivation parameter determines whether entries in the cache are passivated ( true ) or if the cache store retains a copy of the contents in memory ( false ). The purge parameter specifies whether or not the cache store is purged when it is started. Valid values for this parameter are true and false . The shared parameter is used when multiple cache instances share a cache store. This parameter can be set to prevent multiple cache instances writing the same modification multiple times. Valid values for this parameter are true and false . However, the shared parameter is not recommended for the LevelDB cache store because this cache store cannot be shared. The relative-to property is the directory where the file-store stores the data. It is used to define a named path. The path property is the name of the file where the data is stored. It is a relative path name that is appended to the value of the relative-to property to determine the complete path. The maxEntries parameter provides maximum number of entries allowed. The default value is -1 for unlimited entries. The fetch-state parameter when set to true fetches the persistent state when joining a cluster. If multiple cache stores are chained, only one of them can have this property enabled. Persistent state transfer with a shared cache store does not make sense, as the same persistent store that provides the data will just end up receiving it. Therefore, if a shared cache store is used, the cache does not allow a persistent state transfer even if a cache store has this property set to true . It is recommended to set this property to true only in a clustered environment. The default value for this parameter is false. The preload parameter when set to true, loads the data stored in the cache store into memory when the cache starts. However, setting this parameter to true affects the performance as the startup time is increased. The default value for this parameter is false. The singleton parameter enables a singleton store cache store. SingletonStore is a delegating cache store used when only one instance in a cluster can interact with the underlying store. However, singleton parameter is not recommended for file-store . The store Element The class parameter specifies the class name of the cache store implementation. The property Element The name parameter specifies the name of the property. The value parameter specifies the value assigned to the property. The remote-store Element The cache parameter defines the name for the remote cache. If left undefined, the default cache is used instead. The socket-timeout parameter sets whether the value defined in SO_TIMEOUT (in milliseconds) applies to remote Hot Rod servers on the specified timeout. A timeout value of 0 indicates an infinite timeout. The default value is 60,000 ms, or one minute. 
The tcp-no-delay parameter sets whether TCP_NODELAY applies on socket connections to remote Hot Rod servers. The hotrod-wrapping parameter sets whether a wrapper is required for Hot Rod on the remote store. The remote-server Element The outbound-socket-binding parameter sets the outbound socket binding for the remote server. The binary-keyed-jdbc-store, string-keyed-jdbc-store, and mixed-keyed-jdbc-store Elements The datasource parameter defines the JNDI name of the datasource. The passivation parameter determines whether entries in the cache are passivated ( true ) or if the cache store retains a copy of the contents in memory ( false ). The preload parameter specifies whether to load entries into the cache during start up. Valid values for this parameter are true and false . The purge parameter specifies whether or not the cache store is purged when it is started. Valid values for this parameter are true and false . The shared parameter is used when multiple cache instances share a cache store. This parameter can be set to prevent multiple cache instances from writing the same modification multiple times. Valid values for this parameter are true and false . The singleton parameter enables a singleton store cache store. SingletonStore is a delegating cache store used when only one instance in a cluster can interact with the underlying store. The binary-keyed-table and string-keyed-table Elements The prefix parameter specifies a prefix string for the database table name. The id-column, data-column, and timestamp-column Elements The name parameter specifies the name of the database column. The type parameter specifies the type of the database column. The leveldb-store Element The relative-to parameter specifies the base directory to store the cache state. This value defaults to jboss.server.data.dir . The path parameter defines where, within the directory specified in the relative-to parameter, the cache state is stored. If undefined, the path defaults to the cache container name. The passivation parameter specifies whether passivation is enabled for the LevelDB cache store. Valid values are true and false . The purge parameter specifies whether the cache store is purged when it starts up. Valid values are true and false .
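For illustration only, a file-store declared inside a named local cache in the server configuration might look like the following sketch. The cache name, store name, and path are placeholders, and the exact attribute set and element nesting should be verified against the configuration schema of your JBoss Data Grid server version:
<local-cache name="carcache" statistics="true">
    <file-store name="carcache-store"
                passivation="false"
                purge="true"
                shared="false"
                preload="false"
                fetch-state="false"
                relative-to="jboss.server.data.dir"
                path="carcache-data"/>
</local-cache>
In this sketch the store keeps a full copy of the cache contents on disk (passivation is false), is cleared on startup (purge is true), and is not shared with other cache instances.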
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/Cache_Store_Configuration_Details_Remote_Client-Server_Mode
Working with data in an S3-compatible object store
Working with data in an S3-compatible object store Red Hat OpenShift AI Self-Managed 2.18 Work with data stored in an S3-compatible object store from your workbench
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_data_in_an_s3-compatible_object_store/index
2.3. Metadata Models
2.3. Metadata Models A metadata model represents a collection of metadata information that describes a complete structure of data. In an earlier example, we described the field ZIPCode as a metadata object in an address book database. This meta object represents a single, distinct piece of metadata information. We also alluded to its parent table, StreetAddress. These meta objects, and others that would describe the other tables and columns within the database, all combine to form a Source Metadata model for whichever enterprise information system hosts all the objects. You can have multiple Source Models within your collection of metadata models; these represent physical data storage locations. You can also have View Models, which model the business view of the data. Each model contains one type of metadata or the other. For more information about the difference between Source and View metadata, see the section Source and View Metadata.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/metadata_models
Chapter 13. Managing Indexes
Chapter 13. Managing Indexes Indexing makes searching for and retrieving information easier by classifying and organizing attributes or values. This chapter describes the searching algorithm itself, placing indexing mechanisms in context, and then describes how to create, delete, and manage indexes. 13.1. About Indexes This section provides an overview of indexing in Directory Server. It contains the following topics: Section 13.1.1, "About Index Types" Section 13.1.2, "About Default and Database Indexes" Section 13.1.3, "Overview of the Searching Algorithm" Section 13.1.5, "Balancing the Benefits of Indexing" 13.1.1. About Index Types Indexes are stored in files in the directory's databases. The names of the files are based on the indexed attribute, not the type of index contained in the file. Each index file may contain multiple types of indexes if multiple indexes are maintained for the specific attribute. For example, all indexes maintained for the common name attribute are contained in the cn.db file. Directory Server supports the following types of index: Presence index (pres) contains a list of the entries that contain a particular attribute, which is very useful for searches. For example, it makes it easy to examine any entries that contain access control information. Generating an aci.db file that includes a presence index efficiently performs the search for ACI=* to generate the access control list for the server. Equality index (eq) improves searches for entries containing a specific attribute value. For example, an equality index on the cn attribute allows a user to perform the search for cn=Babs Jensen far more efficiently. Approximate index (approx) is used for efficient approximate or sounds-like searches. For example, an entry may include the attribute value cn=Firstname M Lastname . An approximate search would return this value for searches against cn~=Firstname Lastname , cn~=Firstname , or cn~=Lastname . Similarly, a search against l~=San Fransisco (note the misspelling) would return entries including l=San Francisco . Substring index (sub) is a costly index to maintain, but it allows efficient searching against substrings within entries. Substring indexes are limited to a minimum of three characters for each entry. For example, searches of the form cn=*derson , match the common names containing strings such as Bill Anderson , Jill Henderson , or Steve Sanderson . Similarly, the search for telephoneNumber= *555* returns all the entries in the directory with telephone numbers that contain 555 . International index speeds up searches for information in international directories. The process for creating an international index is similar to the process for creating regular indexes, except that it applies a matching rule by associating an object identifier (OID) with the attributes to be indexed. The supported locales and their associated OIDs are listed in Appendix D, Internationalization . If there is a need to configure the Directory Server to accept additional matching rules, contact Red Hat Consulting. 13.1.2. About Default and Database Indexes Directory Server contains a set of default indexes. When you create a new database, Directory Server copies these default indexes from cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config to the new database. Then the database only uses the copy of these indexes, which are stored in cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config . 
Note Directory Server does not replicate settings in the cn=config entry. Therefore, you can configure indexes differently on servers that are part of a replication topology. For example, in an environment with cascading replication, you do not need to create custom indexes on a hub, if clients do not read data from the hub. To display the Directory Server default indexes: Note If you update the default index settings stored in cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config , the changes are not applied to the individual databases in cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config . To display the indexes of an individual database: 13.1.3. Overview of the Searching Algorithm Indexes are used to speed up searches. To understand how the directory uses indexes, it helps to understand the searching algorithm. Each index contains a list of attributes (such as the cn , common name, attribute) and a list of IDs of the entries which contain the indexed attribute value: An LDAP client application sends a search request to the directory. The directory examines the incoming request to make sure that the specified base DN matches a suffix contained by one or more of its databases or database links. If they do match, the directory processes the request. If they do not match, the directory returns an error to the client indicating that the suffix does not match. If a referral has been specified in the nsslapd-referral attribute under cn=config , the directory also returns the LDAP URL where the client can attempt to pursue the request. The Directory Server examines the search filter to see what indexes apply, and it attempts to load the list of entry IDs from each index that satisfies the filter. The ID lists are combined based on whether the filter used AND or OR joins. Each filter component is handled independently and returns an ID list. If the list of entry IDs is larger than the configured ID list scan limit or if there is no index defined for the attribute, then Directory Server sets the results for this filter component to allids . If, after applying the logical operations to the results of the individual search components, the list is still allids , the server searches every entry in the database. This is an unindexed search. The Directory Server reads every entry from the id2entry.db database or the entry cache for every entry ID in the ID list (or from the entire database for an unindexed search). The server then checks the entries to see if they match the search filter. Each match is returned as it is found. The server continues through the list of IDs until it has searched all candidate entries or until it hits one of the configured resource limits. (Resource limits are listed in Section 14.5.3, "Setting User and Global Resource Limits Using the Command Line" .) Note It is possible to set separate resource limits for searches using the simple paged results control. For example, administrators can set high or unlimited size and look-through limits with paged searches, but use the lower default limits for non-paged searches. 13.1.4. Approximate Searches In addition, the directory uses a variation of the metaphone phonetic algorithm to perform searches on an approximate index. Each value is treated as a sequence of words, and a phonetic code is generated for each word. Note The metaphone phonetic algorithm in Directory Server supports only US-ASCII letters. Therefore, use approximate indexing only with English values.
Values entered on an approximate search are similarly translated into a sequence of phonetic codes. An entry is considered to match a query if both of the following are true: All of the query string codes match the codes generated in the entry string. All of the query string codes are in the same order as the entry string codes. Name in the Directory (Phonetic Code) Query String (Phonetic code) Match Comments Alice B Sarette (ALS B SRT) Alice Sarette (ALS SRT) Matches. Codes are specified in the correct order. Alice Sarrette (ALS SRT) Matches. Codes are specified in the correct order, despite the misspelling of Sarette. Surette (SRT) Matches. The generated code exists in the original name, despite the misspelling of Sarette. Bertha Sarette (BR0 SRT) No match. The code BR0 does not exist in the original name. Sarette, Alice (SRT ALS) No match. The codes are not specified in the correct order. 13.1.5. Balancing the Benefits of Indexing Before creating new indexes, balance the benefits of maintaining indexes against the costs. Approximate indexes are not efficient for attributes commonly containing numbers, such as telephone numbers. Substring indexes do not work for binary attributes. Equality indexes should be avoided if the value is big (such as attributes intended to contain photographs or passwords containing encrypted data). Maintaining indexes for attributes not commonly used in a search increases overhead without improving global searching performance. Attributes that are not indexed can still be specified in search requests, although the search performance may be degraded significantly, depending on the type of search. The more indexes you maintain, the more disk space you require. Indexes can become very time-consuming. For example: The Directory Server receives an add or modify operation. The Directory Server examines the indexing attributes to determine whether an index is maintained for the attribute values. If the created attribute values are indexed, then Directory Server adds or deletes the new attribute values from the index. The actual attribute values are created in the entry. For example, the Directory Server adds the entry: The Directory Server maintains the following indexes: Equality, approximate, and substring indexes for cn (common name) and sn (surname) attributes. Equality and substring indexes for the telephone number attribute. Substring indexes for the description attribute. When adding that entry to the directory, the Directory Server must perform these steps: Create the cn equality index entry for John and John Doe . Create the appropriate cn approximate index entries for John and John Doe . Create the appropriate cn substring index entries for John and John Doe . Create the sn equality index entry for Doe . Create the appropriate sn approximate index entry for Doe . Create the appropriate sn substring index entries for Doe . Create the telephone number equality index entry for 408 555 8834 . Create the appropriate telephone number substring index entries for 408 555 8834 . Create the appropriate description substring index entries for Manufacturing lead for the Z238 line of widgets . A large number of substring entries are generated for this string. As this example shows, the number of actions required to create and maintain databases for a large directory can be resource-intensive. 13.1.6. Indexing Limitations You cannot index virtual attributes, such as nsrole and cos_attribute . Virtual attributes contain computed values. 
If you index these attributes, Directory Server can return an invalid set of entries to direct and internal searches.
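As an illustration of how a database-specific index is represented, the following LDIF sketch defines equality and substring indexes for the mail attribute in a database named userRoot. The attribute name and database name are examples only; verify the entry location and attribute values against your own deployment before applying anything similar:
dn: cn=mail,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
objectClass: top
objectClass: nsIndex
cn: mail
nsSystemIndex: false
nsIndexType: eq
nsIndexType: sub
An entry of this form under cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config corresponds to what the dsconf backend index list command reports for that database.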
[ "ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -b \"cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config\" '(objectClass=nsindex)'", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend index list database_name", "dn: cn=John Doe,ou=People,dc=example,dc=com objectclass: top objectClass: person objectClass: orgperson objectClass: inetorgperson cn: John Doe cn: John sn: Doe ou: Manufacturing ou: people telephoneNumber: 408 555 8834 description: Manufacturing lead for the Z238 line of widgets." ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Managing_Indexes
Chapter 5. Practical tutorial: Connecting to a data source
Chapter 5. Practical tutorial: Connecting to a data source This exercise is optional and relies on you having an external data source available to use. In the exercise, we are going to connect to an external data source via the Dashboard Builder. We are going to be largely covering the same ground as in the exercise above but we will be going into a little more detail to make you really familiar and comfortable with the technology. You can connect either to a JNDI data source, that is, a data source set up and accessible from the application container, or directly to the data source as a custom data source, if the application container has the correct JDBC driver deployed. Here is how you connect to an external data source: 5.1. Prerequisites Name Description Red Hat JBoss Data Virtualization Red Hat JBoss Data Virtualization must be installed and running. (Please refer back to the Getting Started Guide if you need a refresher on this.) Dashboard Builder Account and Password You must have a valid account and password. (You would have set this up during installation.) A data source You must have a data source to which you can connect. Make sure the data source is up and running and that the application server has access to the data source. (Check the driver, the log-in credentials, and so forth. In Red Hat JBoss EAP 6, you can do so in the Management Console under Subsystems Connector Datasources .) In Dashboard Builder, on the Tree Menu (by default located on the Showcase perspective), go to Administration External connections. On the displayed External Connection panel, click the New DataSource button. Select the data source type ( JNDI or Custom DataSource ) and provide the respective data source parameters. You must now create a new data provider: To create a new data provider, do the following: In the Tree Menu (the panel in the lateral menu of the Showcase workspace), click Administration Data providers. In the Data Providers panel, click the Create new data provider button. In the updated Data Providers panel, select the type of the data provider (depending on the source you want the data provider to operate on) from the Type menu. Define the data provider parameters: Data provider over a CSV file Name: user-friendly name and its locale CSV file URL: the URL of the file (for example, file:///home/me/example.csv ) Data separator: the symbol used as separator in the CSV file (the default value is semicolon; if using comma as the separator sign, make sure to adapt the number format if applicable) Quoting symbol: the symbol used for quotes (the default value is the double-quotes symbol; note that the symbol may vary depending on the locale) Escaping symbol: the symbol used for escaping the following symbol in order to keep its literal value Date format: date and time format Number format: the format of numbers as resolved to thousands and decimals Data provider over a database (SQL query) Name: user-friendly name and its locale Data source: the data source to query (the default value is local, which allows you to query the Dashboard Builder database) Query: query that returns the required data Click Attempt data load to verify the parameters are correct. Click Save . In the table with the detected data, define the data type and if necessary provide a user-friendly name for the data. Click Save. The data provider can now be visualized in an indicator on a page of your choice. Next, you must create a workspace again: Click the Create workspace button on the top menu.
In the Create workspace table on the right, set the workspace parameters: Name: workspace name and its locale Title: workspace title and its locale Skin: skin to be applied on the workspace resources Envelope: envelope to be applied on the workspace resources Click Create workspace . Optionally, click the workspace name in the tree menu on the left and in the area with workspace properties on the right, define additional workspace parameters: URL: the workspace URL User home search: the home page setting If set to Role assigned page, the home page as set in the page permissions is applied; that is, every role can have a different page displayed as its home page. If set to Current page, all users will use the current home page as their home page. To create a new page, do the following: Make sure you are in the correct workspace. Next to the Page dropdown box in the top menu, click the Create new page button. In the Create new page table on the right, set the page parameters: Name: page name and its locale Parent page: parent page of the new page Skin: skin to be applied on the page Envelope: envelope to be applied on the page Page layout: layout of the page Click Create new page . Optionally, click the page name in the tree menu on the left and in the area with page properties on the right, define additional page parameters: URL: the page URL Visible page: visibility of the page Spacing between regions and panels Although users are usually authorized using the authorization method set up for the underlying application container (on Red Hat JBoss EAP, the other security domain by default), the Red Hat JBoss Dashboard Builder has its own role-based access control (RBAC) management tool to facilitate permission management on an individual page or multiple pages. To define permissions on a page or all workspace pages for a role, follow these steps: On the top menu, click the General configuration button. Under the Workspace node on the left, locate the Page or the Pages node. Under the Page or Pages node, click the Page permissions node. In the Page permissions area on the right, delete any previously defined permission definition if applicable and define the rights for the required role: In the Permission assignation table, locate the Select role drop-down menu and pick the respective role. In the Actions column of the table, enable or disable individual permissions. Click Save . A panel is a GUI widget which can be placed on a page.
There are three main types of panels: Dashboard panels are the primary business activity management panels and include the following: Data provider manager: a panel with a list of available data providers and data provider management options Filter and Drill-down: a panel that displays all KPIs and their values to facilitate filtering in indicators on the given page defined over a data provider HTML Editor panel: a panel with static content Key Performance Indicator (indicator): a panel that visualizes the data of a data provider Navigation panels are panels that provide navigation functions and include the following: Breadcrumb: a panel with the full page hierarchy pointing to the current page Language menu: a panel with available locales (by default in the top center) Logout panel: a panel with the name of the currently logged-in user and the logout button Page menu custom: a panel with vertically arranged links to all pages in the workspace (the list of pages can be adjusted) and general controls for the HTML source of the page Page menu vertical: a panel with vertically arranged links to all pages in the workspace (the list of pages can be adjusted) Page menu horizontal: a panel with horizontally arranged links to all pages in the workspace (the list of pages can be adjusted) Tree menu: a panel with the links to essential features such as Administration , Home (on the Home page of the Showcase workspace displayed on the left, in the lateral menu) Workspace menu custom: a panel with links to available workspaces (the list of workspaces can be adjusted) and general controls for the HTML source of the workspace Workspace menu horizontal: a horizontal panel with links to available workspaces (the list of workspaces can be adjusted) Workspace menu vertical: a vertical panel with links to available workspaces (the list of workspaces can be adjusted) System panels are panels that provide access to system settings and administration facilities and include the following: Data source manager: a panel for management of external data sources Export dashboards: a panel for exporting dashboards Export/Import workspaces: a panel for exporting and importing workspaces We are now going to add a panel: Make sure the respective page is open (in the Page drop-down menu of the top menu select the page). In the top menu, click the Create a new panel in current page button. In the displayed dialog box, expand the panel type you want to add ( Dashboard , Navigation , or System ) and click the panel you wish to add. From the Components menu on the left, drag and drop the name of an existing panel instance or the Create panel item into the required location on the page. If inserting a new indicator, the Panel view with the graph settings will appear. Define the graph details and close the dialog. If adding an instance of an already existing indicator, you might not be able to use it, if it is linked to the KPIs on the particular original page. In such a case, create a new panel. If you need to, edit the content of the newly-added panel.
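For example, a data provider of the SQL query type (described earlier in this exercise) is typically backed by a short aggregate query that an indicator panel can then visualize. The following query is purely illustrative; the table and column names are hypothetical and must match the schema that your external data source actually exposes:
SELECT department, COUNT(*) AS open_tickets, AVG(resolution_hours) AS avg_resolution_hours
FROM support_tickets
WHERE status = 'OPEN'
GROUP BY department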
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/using_the_dashboard_builder/practical_tutorial_connecting_to_a_data_source
Chapter 14. The CarMart Quickstarts
Chapter 14. The CarMart Quickstarts Red Hat JBoss Data Grid includes a transactional and non-transactional CarMart quickstart. The CarMart quickstart is a simple web application that uses JBoss Data Grid instead of a relational database. Information about each car is stored in a cache. Caches are configured declaratively or programmatically depending on the usage mode. Features The CarMart quickstart offers the following features: List all cars Add new cars Remove cars View statistics for caches, such as hits, stores, and retrievals Usage Modes The CarMart quickstart can be used in the following JBoss Data Grid usage modes: Remote Client-Server Mode, where the application includes the Hot Rod client to communicate with a remote JBoss Data Grid server. Library Mode, where all libraries are bundled with the application in the form of jar files. Location JBoss Data Grid's CarMart quickstart is available at the following location: jboss-datagrid-{VERSION}-quickstarts/ 14.1. About the CarMart Transactional Quickstart The transactional version of the CarMart quickstart is a simple web application that uses Red Hat JBoss Data Grid instead of a relational database. Information about each car is stored in a cache. Caches are configured declaratively or programmatically (depending on the usage mode) and run in the same Java Virtual Machine (JVM) as the web application. Features The Transactional CarMart Quickstart offers the following features: List all cars Add new cars Add new cars with rollback Remove cars View statistics for caches, such as hits, stores, and retrievals Usage Modes The Transactional CarMart Quickstart can only be used in JBoss Data Grid's Library mode. A standalone transaction manager from JBoss Transactions is used when the Transactional CarMart Quickstart is run in Red Hat JBoss Enterprise Web Server 2.x. Location JBoss Data Grid's Transactional CarMart Quickstart can be found at the following location: jboss-datagrid-{VERSION}-quickstarts/carmart-tx
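A common way to try the quickstart is to build and deploy it with Maven from the quickstart directory. The commands below are only a sketch: the Maven profile that selects Library or Remote Client-Server mode is an assumption here, so consult the README shipped with the quickstart for the authoritative build and deployment instructions.
cd jboss-datagrid-{VERSION}-quickstarts/carmart
# Build the WAR; select the profile for your usage mode as documented in the quickstart README.
mvn clean package
# Deploy to a running JBoss EAP instance, if the quickstart configures the jboss-as Maven plugin.
mvn jboss-as:deploy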
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-the_carmart_quickstarts
Chapter 6. Applying patches with kernel live patching
Chapter 6. Applying patches with kernel live patching You can use the Red Hat Enterprise Linux kernel live patching solution to patch a running kernel without rebooting or restarting any processes. With this solution, system administrators: Can immediately apply critical security patches to the kernel. Do not have to wait for long-running tasks to complete, for users to log off, or for scheduled downtime. Control the system's uptime more and do not sacrifice security or stability. Note that not every critical or important CVE will be resolved using the kernel live patching solution. Our goal is to reduce the required reboots for security-related patches, not to eliminate them entirely. For more details about the scope of live patching, see the Customer Portal Solutions article . Warning Some incompatibilities exist between kernel live patching and other kernel subcomponents. Read the Section 6.1, "Limitations of kpatch" section carefully before using kernel live patching. Note For details about the support cadence of kernel live patching updates, see: Kernel Live Patch Support Cadence Update Kernel Live Patch life cycles 6.1. Limitations of kpatch The kpatch feature is not a general-purpose kernel upgrade mechanism. It is used for applying simple security and bug fix updates when rebooting the system is not immediately possible. Do not use the SystemTap or kprobe tools during or after loading a patch. The patch could fail to take effect until after such probes have been removed. 6.2. Support for third-party live patching The kpatch utility is the only kernel live patching utility supported by Red Hat with the RPM modules provided by Red Hat repositories. Red Hat will not support any live patches which were not provided by Red Hat itself. For support of a third-party live patch, contact the vendor that provided the patch. For any system running with third-party live patches, Red Hat reserves the right to ask for reproduction with Red Hat shipped and supported software. In the event that this is not possible, we require a similar system and workload be deployed on your test environment without live patches applied, to confirm if the same behavior is observed. For more information about third-party software support policies, see How does Red Hat Global Support Services handle third-party software, drivers, and/or uncertified hardware/hypervisors or guest operating systems? 6.3. Access to kernel live patches Kernel live patching capability is implemented as a kernel module ( .ko file) that is delivered as an RPM package. All customers have access to kernel live patches, which are delivered through the usual channels. However, customers who do not subscribe to an extended support offering will lose access to new patches for the current minor release once the minor release becomes available. For example, customers with standard subscriptions will only be able to live patch RHEL 8.2 kernels until RHEL 8.3 is released. 6.4. Components of kernel live patching The components of kernel live patching are as follows: Kernel patch module The delivery mechanism for kernel live patches. A kernel module which is built specifically for the kernel being patched. The patch module contains the code of the desired fixes for the kernel. The patch modules register with the livepatch kernel subsystem and provide information about original functions to be replaced, with corresponding pointers to the replacement functions. Kernel patch modules are delivered as RPMs. 
The naming convention is kpatch_<kernel version>_<kpatch version>_<kpatch release> . The "kernel version" part of the name has dots and dashes replaced with underscores . The kpatch utility A command-line utility for managing patch modules. The kpatch service A systemd service required by multiuser.target . This target loads the kernel patch module at boot time. 6.5. How kernel live patching works The kpatch kernel patching solution uses the livepatch kernel subsystem to redirect old functions to new ones. When a live kernel patch is applied to a system, the following things happen: The kernel patch module is copied to the /var/lib/kpatch/ directory and registered for re-application to the kernel by systemd on boot. The kpatch module is loaded into the running kernel and the patched functions are registered to the ftrace mechanism with a pointer to the location in memory of the new code. When the kernel accesses the patched function, it is redirected by the ftrace mechanism which bypasses the original functions and redirects the kernel to patched version of the function. Figure 6.1. How kernel live patching works 6.6. Enabling kernel live patching A kernel patch module is delivered in an RPM package, specific to the version of the kernel being patched. Each RPM package will be cumulatively updated over time. The following subsections describe how to ensure you receive all future cumulative live patching updates for a given kernel. Warning Red Hat does not support any third party live patches applied to a Red Hat supported system. 6.6.1. Subscribing to the live patching stream This procedure describes installing a particular live patching package. By doing so, you subscribe to the live patching stream for a given kernel and ensure that you receive all future cumulative live patching updates for that kernel. Warning Because live patches are cumulative, you cannot select which individual patches are deployed for a given kernel. Prerequisites Root permissions Procedure Optionally, check your kernel version: Search for a live patching package that corresponds to the version of your kernel: Install the live patching package: The command above installs and applies the latest cumulative live patches for that specific kernel only. The live patching package contains a patch module, if the package's version is 1-1 or higher. In that case the kernel will be automatically patched during the installation of the live patching package. The kernel patch module is also installed into the /var/lib/kpatch/ directory to be loaded by the systemd system and service manager during the future reboots. Note If there are not yet any live patches available for the given kernel, an empty live patching package will be installed. An empty live patching package will have a kpatch_version-kpatch_release of 0-0, for example kpatch-patch-3_10_0-1062-0-0.el7.x86_64.rpm . The installation of the empty RPM subscribes the system to all future live patches for the given kernel. Optionally, verify that the kernel is patched: The output shows that the kernel patch module has been loaded into the kernel, which is now patched with the latest fixes from the kpatch-patch-3_10_0-1062-1-1.el7.x86_64.rpm package. Additional resources For more information about the kpatch command-line utility, see the kpatch(1) manual page. Refer to the relevant sections of the System Administrator's Guide for further information about software packages in RHEL 7. 6.7. 
Updating kernel patch modules Since kernel patch modules are delivered and applied through RPM packages, updating a cumulative kernel patch module is like updating any other RPM package. Prerequisites Root permissions The system is subscribed to the live patching stream, as described in Section 6.6.1, "Subscribing to the live patching stream" . Procedure Update to a new cumulative version for the current kernel: The command above automatically installs and applies any updates that are available for the currently running kernel. Including any future released cumulative live patches. Alternatively, update all installed kernel patch modules: Note When the system reboots into the same kernel, the kernel is automatically live patched again by the kpatch.service service. Additional resources For further information about updating software packages, see the relevant sections of System Administrator's Guide . 6.8. Disabling kernel live patching In case system administrators encountered some unanticipated negative effects connected with the Red Hat Enterprise Linux kernel live patching solution they have a choice to disable the mechanism. The following sections describe the ways how to disable the live patching solution. Important Currently, Red Hat does not support reverting live patches without rebooting your system. In case of any issues, contact our support team. 6.8.1. Removing the live patching package The following procedure describes how to disable the Red Hat Enterprise Linux kernel live patching solution by removing the live patching package. Prerequisites Root permissions The live patching package is installed. Procedure Select the live patching package: The example output above lists live patching packages that you installed. Remove the live patching package: When a live patching package is removed, the kernel remains patched until the reboot, but the kernel patch module is removed from disk. After the reboot, the corresponding kernel will no longer be patched. Reboot your system. Verify that the live patching package has been removed: The command displays no output if the package has been successfully removed. Optionally, verify that the kernel live patching solution is disabled: The example output shows that the kernel is not patched and the live patching solution is not active because there are no patch modules that are currently loaded. Additional resources For more information about the kpatch command-line utility, see the kpatch(1) manual page. For further information about working with software packages, see the relevant sections of System Administrator's Guide . 6.8.2. Uninstalling the kernel patch module The following procedure describes how to prevent the Red Hat Enterprise Linux kernel live patching solution from applying a kernel patch module on subsequent boots. Prerequisites Root permissions A live patching package is installed. A kernel patch module is installed and loaded. Procedure Select a kernel patch module: Uninstall the selected kernel patch module: Note that the uninstalled kernel patch module is still loaded: When the selected module is uninstalled, the kernel remains patched until the reboot, but the kernel patch module is removed from disk. Reboot your system. Optionally, verify that the kernel patch module has been uninstalled: The example output above shows no loaded or installed kernel patch modules, therefore the kernel is not patched and the kernel live patching solution is not active. 
Additional resources For more information about the kpatch command-line utility, refer to the kpatch(1) manual page. 6.8.3. Disabling kpatch.service The following procedure describes how to prevent the Red Hat Enterprise Linux kernel live patching solution from applying all kernel patch modules globally on subsequent boots. Prerequisites Root permissions A live patching package is installed. A kernel patch module is installed and loaded. Procedure Verify kpatch.service is enabled: Disable kpatch.service : Note that the applied kernel patch module is still loaded: Reboot your system. Optionally, verify the status of kpatch.service : The example output testifies that kpatch.service has been disabled and is not running. Thereby, the kernel live patching solution is not active. Verify that the kernel patch module has been unloaded: The example output above shows that the kernel patch module is still installed but the kernel is not patched. Additional resources For more information about the kpatch command-line utility, see the kpatch(1) manual page. For more information about the systemd system and service manager, unit configuration files, their locations, as well as a complete list of systemd unit types, see the relevant sections in System Administrator's Guide .
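As a concrete reading of the naming convention from Section 6.4, a live patch for the 3.10.0-1062.el7.x86_64 kernel would be delivered in an RPM package such as kpatch-patch-3_10_0-1062-1-1.el7.x86_64.rpm and would load a patch module named kpatch_3_10_0_1062_1_1 , with dots and dashes in the kernel version replaced by underscores. The version and release numbers here are illustrative. To check which live patching packages and patch modules are present for the running kernel, commands along these lines can be used:
uname -r
yum list installed "kpatch-patch*"
kpatch list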
[ "uname -r 3.10.0-1062.el7.x86_64", "yum search USD(uname -r)", "yum install \"kpatch-patch = USD(uname -r)\"", "kpatch list Loaded patch modules: kpatch_3_10_0_1062_1_1 [enabled] Installed patch modules: kpatch_3_10_0_1062_1_1 (3.10.0-1062.el7.x86_64) ...", "yum update \"kpatch-patch = USD(uname -r)\"", "yum update \"kpatch-patch*\"", "yum list installed | grep kpatch-patch kpatch-patch-3_10_0-1062.x86_64 1-1.el7 @@commandline ...", "yum remove kpatch-patch-3_10_0-1062.x86_64", "yum list installed | grep kpatch-patch", "kpatch list Loaded patch modules:", "kpatch list Loaded patch modules: kpatch_3_10_0_1062_1_1 [enabled] Installed patch modules: kpatch_3_10_0_1062_1_1 (3.10.0-1062.el7.x86_64) ...", "kpatch uninstall kpatch_3_10_0_1062_1_1 uninstalling kpatch_3_10_0_1062_1_1 (3.10.0-1062.el7.x86_64)", "kpatch list Loaded patch modules: kpatch_3_10_0_1062_1_1 [enabled] Installed patch modules: < NO_RESULT >", "kpatch list Loaded patch modules:", "systemctl is-enabled kpatch.service enabled", "systemctl disable kpatch.service Removed /etc/systemd/system/multi-user.target.wants/kpatch.service.", "kpatch list Loaded patch modules: kpatch_3_10_0_1062_1_1 [enabled] Installed patch modules: kpatch_3_10_0_1062_1_1 (3.10.0-1062.el7.x86_64)", "systemctl status kpatch.service ● kpatch.service - \"Apply kpatch kernel patches\" Loaded: loaded (/usr/lib/systemd/system/kpatch.service; disabled; vendor preset: disabled) Active: inactive (dead)", "kpatch list Loaded patch modules: Installed patch modules: kpatch_3_10_0_1062_1_1 (3.10.0-1062.el7.x86_64)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/kernel_administration_guide/applying_patches_with_kernel_live_patching
2.2. Support for TCO Watchdog and I2C (SMBUS) on Intel Communications Chipset 89xx Series
2.2. Support for TCO Watchdog and I2C (SMBUS) on Intel Communications Chipset 89xx Series Red Hat Enterprise Linux 7.1 adds support for TCO Watchdog and I2C (SMBUS) on the 89xx series Intel Communications Chipset (formerly Coleto Creek).
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/sect-hardware_enablement-intel_coleto_creek
Chapter 2. Configuring a private cluster
Chapter 2. Configuring a private cluster After you install an OpenShift Container Platform version 4.7 cluster, you can set some of its core components to be private. 2.1. About private clusters By default, OpenShift Container Platform is provisioned using publicly-accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your cluster. DNS If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster's own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps , for the Ingress object, and api , for the API server. The *.apps records in the public and private zone are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster. Ingress Controller Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets. You can replace the default Ingress Controller with an internal one. API server By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic. On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancer based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster's access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path. On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer. On Microsoft Azure, both public and private load balancers are created. However, because of limitations in current implementation, you just retain both load balancers in a private cluster. 2.2. Setting DNS to private After you deploy a cluster, you can modify its DNS to use only a private zone. Procedure Review the DNS custom resource for your cluster: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {} Note that the spec section contains both a private and a public zone. Patch the DNS custom resource to remove the public zone: USD oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}' dns.config.openshift.io/cluster patched Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created. Important DNS records for the existing Ingress objects are not modified when you remove the public zone. 
Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {} 2.3. Setting the Ingress Controller to private After you deploy a cluster, you can modify its Ingress Controller to use only a private zone. Procedure Modify the default Ingress Controller to use only an internal endpoint: USD oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF Example output ingresscontroller.operator.openshift.io "default" deleted ingresscontroller.operator.openshift.io/default replaced The public DNS entry is removed, and the private zone entry is updated. 2.4. Restricting the API server to private After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for AWS or Azure, take the following actions: Locate and delete appropriate load balancer component. For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer. For Azure, delete the api-internal rule for the load balancer. Delete the api.USDclustername.USDyourdomain DNS entry in the public zone. Remove the external load balancers: Important You can run the following steps only for an installer-provisioned infrastructure (IPI) cluster. For a user-provisioned infrastructure (UPI) cluster, you must manually remove or disable the external load balancers. From your terminal, list the cluster machines: USD oc get machine -n openshift-machine-api Example output NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m You modify the control plane machines, which contain master in the name, in the following step. Remove the external load balancer from each control plane machine. Edit a control plane Machine object to remove the reference to the external load balancer: USD oc edit machines -n openshift-machine-api <master_name> 1 1 Specify the name of the control plane, or master, Machine object to modify. Remove the lines that describe the external load balancer, which are marked in the following example, and save and exit the object specification: ... spec: providerSpec: value: ... loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network 1 2 Delete this line. 
Repeat this process for each of the machines that contains master in the name.
[ "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF", "ingresscontroller.operator.openshift.io \"default\" deleted ingresscontroller.operator.openshift.io/default replaced", "oc get machine -n openshift-machine-api", "NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m", "oc edit machines -n openshift-machine-api <master_name> 1", "spec: providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/post-installation_configuration/configuring-private-cluster
Chapter 1. Red Hat OpenShift Service on AWS CLI tools overview
Chapter 1. Red Hat OpenShift Service on AWS CLI tools overview A user performs a range of operations while working on Red Hat OpenShift Service on AWS (ROSA) such as the following: Managing clusters Building, deploying, and managing applications Managing deployment processes Developing Operators Creating and maintaining Operator catalogs ROSA offers a set of command-line interface (CLI) tools that simplify these tasks by enabling users to perform various administration and development operations from the terminal. These tools expose simple commands to manage the applications, as well as interact with each component of the system. 1.1. List of CLI tools The following set of CLI tools are available in ROSA: OpenShift CLI ( oc ) : This is one of the more commonly used developer CLI tools. It helps both cluster administrators and developers to perform end-to-end operations across ROSA using the terminal. Unlike the web console, it allows the user to work directly with the project source code using command scripts. Knative CLI (kn) : The Knative ( kn ) CLI tool provides simple and intuitive terminal commands that can be used to interact with OpenShift Serverless components, such as Knative Serving and Eventing. Pipelines CLI (tkn) : OpenShift Pipelines is a continuous integration and continuous delivery (CI/CD) solution in Red Hat OpenShift Service on AWS, which internally uses Tekton. The tkn CLI tool provides simple and intuitive commands to interact with OpenShift Pipelines using the terminal. opm CLI : The opm CLI tool helps the Operator developers and cluster administrators to create and maintain the catalogs of Operators from the terminal. Operator SDK : The Operator SDK, a component of the Operator Framework, provides a CLI tool that Operator developers can use to build, test, and deploy an Operator from the terminal. It simplifies the process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. ROSA CLI ( rosa ) : Use the rosa CLI to create, update, manage, and delete ROSA clusters and resources.
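To give a sense of how these tools are used, the following commands show typical first invocations. The cluster URL, token, and resource names are placeholders, and the available subcommands can vary between releases, so check each tool's --help output:
oc login https://api.<cluster_domain>:6443 --token=<token>   # authenticate the OpenShift CLI against a cluster
oc get nodes                                                  # list the nodes in the cluster
kn service list                                               # list Knative services with the Knative CLI
tkn pipelinerun list                                          # list pipeline runs with the Pipelines CLI
rosa list clusters                                            # list your clusters with the ROSA CLI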
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/cli_tools/cli-tools-overview
4.6. Booleans
4.6. Booleans Booleans allow parts of SELinux policy to be changed at runtime, without any knowledge of SELinux policy writing. This allows changes, such as allowing services access to NFS volumes, without reloading or recompiling SELinux policy. 4.6.1. Listing Booleans For a list of Booleans, an explanation of what each one is, and whether they are on or off, run the semanage boolean -l command as the Linux root user. The following example does not list all Booleans and the output is shortened for brevity: Note To have more detailed descriptions, install the selinux-policy-devel package. The SELinux boolean column lists Boolean names. The Description column lists whether the Booleans are on or off, and what they do. The getsebool -a command lists Booleans, whether they are on or off, but does not give a description of each one. The following example does not list all Booleans: Run the getsebool boolean-name command to only list the status of the boolean-name Boolean: Use a space-separated list to list multiple Booleans: 4.6.2. Configuring Booleans Run the setsebool utility in the setsebool boolean_name on/off form to enable or disable Booleans. The following example demonstrates configuring the httpd_can_network_connect_db Boolean: Procedure 4.5. Configuring Booleans By default, the httpd_can_network_connect_db Boolean is off, preventing Apache HTTP Server scripts and modules from connecting to database servers: To temporarily enable Apache HTTP Server scripts and modules to connect to database servers, enter the following command as root: Use the getsebool utility to verify the Boolean has been enabled: This allows Apache HTTP Server scripts and modules to connect to database servers. This change is not persistent across reboots. To make changes persistent across reboots, run the setsebool -P boolean-name on command as root: [3] 4.6.3. Shell Auto-Completion It is possible to use shell auto-completion with the getsebool , setsebool , and semanage utilities. Use the auto-completion with getsebool and setsebool to complete both command-line parameters and Booleans. To list only the command-line parameters, add the hyphen character ("-") after the command name and hit the Tab key: To complete a Boolean, start writing the Boolean name and then hit Tab : The semanage utility is used with several command-line arguments that are completed one by one. The first argument of a semanage command is an option, which specifies what part of SELinux policy is managed: Then, one or more command-line parameters follow: Finally, complete the name of a particular SELinux entry, such as a Boolean, SELinux user, domain, or another. Start typing the entry and hit Tab : Command-line parameters can be chained in a command: [3] To temporarily revert to the default behavior, as the Linux root user, run the setsebool httpd_can_network_connect_db off command. For changes that persist across reboots, run the setsebool -P httpd_can_network_connect_db off command.
[ "~]# semanage boolean -l SELinux boolean State Default Description smartmon_3ware (off , off) Determine whether smartmon can mpd_enable_homedirs (off , off) Determine whether mpd can traverse", "~]USD getsebool -a cvs_read_shadow --> off daemons_dump_core --> on", "~]USD getsebool cvs_read_shadow cvs_read_shadow --> off", "~]USD getsebool cvs_read_shadow daemons_dump_core cvs_read_shadow --> off daemons_dump_core --> on", "~]USD getsebool httpd_can_network_connect_db httpd_can_network_connect_db --> off", "~]# setsebool httpd_can_network_connect_db on", "~]USD getsebool httpd_can_network_connect_db httpd_can_network_connect_db --> on", "~]# setsebool -P httpd_can_network_connect_db on", "~]# setsebool -[Tab] -P", "~]USD getsebool samba_[Tab] samba_create_home_dirs samba_export_all_ro samba_run_unconfined samba_domain_controller samba_export_all_rw samba_share_fusefs samba_enable_home_dirs samba_portmapper samba_share_nfs", "~]# setsebool -P virt_use_[Tab] virt_use_comm virt_use_nfs virt_use_sanlock virt_use_execmem virt_use_rawip virt_use_usb virt_use_fusefs virt_use_samba virt_use_xserver", "~]# semanage [Tab] boolean export import login node port dontaudit fcontext interface module permissive user", "~]# semanage fcontext -[Tab] -a -D --equal --help -m -o --add --delete -f -l --modify -S -C --deleteall --ftype --list -n -t -d -e -h --locallist --noheading --type", "~]# semanage fcontext -a -t samba<tab> samba_etc_t samba_secrets_t sambagui_exec_t samba_share_t samba_initrc_exec_t samba_unconfined_script_exec_t samba_log_t samba_unit_file_t samba_net_exec_t", "~]# semanage port -a -t http_port_t -p tcp 81" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-booleans
7.168. openssl
7.168. openssl 7.168.1. RHBA-2013:0443 - openssl bug fix update Updated openssl packages that fix four bugs are now available for Red Hat Enterprise Linux 6. The openssl packages provide a toolkit that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, as well as a full-strength general-purpose cryptography library. Bug Fixes BZ# 770872 Prior to this update, the pkgconfig configuration files of OpenSSL libraries contained an invalid libdir value. This update modifies the underlying code to use the correct libdir value. BZ#800088 Prior to this update, the openssl function "BIO_new_accept()" failed to listen on IPv4 addresses when this function was invoked with the "*:port" parameter. As a consequence, users failed to connect via IPv4 to a server that used this function call with the "*:port" parameter. This update modifies this function to listen on IPv4 address with this parameter as expected. BZ# 841645 Prior to this update, encrypted private key files that were saved in FIPS mode were corrupted because the PEM encryption uses hash algorithms that are not available in FIPS mode. This update uses the PKCS#8 encrypted format to write private keys to files in FIPS mode. This file format uses only algorithms that are available in FIPS mode. BZ# 841645 The manual page for "rand", the pseudo-random number generator, is named "sslrand" to avoid conflict with the manual page for the C library "rand()" function. This update provides the "openssl" manual page update to reflect this. All users of openssl are advised to upgrade to these updated packages, which fix these bugs.
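As an illustration of the PKCS#8 change only (this command is not part of the erratum, and the cipher name accepted by -v2 depends on the OpenSSL build), a private key can be written in the encrypted PKCS#8 format like this:

# Convert a PEM private key to the encrypted PKCS#8 format
openssl pkcs8 -topk8 -v2 aes-256-cbc -in key.pem -out key-pkcs8.pem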
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/openssl
Chapter 1. Navigating CodeReady Workspaces
Chapter 1. Navigating CodeReady Workspaces This chapter describes available methods to navigate Red Hat CodeReady Workspaces. Section 1.1, "Navigating CodeReady Workspaces using the Dashboard" Section 1.2, "Importing certificates to browsers" Section 1.3, "Accessing CodeReady Workspaces from OpenShift Developer Perspective" 1.1. Navigating CodeReady Workspaces using the Dashboard The Dashboard is accessible on your cluster from a URL such as \https://codeready-<openshift_deployment_name>.<domain_name>/dashboard . This section describes how to access this URL on OpenShift. 1.1.1. Logging in to CodeReady Workspaces on OpenShift for the first time using OAuth This section describes how to log in to CodeReady Workspaces on OpenShift for the first time using OAuth. Prerequisites Contact the administrator of the OpenShift instance to obtain the Red Hat CodeReady Workspaces URL . Procedure Navigate to the Red Hat CodeReady Workspaces URL to display the Red Hat CodeReady Workspaces login page. Choose the OpenShift OAuth option. The Authorize Access page is displayed. Click on the Allow selected permissions button. Update the account information: specify the Username , Email , First name and Last name fields and click the Submit button. Validation steps The browser displays the Red Hat CodeReady Workspaces Dashboard . 1.1.2. Logging in to CodeReady Workspaces on OpenShift for the first time registering as a new user This section describes how to log in to CodeReady Workspaces on OpenShift for the first time registering as a new user. Prerequisites Contact the administrator of the OpenShift instance to obtain the Red Hat CodeReady Workspaces URL . Self-registration is enabled. See Enabling self-registration . Procedure Navigate to the Red Hat CodeReady Workspaces URL to display the Red Hat CodeReady Workspaces login page. Choose the Register as a new user option. Update the account information: specify the Username , Email , First name and Last name field and click the Submit button. Validation steps The browser displays the Red Hat CodeReady Workspaces Dashboard . 1.1.3. Logging in to CodeReady Workspaces using crwctl This section describes how to log in to CodeReady Workspaces using the crwctl tool by copying login command from CodeReady Workspaces Dashboard. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#installing-che.adoc . The CodeReady Workspaces CLI management tool. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#using-the-chectl-management-tool.adoc . Red Hat CodeReady Workspaces Dashboard is opened in a browser. Procedure Using the upper-right corner of Dashboard, open the user's pop-up menu. Select the Copy crwctl login command option. Wait for the notification message The login command copied to clipboard to display. Paste the login command into a terminal and observe a successful login: 1.1.4. Finding CodeReady Workspaces cluster URL using the OpenShift 4 CLI This section describes how to obtain the CodeReady Workspaces cluster URL using the OpenShift 4 command line interface (CLI). The URL can be retrieved from the OpenShift logs or from the checluster Custom Resource. Prerequisites An instance of Red Hat CodeReady Workspaces running on OpenShift. User is located in a CodeReady Workspaces installation project. 
Procedure To retrieve the CodeReady Workspaces cluster URL from the checluster CR (Custom Resource), run: USD oc get checluster --output jsonpath='{.items[0].status.cheURL}' Alternatively, to retrieve the CodeReady Workspaces cluster URL from the OpenShift logs, run: USD oc logs --tail=10 `(oc get pods -o name | grep operator)` | \ grep "available at" | \ awk -F'available at: ' '{print USD2}' | sed 's/"//' 1.2. Importing certificates to browsers This section describes how to import a root certificate authority into a web browser to use CodeReady Workspaces with self-signed TLS certificates. When a TLS certificate is not trusted, the error message "Your CodeReady Workspaces server may be using a self-signed certificate. To resolve the issue, import the server CA certificate in the browser." blocks the login process. To prevent this, add the public part of the self-signed CA certificate into the browser after installing CodeReady Workspaces. 1.2.1. Adding certificates to Google Chrome on Linux or Windows Procedure Navigate to the URL where CodeReady Workspaces is deployed. Save the certificate: Click the warning or open lock icon on the left of the address bar. Click Certificates and navigate to the Details tab. Select the top-level certificate, which is the needed Root certificate authority (do not export the unfolded certificate from the lower level), and export it: On Linux, click the Export button. On Windows, click the Save to file button. Go to Google Chrome Certificates settings in the Privacy and security section and navigate to the Authorities tab. Click the Import button and open the saved certificate file. Select Trust this certificate for identifying websites and click the OK button. After adding the CodeReady Workspaces certificate to the browser, the address bar displays the closed lock icon next to the URL, indicating a secure connection. 1.2.2. Adding certificates to Google Chrome on macOS Procedure Navigate to the URL where CodeReady Workspaces is deployed. Save the certificate: Click the lock icon on the left of the address bar. Click Certificates . Select the certificate to use and drag its displayed large icon to the desktop. Open the Keychain Access application. Select the System keychain and drag the saved certificate file to it. Double-click the imported CA, then go to Trust and select When using this certificate : Always Trust . Restart the browser for the added certificate to take effect. 1.2.3. Adding certificates to Mozilla Firefox Procedure Navigate to the URL where CodeReady Workspaces is deployed. Save the certificate: Click the lock icon on the left of the address bar. Click the > button next to the Connection not secure warning. Click the More information button. Click the View Certificate button on the Security tab. Select the second certificate tab. The certificate Common Name should start with ingress-operator . Click the PEM (cert) link and save the certificate. Navigate to about:preferences , search for certificates , and click View Certificates . Go to the Authorities tab, click the Import button, and open the saved certificate file. Check Trust this CA to identify websites and click OK . Restart Mozilla Firefox for the added certificate to take effect. After adding the CodeReady Workspaces certificate to the browser, the address bar displays the closed lock icon next to the URL, indicating a secure connection. 1.3. 
Accessing CodeReady Workspaces from OpenShift Developer Perspective The OpenShift Container Platform web console provides two perspectives; the Administrator perspective and the Developer perspective. The Developer perspective provides workflows specific to developer use cases, such as the ability to: Create and deploy applications on OpenShift Container Platform by importing existing codebases, images, and Dockerfiles. Visually interact with applications, components, and services associated with them within a project and monitor their deployment and build status. Group components within an application and connect the components within and across applications. Integrate serverless capabilities (Technology Preview). Create workspaces to edit your application code using CodeReady Workspaces. 1.3.1. OpenShift Developer Perspective integration with CodeReady Workspaces This section provides information about OpenShift Developer Perspective support for CodeReady Workspaces. When the CodeReady Workspaces Operator is deployed into OpenShift Container Platform 4.2 and later, it creates a ConsoleLink Custom Resource (CR). This adds an interactive link to the Red Hat Applications menu for accessing the CodeReady Workspaces installation using the OpenShift Developer Perspective console. To access the Red Hat Applications menu, click the three-by-three matrix icon on the main screen of the OpenShift web console. The CodeReady Workspaces Console Link , displayed in the drop-down menu, creates a new workspace or redirects the user to an existing one. Note OpenShift Container Platform console links are not created when CodeReady Workspaces is used with HTTP resources When installing CodeReady Workspaces with the From Git option, the OpenShift Developer Perspective console link is only created if CodeReady Workspaces is deployed with HTTPS. The console link will not be created if an HTTP resource is used. 1.3.2. Editing the code of applications running in OpenShift Container Platform using CodeReady Workspaces This section describes how to start editing the source code of applications running on OpenShift using CodeReady Workspaces. Prerequisites CodeReady Workspaces is deployed on the same OpenShift 4 cluster. Procedure Open the Topology view to list all projects. In the Select an Application search field, type workspace to list all workspaces. Click the workspace to edit. The deployments are displayed as graphical circles surrounded by circular buttons. One of these buttons is Edit Source Code . To edit the code of an application using CodeReady Workspaces, click the Edit Source Code button. This redirects to a workspace with the cloned source code of the application component. 1.3.3. Accessing CodeReady Workspaces from Red Hat Applications menu This section describes how to access CodeReady Workspaces workspaces from the Red Hat Applications menu on OpenShift Container Platform. Prerequisites The CodeReady Workspaces Operator is available in OpenShift 4. Procedure Open the Red Hat Applications menu by using the three-by-three matrix icon in the upper right corner of the main screen. The drop-down menu displays the available applications. Click the CodeReady Workspaces link to open the CodeReady Workspaces Dashboard.
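A quick sketch for verifying the integration described above; the grep pattern is an assumption about how the ConsoleLink resource is named:

# Confirm the Operator created the ConsoleLink backing the Red Hat Applications menu entry
oc get consolelink | grep -i codeready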
[ "crwctl auth:login Successfully logged into <server> as <user>", "oc get checluster --output jsonpath='{.items[0].status.cheURL}'", "oc logs --tail=10 `(oc get pods -o name | grep operator)` | grep \"available at\" | awk -F'available at: ' '{print USD2}' | sed 's/\"//'" ]
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/end-user_guide/navigating-codeready-workspaces_crw
Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1]
Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1] Description ImageContentSourcePolicy holds cluster-wide information about how to handle registry mirror rules. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration 13.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description repositoryDigestMirrors array repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. repositoryDigestMirrors[] object RepositoryDigestMirrors holds cluster-wide information about how to handle mirros in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. 13.1.2. .spec.repositoryDigestMirrors Description repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. 
Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Type array 13.1.3. .spec.repositoryDigestMirrors[] Description RepositoryDigestMirrors holds cluster-wide information about how to handle mirros in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. Type object Required source Property Type Description mirrors array (string) mirrors is one or more repositories that may also contain the same images. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. Other cluster configuration, including (but not limited to) other repositoryDigestMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. source string source is the repository that users refer to, e.g. in image pull specifications. 13.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies DELETE : delete collection of ImageContentSourcePolicy GET : list objects of kind ImageContentSourcePolicy POST : create an ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} DELETE : delete an ImageContentSourcePolicy GET : read the specified ImageContentSourcePolicy PATCH : partially update the specified ImageContentSourcePolicy PUT : replace the specified ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status GET : read status of the specified ImageContentSourcePolicy PATCH : partially update status of the specified ImageContentSourcePolicy PUT : replace status of the specified ImageContentSourcePolicy 13.2.1. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies HTTP method DELETE Description delete collection of ImageContentSourcePolicy Table 13.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageContentSourcePolicy Table 13.2. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicyList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageContentSourcePolicy Table 13.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.4. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.5. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 202 - Accepted ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.2. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} Table 13.6. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy HTTP method DELETE Description delete an ImageContentSourcePolicy Table 13.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageContentSourcePolicy Table 13.9. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageContentSourcePolicy Table 13.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.11. 
HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageContentSourcePolicy Table 13.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.13. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.14. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.3. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status Table 13.15. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy HTTP method GET Description read status of the specified ImageContentSourcePolicy Table 13.16. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageContentSourcePolicy Table 13.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.18. 
HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageContentSourcePolicy Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.21. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty
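A minimal sketch of creating the resource with the OpenShift CLI rather than calling the REST endpoints directly; the registry host names below are placeholders:

# Create an ImageContentSourcePolicy using only the fields documented above
cat <<'EOF' | oc apply -f -
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: example-mirror
spec:
  repositoryDigestMirrors:
  - source: registry.example.com/team/app
    mirrors:
    - mirror.example.com/team/app
EOF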
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operator_apis/imagecontentsourcepolicy-operator-openshift-io-v1alpha1
Chapter 3. Understanding persistent storage
Chapter 3. Understanding persistent storage 3.1. Persistent storage overview Managing storage is a distinct problem from managing compute resources. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure. PVCs are specific to a project, and are created and used by developers as a means to use a PV. PV resources on their own are not scoped to any single project; they can be shared across the entire OpenShift Container Platform cluster and claimed from any project. After a PV is bound to a PVC, that PV can not then be bound to additional PVCs. This has the effect of scoping a bound PV to a single namespace, that of the binding project. PVs are defined by a PersistentVolume API object, which represents a piece of existing storage in the cluster that was either statically provisioned by the cluster administrator or dynamically provisioned using a StorageClass object. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes but have a lifecycle that is independent of any individual pod that uses the PV. PV objects capture the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. Important High availability of storage in the infrastructure is left to the underlying storage provider. PVCs are defined by a PersistentVolumeClaim API object, which represents a request for storage by a developer. It is similar to a pod in that pods consume node resources and PVCs consume PV resources. For example, pods can request specific levels of resources, such as CPU and memory, while PVCs can request specific storage capacity and access modes. For example, they can be mounted once read-write or many times read-only. 3.2. Lifecycle of a volume and claim PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs have the following lifecycle. 3.2.1. Provision storage In response to requests from a developer defined in a PVC, a cluster administrator configures one or more dynamic provisioners that provision storage and a matching PV. Alternatively, a cluster administrator can create a number of PVs in advance that carry the details of the real storage that is available for use. PVs exist in the API and are available for use. 3.2.2. Bind claims When you create a PVC, you request a specific amount of storage, specify the required access mode, and create a storage class to describe and classify the storage. The control loop in the master watches for new PVCs and binds the new PVC to an appropriate PV. If an appropriate PV does not exist, a provisioner for the storage class creates one. The size of all PVs might exceed your PVC size. This is especially true with manually provisioned PVs. To minimize the excess, OpenShift Container Platform binds to the smallest PV that matches all other criteria. Claims remain unbound indefinitely if a matching volume does not exist or can not be created with any available provisioner servicing a storage class. Claims are bound as matching volumes become available. For example, a cluster with many manually provisioned 50Gi volumes would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster. 
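A minimal sketch of the provision-and-bind flow described above (the claim fields follow the PVC example later in this chapter; the gold storage class is a placeholder):

# Request storage with a PVC, then watch the control loop bind it to a PV
cat <<'EOF' | oc apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: gold
EOF
oc get pvc myclaim --watch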
3.2.3. Use pods and claimed PVs Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a pod. For those volumes that support multiple access modes, you must specify which mode applies when you use the claim as a volume in a pod. Once you have a claim and that claim is bound, the bound PV belongs to you for as long as you need it. You can schedule pods and access claimed PVs by including persistentVolumeClaim in the pod's volumes block. Note If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . 3.2.4. Storage Object in Use Protection The Storage Object in Use Protection feature ensures that PVCs in active use by a pod and PVs that are bound to PVCs are not removed from the system, as this can result in data loss. Storage Object in Use Protection is enabled by default. Note A PVC is in active use by a pod when a Pod object exists that uses the PVC. If a user deletes a PVC that is in active use by a pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any pods. Also, if a cluster admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC. 3.2.5. Release a persistent volume When you are finished with a volume, you can delete the PVC object from the API, which allows reclamation of the resource. The volume is considered released when the claim is deleted, but it is not yet available for another claim. The claimant's data remains on the volume and must be handled according to policy. 3.2.6. Reclaim policy for persistent volumes The reclaim policy of a persistent volume tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Retain reclaim policy allows manual reclamation of the resource for those volume plugins that support it. Recycle reclaim policy recycles the volume back into the pool of unbound persistent volumes once it is released from its claim. Important The Recycle reclaim policy is deprecated in OpenShift Container Platform 4. Dynamic provisioning is recommended for equivalent and better functionality. Delete reclaim policy deletes both the PersistentVolume object from OpenShift Container Platform and the associated storage asset in external infrastructure, such as Amazon Elastic Block Store (Amazon EBS) or VMware vSphere. Note Dynamically provisioned volumes are always deleted. 3.2.7. Reclaiming a persistent volume manually When a persistent volume claim (PVC) is deleted, the persistent volume (PV) still exists and is considered "released". However, the PV is not yet available for another claim because the data of the claimant remains on the volume. Procedure To manually reclaim the PV as a cluster administrator: Delete the PV. USD oc delete pv <pv-name> The associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume, still exists after the PV is deleted. Clean up the data on the associated storage asset. Delete the associated storage asset. Alternately, to reuse the same storage asset, create a new PV with the storage asset definition. The reclaimed PV is now available for use by another PVC. 3.2.8. 
Changing the reclaim policy of a persistent volume To change the reclaim policy of a persistent volume: List the persistent volumes in your cluster: USD oc get pv Example output NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s Choose one of your persistent volumes and change its reclaim policy: USD oc patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' Verify that your chosen persistent volume has the right policy: USD oc get pv Example output NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s In the preceding output, the volume bound to claim default/claim3 now has a Retain reclaim policy. The volume will not be automatically deleted when a user deletes claim default/claim3 . 3.3. Persistent volumes Each PV contains a spec and status , which is the specification and status of the volume, for example: PersistentVolume object definition example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 ... status: ... 1 Name of the persistent volume. 2 The amount of storage available to the volume. 3 The access mode, defining the read-write and mount permissions. 4 The reclaim policy, indicating how the resource should be handled once it is released. You can view the name of a PVC that is bound to a PV by running the following command: USD oc get pv <pv-name> -o jsonpath='{.spec.claimRef.name}' 3.3.1. Types of PVs OpenShift Container Platform supports the following persistent volume plugins: AWS Elastic Block Store (EBS) AWS Elastic File Store (EFS) Azure Disk Azure File Cinder Fibre Channel GCP Persistent Disk GCP Filestore IBM Power Virtual Server Block IBM Cloud(R) VPC Block HostPath iSCSI Local volume NFS OpenStack Manila Red Hat OpenShift Data Foundation CIFS/SMB VMware vSphere 3.3.2. Capacity Generally, a persistent volume (PV) has a specific storage capacity. This is set by using the capacity attribute of the PV. Currently, storage capacity is the only resource that can be set or requested. Future attributes may include IOPS, throughput, and so on. 3.3.3. Access modes A persistent volume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read-write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities. Claims are matched to volumes with similar access modes. The only two matching criteria are access modes and size. A claim's access modes represent a request. Therefore, you might be granted more, but never less. For example, if a claim requests RWO, but the only volume available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports RWO. 
Direct matches are always attempted first. The volume's modes must match or contain more modes than you requested. The size must be greater than or equal to what is expected. If two types of volumes, such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those modes. There is no ordering between types of volumes and no way to choose one type over another. All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder gets the group with matching modes and iterates over each, in size order, until one size matches. Important Volume access modes describe volume capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource. Errors in the provider show up at runtime as mount errors. For example, NFS offers ReadWriteOnce access mode. If you want to use the volume's ROX capability, mark the claims as ReadOnlyMany . iSCSI and Fibre Channel volumes do not currently have any fencing mechanisms. You must ensure the volumes are only used by one node at a time. In certain situations, such as draining a node, the volumes can be used simultaneously by two nodes. Before draining the node, delete the pods that use the volumes. The following table lists the access modes: Table 3.1. Access modes Access Mode CLI abbreviation Description ReadWriteOnce RWO The volume can be mounted as read-write by a single node. ReadWriteOncePod [1] RWOP The volume can be mounted as read-write by a single pod on a single node. ReadOnlyMany ROX The volume can be mounted as read-only by many nodes. ReadWriteMany RWX The volume can be mounted as read-write by many nodes. RWOP uses the SELinux mount feature. This feature is driver dependent, and enabled by default in ODF, AWS EBS, Azure Disk, GCP PD, IBM Cloud Block Storage volume, Cinder, and vSphere. For third-party drivers, please contact your storage vendor. Table 3.2. Supported access modes for persistent volumes Volume plugin ReadWriteOnce [1] ReadWriteOncePod ReadOnlyMany ReadWriteMany AWS EBS [2] ✅ ✅ AWS EFS ✅ ✅ ✅ ✅ Azure File ✅ ✅ ✅ ✅ Azure Disk ✅ ✅ CIFS/SMB ✅ ✅ ✅ ✅ Cinder ✅ ✅ Fibre Channel ✅ ✅ ✅ ✅ [3] GCP Persistent Disk ✅ ✅ GCP Filestore ✅ ✅ ✅ ✅ HostPath ✅ ✅ IBM Power Virtual Server Disk ✅ ✅ ✅ ✅ IBM Cloud(R) VPC Disk ✅ ✅ iSCSI ✅ ✅ ✅ ✅ [3] Local volume ✅ ✅ LVM Storage ✅ ✅ NFS ✅ ✅ ✅ ✅ OpenStack Manila ✅ ✅ Red Hat OpenShift Data Foundation ✅ ✅ ✅ VMware vSphere ✅ ✅ ✅ [4] ReadWriteOnce (RWO) volumes cannot be mounted on multiple nodes. If a node fails, the system does not allow the attached RWO volume to be mounted on a new node because it is already assigned to the failed node. If you encounter a multi-attach error message as a result, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached. Use a recreate deployment strategy for pods that rely on AWS EBS. Only raw block volumes support the ReadWriteMany (RWX) access mode for Fibre Channel and iSCSI. For more information, see "Block volume support". If the underlying vSphere environment supports the vSAN file service, then the vSphere Container Storage Interface (CSI) Driver Operator installed by OpenShift Container Platform supports provisioning of ReadWriteMany (RWX) volumes. If you do not have vSAN file service configured, and you request RWX, the volume fails to get created and an error is logged. 
For more information, see "Using Container Storage Interface" "VMware vSphere CSI Driver Operator". 3.3.4. Phase Volumes can be found in one of the following phases: Table 3.3. Volume phases Phase Description Available A free resource not yet bound to a claim. Bound The volume is bound to a claim. Released The claim was deleted, but the resource is not yet reclaimed by the cluster. Failed The volume has failed its automatic reclamation. 3.3.4.1. Last phase transition time The LastPhaseTransitionTime field has a timestamp that updates every time a persistent volume (PV) transitions to a different phase ( pv.Status.Phase ). To find the time of the last phase transition for a PV, run the following command: USD oc get pv <pv-name> -o json | jq '.status.lastPhaseTransitionTime' 1 1 Specify the name of the PV that you want to see the last phase transition. 3.3.4.2. Mount options You can specify mount options while mounting a PV by using the attribute mountOptions . For example: Mount options example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default 1 Specified mount options are used while mounting the PV to the disk. The following PV types support mount options: AWS Elastic Block Store (EBS) Azure Disk Azure File Cinder GCE Persistent Disk iSCSI Local volume NFS Red Hat OpenShift Data Foundation (Ceph RBD only) CIFS/SMB VMware vSphere Note Fibre Channel and HostPath PVs do not support mount options. Additional resources ReadWriteMany vSphere volume support 3.4. Persistent volume claims Each PersistentVolumeClaim object contains a spec and status , which is the specification and status of the persistent volume claim (PVC), for example: PersistentVolumeClaim object definition example kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status: ... 1 Name of the PVC. 2 The access mode, defining the read-write and mount permissions. 3 The amount of storage available to the PVC. 4 Name of the StorageClass required by the claim. 3.4.1. Storage classes Claims can optionally request a specific storage class by specifying the storage class's name in the storageClassName attribute. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC. The cluster administrator can configure dynamic provisioners to service one or more storage classes. The cluster administrator can create a PV on demand that matches the specifications in the PVC. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the Operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. The cluster administrator can also set a default storage class for all PVCs. When a default storage class is configured, the PVC must explicitly ask for StorageClass or storageClassName annotations set to "" to be bound to a PV without a storage class. Note If more than one storage class is marked as default, a PVC can only be created if the storageClassName is explicitly specified. Therefore, only one storage class should be set as the default. 3.4.2. 
Access modes Claims use the same conventions as volumes when requesting storage with specific access modes. 3.4.3. Resources Claims, such as pods, can request specific quantities of a resource. In this case, the request is for storage. The same resource model applies to volumes and claims. 3.4.4. Claims as volumes Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod using the claim. The cluster finds the claim in the pod's namespace and uses it to get the PersistentVolume backing the claim. The volume is mounted to the host and into the pod, for example: Mount volume to the host and into the pod example kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: "/var/www/html" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3 1 Path to mount the volume inside the pod. 2 Name of the volume to mount. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 Name of the PVC, that exists in the same namespace, to use. 3.5. Block volume support OpenShift Container Platform can statically provision raw block volumes. These volumes do not have a file system, and can provide performance benefits for applications that either write to the disk directly or implement their own storage service. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PVC specification. Important Pods using raw block volumes must be configured to allow privileged containers. The following table displays which volume plugins support block volumes. Table 3.4. Block volume support Volume Plugin Manually provisioned Dynamically provisioned Fully supported Amazon Elastic Block Store (Amazon EBS) ✅ ✅ ✅ Amazon Elastic File Storage (Amazon EFS) Azure Disk ✅ ✅ ✅ Azure File Cinder ✅ ✅ ✅ Fibre Channel ✅ ✅ GCP ✅ ✅ ✅ HostPath IBM Cloud Block Storage volume ✅ ✅ ✅ iSCSI ✅ ✅ Local volume ✅ ✅ LVM Storage ✅ ✅ ✅ NFS Red Hat OpenShift Data Foundation ✅ ✅ ✅ CIFS/SMB ✅ ✅ ✅ VMware vSphere ✅ ✅ ✅ Important Using any of the block volumes that can be provisioned manually, but are not provided as fully supported, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.5.1. Block volume examples PV example apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: ["50060e801049cfd1"] lun: 0 readOnly: false 1 volumeMode must be set to Block to indicate that this PV is a raw block volume. PVC example apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi 1 volumeMode must be set to Block to indicate that a raw block PVC is requested. 
Pod specification example apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: ["/bin/sh", "-c"] args: [ "tail -f /dev/null" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3 1 volumeDevices , instead of volumeMounts , is used for block devices. Only PersistentVolumeClaim sources can be used with raw block volumes. 2 devicePath , instead of mountPath , represents the path to the physical device where the raw block is mapped to the system. 3 The volume source must be of type persistentVolumeClaim and must match the name of the PVC as expected. Table 3.5. Accepted values for volumeMode Value Default Filesystem Yes Block No Table 3.6. Binding scenarios for block volumes PV volumeMode PVC volumeMode Binding result Filesystem Filesystem Bind Unspecified Unspecified Bind Filesystem Unspecified Bind Unspecified Filesystem Bind Block Block Bind Unspecified Block No Bind Block Unspecified No Bind Filesystem Block No Bind Block Filesystem No Bind Important Unspecified values result in the default value of Filesystem . 3.6. Using fsGroup to reduce pod timeouts If a storage volume contains many files (~1,000,000 or greater), you may experience pod timeouts. This can occur because, by default, OpenShift Container Platform recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a pod's securityContext when that volume is mounted. For large volumes, checking and changing ownership and permissions can be time consuming, slowing pod startup. You can use the fsGroupChangePolicy field inside a securityContext to control the way that OpenShift Container Platform checks and manages ownership and permissions for a volume. fsGroupChangePolicy defines behavior for changing ownership and permission of the volume before being exposed inside a pod. This field only applies to volume types that support fsGroup -controlled ownership and permissions. This field has two possible values: OnRootMismatch : Only change permissions and ownership if permission and ownership of root directory does not match with expected permissions of the volume. This can help shorten the time it takes to change ownership and permission of a volume to reduce pod timeouts. Always : Always change permission and ownership of the volume when a volume is mounted. fsGroupChangePolicy example securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: "OnRootMismatch" 1 ... 1 OnRootMismatch specifies skipping recursive permission change, thus helping to avoid pod timeout problems. Note The fsGroupChangePolicy field has no effect on ephemeral volume types, such as secret, configMap, and emptydir.
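A short verification sketch using the names from the block volume examples above (it assumes the pod is running in the current project):

# The raw block volume should appear as a device, not a mounted file system
oc exec pod-with-block-volume -- ls -l /dev/xvda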
[ "oc delete pv <pv-name>", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s", "oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:", "oc get pv <pv-name> -o jsonpath='{.spec.claimRef.name}'", "oc get pv <pv-name> -o json | jq '.status.lastPhaseTransitionTime' 1", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:", "kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3", "apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi", "apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3", "securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: \"OnRootMismatch\" 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/storage/understanding-persistent-storage
Chapter 7. Configuring the systems and running tests using RHCert CLI Tool
Chapter 7. Configuring the systems and running tests using RHCert CLI Tool To complete the certification process using CLI, you must prepare the host under test (HUT) and test server, run the tests, and retrieve the test results. 7.1. Using the test plan to prepare the host under test for testing Running the provision command performs a number of operations, such as setting up passwordless SSH communication with the test server, installing the required packages on your system based on the certification type, and creating a final test plan to run, which is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required hardware or software packages will be installed if the test plan is designed for certifying a hardware or a software product. Prerequisites You have the hostname or the IP address of the test server. Procedure Run the provision command in either of the following ways. The test plan is automatically downloaded to your system. If you have already downloaded the test plan: Replace <path_to_test_plan_document> with the test plan file saved on your system. Follow the on-screen instructions. If you have not downloaded the test plan: Follow the on-screen instructions and enter your Certification ID when prompted. When prompted, provide the hostname or the IP address of the test server to set up passwordless SSH. You are prompted only the first time you add a new system. 7.2. Using the test plan to prepare the test server for testing Running the provision command enables and starts the rhcertd service, which configures services specified in the test suite on the test server, such as iperf for network testing, and an NFS mount point used in kdump testing. Prerequisites You have the hostname or IP address of the host under test. Procedure Run the provision command, assigning the test server role to the system you are adding. This is required only for provisioning the test server. Replace <path_to_test_plan_document> with the test plan file saved on your system. 7.3. Running the certification tests using CLI Procedure Run the following command: When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . Note After a test reboot, rhcert is running in the background to verify the image. Use tail -f /var/log/rhcert/RedHatCertDaemon.log to see the current progress and status of the verification. 7.4. Submitting the test results file Procedure Log in to authenticate your device. Note Logging in is mandatory to submit the test results file. Open the generated URL in a new browser window or tab. Enter the login and password and click Log in . Click Grant access . A Device log in successful message is displayed. Return to the terminal and enter yes to the Please confirm once you grant access prompt. Submit the results file. When prompted, enter your Certification ID.
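A condensed sketch of the full flow on the host under test, consolidating the commands from this chapter (the test plan path is a placeholder kept from the procedure above):

rhcert-provision <path_to_test_plan_document>
rhcert-run
rhcert-cli login
rhcert-submit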
[ "rhcert-provision <path_to_test_plan_document>", "rhcert-provision", "rhcert-provision --role test-server <path_to_test_plan_document>", "rhcert-run", "rhcert-cli login", "rhcert-submit" ]
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_test_suite_user_guide/assembly_configuring-the-hosts-and-running-tests-by-using-cli_hw-test-suite-configure-hosts-run-tests-use-cockpit
Chapter 11. Network configuration
Chapter 11. Network configuration This section describes the basics of network configuration using the Assisted Installer. 11.1. Cluster networking There are various network types and addresses used by OpenShift and listed in the table below. Type DNS Description clusterNetwork The IP address pools from which Pod IP addresses are allocated. serviceNetwork The IP address pool for services. machineNetwork The IP address blocks for machines forming the cluster. apiVIP api.<clustername.clusterdomain> The VIP to use for API communication. This setting must either be provided or pre-configured in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address. apiVIPs api.<clustername.clusterdomain> The VIPs to use for API communication. This setting must either be provided or pre-configured in the DNS so that the default name resolves correctly. If using dual stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting. ingressVIP *.apps.<clustername.clusterdomain> The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address. ingressVIPs *.apps.<clustername.clusterdomain> The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting. Note OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept multiple IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings will replace apiVIP and IngressVIP , but you must set both the new and old settings when modifying the configuration using the API. Depending on the desired network stack, you can choose different network controllers. Currently, the Assisted Service can deploy OpenShift Container Platform clusters using one of the following configurations: IPv4 IPv6 Dual-stack (IPv4 + IPv6) Supported network controllers depend on the selected stack and are summarized in the table below. For a detailed Container Network Interface (CNI) network provider feature comparison, refer to the OCP Networking documentation . Stack SDN OVN IPv4 Yes Yes IPv6 No Yes Dual-stack No Yes Note OVN is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. 11.1.1. Limitations 11.1.1.1. SDN With Single Node OpenShift (SNO), the SDN controller is not supported. The SDN controller does not support IPv6. 11.1.1.2. OVN-Kubernetes Please see the OVN-Kubernetes limitations section in the OCP documentation . 11.1.2. Cluster network The cluster network is a network from which every Pod deployed in the cluster gets its IP address. Given that the workload may live across many nodes forming the cluster, it's important for the network provider to be able to easily find an individual node based on the Pod's IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix . The host prefix specifies a length of the subnet assigned to each individual node in the cluster. 
An example of how a cluster may assign addresses for the multi-node cluster: --- clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 --- Creating a 3-node cluster using the snippet above may create the following network topology: Pods scheduled in node #1 get IPs from 10.128.0.0/23 Pods scheduled in node #2 get IPs from 10.128.2.0/23 Pods scheduled in node #3 get IPs from 10.128.4.0/23 Explaining OVN-K8s internals is out of scope for this document, but the pattern described above provides a way to route Pod-to-Pod traffic between different nodes without keeping a big list of mapping between Pods and their corresponding nodes. 11.1.3. Machine network The machine network is a network used by all the hosts forming the cluster to communicate with each other. This is also the subnet that must include the API and Ingress VIPs. 11.1.4. SNO compared to multi-node cluster Depending on whether you are deploying a Single Node OpenShift or a multi-node cluster, different values are mandatory. The table below explains this in more detail. Parameter SNO Multi-Node Cluster with DHCP mode Multi-Node Cluster without DHCP mode clusterNetwork Required Required Required serviceNetwork Required Required Required machineNetwork Auto-assign possible (*) Auto-assign possible (*) Auto-assign possible (*) apiVIP Forbidden Forbidden Required apiVIPs Forbidden Forbidden Required in 4.12 and later releases ingressVIP Forbidden Forbidden Required ingressVIPs Forbidden Forbidden Required in 4.12 and later releases (*) Auto assignment of the machine network CIDR happens if there is only a single host network. Otherwise you need to specify it explicitly. 11.1.5. Air-gapped environments The workflow for deploying a cluster without Internet access has some prerequisites which are out of scope of this document. You may consult the Zero Touch Provisioning the hard way Git repository for some insights. 11.2. DHCP VIP allocation The VIP DHCP allocation is a feature allowing users to skip the requirement of manually providing virtual IPs for API and Ingress by leveraging the ability of a service to automatically assign those IP addresses from the DHCP server. If you enable the feature, instead of using api_vips and ingress_vips from the cluster configuration, the service will send a lease allocation request and based on the reply it will use VIPs accordingly. The service will allocate the IP addresses from the Machine Network. Please note this is not an OpenShift Container Platform feature and it has been implemented in the Assisted Service to make the configuration easier. 11.2.1. Example payload to enable autoallocation --- { "vip_dhcp_allocation": true, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ], "machine_networks": [ { "cidr": "192.168.127.0/24" } ] } --- 11.2.2. Example payload to disable autoallocation --- { "api_vips": [ { "ip": "192.168.127.100" } ], "ingress_vips": [ { "ip": "192.168.127.101" } ], "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ] } --- 11.3. Additional resources Bare metal IPI documentation provides additional explanation of the syntax for the VIP addresses. 11.4. 
Understanding differences between User Managed Networking and Cluster Managed Networking User managed networking is a feature in the Assisted Installer that allows customers with non-standard network topologies to deploy OpenShift Container Platform clusters. Examples include: Customers with an external load balancer who do not want to use keepalived and VRRP for handling VIP addresses. Deployments with cluster nodes distributed across many distinct L2 network segments. 11.4.1. Validations There are various network validations happening in the Assisted Installer before it allows the installation to start. When you enable User Managed Networking, the following validations change: An L3 connectivity check (ICMP) is performed instead of an L2 check (ARP). 11.5. Static network configuration You may use static network configurations when generating or updating the discovery ISO. 11.5.1. Prerequisites You are familiar with NMState . 11.5.2. NMState configuration The NMState file in YAML format specifies the desired network configuration for the host. It uses logical names for the interfaces, which are replaced with the actual interface names at discovery time. 11.5.2.1. Example of NMState configuration --- dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254 --- 11.5.3. MAC interface mapping MAC interface map is an attribute that maps logical interfaces defined in the NMState configuration to the actual interfaces present on the host. The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should only contain an entry for the parent interfaces. 11.5.3.1. Example of MAC interface mapping --- mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ] --- 11.5.4. Additional NMState configuration examples The examples below are only meant to show a partial configuration. They are not meant to be used as-is, and you should always adjust them to the environment where they will be used. If used incorrectly, they may leave your machines with no network connectivity. 11.5.4.1. Tagged VLAN --- interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 --- 11.5.4.2. Network bond --- interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: all_slaves_active: delivered miimon: "140" slaves: - eth0 - eth1 name: bond0 state: up type: bond --- 11.6. Applying a static network configuration with the API You can apply a static network configuration using the Assisted Installer API. Prerequisites You have created an infrastructure environment using the API or have created a cluster using the UI. You have your infrastructure environment ID exported in your shell as USDINFRA_ENV_ID . You have credentials to use when accessing the API and have exported a token as USDAPI_TOKEN in your shell.
You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml . Procedure Create a temporary file /tmp/request-body.txt with the API request: --- jq -n --arg NMSTATE_YAML1 "USD(cat server-a.yaml)" --arg NMSTATE_YAML2 "USD(cat server-b.yaml)" \ '{ "static_network_config": [ { "network_yaml": USDNMSTATE_YAML1, "mac_interface_map": [{"mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0"}, {"mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1"}] }, { "network_yaml": USDNMSTATE_YAML2, "mac_interface_map": [{"mac_address": "02:00:00:9f:85:eb", "logical_nic_name": "eth1"}, {"mac_address": "02:00:00:c8:be:9b", "logical_nic_name": "eth0"}] } ] }' >> /tmp/request-body.txt --- Refresh the API token: USD source refresh-token Send the request to the Assisted Service API endpoint: --- curl -H "Content-Type: application/json" \ -X PATCH -d @/tmp/request-body.txt \ -H "Authorization: Bearer USD{API_TOKEN}" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID --- 11.7. Additional resources Applying a static network configuration with the UI 11.8. Converting to dual-stack networking Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets. 11.8.1. Prerequisites You are familiar with OVN-K8s documentation 11.8.2. Example payload for Single Node OpenShift --- { "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } --- 11.8.3. Example payload for an OpenShift Container Platform cluster consisting of many nodes --- { "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "api_vips": [ { "ip": "192.168.127.100" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334" } ], "ingress_vips": [ { "ip": "192.168.127.101" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7335" } ], "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } --- 11.8.4. Limitations The api_vips IP address and ingress_vips IP address settings must be of the primary IP address family when using dual-stack networking, which must be IPv4 addresses. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values. 11.9. Additional resources Understanding OpenShift networking OpenShift SDN - CNI network provider OVN-Kubernetes - CNI network provider Dual-stack Service configuration scenarios Installing on bare metal OCP . Cluster Network Operator configuration .
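The subnet split described in section 11.1.2 (a clusterNetwork.cidr of 10.128.0.0/14 with hostPrefix 23) can be sanity-checked with simple shell arithmetic. This is only an illustration of the address math, not an Assisted Installer command:

# Pod addresses available on each node: one /23 subnet per node
echo $(( 2 ** (32 - 23) ))   # 512 addresses per node

# Maximum number of per-node /23 subnets that fit into the /14 cluster network
echo $(( 2 ** (23 - 14) ))   # 512 node subnets

The three-node example in that section simply consumes the first three of those /23 blocks: 10.128.0.0/23, 10.128.2.0/23, and 10.128.4.0/23.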
[ "--- clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 ---", "--- { \"vip_dhcp_allocation\": true, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ], \"machine_networks\": [ { \"cidr\": \"192.168.127.0/24\" } ] } ---", "--- { \"api_vips\": [ { \"ip\": \"192.168.127.100\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" } ], \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ] } ---", "--- dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254 ---", "--- mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ] ---", "--- interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 ---", "--- interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: all_slaves_active: delivered miimon: \"140\" slaves: - eth0 - eth1 name: bond0 state: up type: bond ---", "--- jq -n --arg NMSTATE_YAML1 \"USD(cat server-a.yaml)\" --arg NMSTATE_YAML2 \"USD(cat server-b.yaml)\" '{ \"static_network_config\": [ { \"network_yaml\": USDNMSTATE_YAML1, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:2c:23:a5\", \"logical_nic_name\": \"eth0\"}, {\"mac_address\": \"02:00:00:68:73:dc\", \"logical_nic_name\": \"eth1\"}] }, { \"network_yaml\": USDNMSTATE_YAML2, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:9f:85:eb\", \"logical_nic_name\": \"eth1\"}, {\"mac_address\": \"02:00:00:c8:be:9b\", \"logical_nic_name\": \"eth0\"}] } ] }' >> /tmp/request-body.txt ---", "source refresh-token", "--- curl -H \"Content-Type: application/json\" -X PATCH -d @/tmp/request-body.txt -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID ---", "--- { \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] } ---", "--- { \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"api_vips\": [ { \"ip\": \"192.168.127.100\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7334\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7335\" } ], \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} 
], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] } ---" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/assisted_installer_for_openshift_container_platform/assembly_network-configuration
18.19. The Configuration Menu and Progress Screen
18.19. The Configuration Menu and Progress Screen Once you click Begin Installation at the Installation Summary screen, the progress screen appears. Red Hat Enterprise Linux reports the installation progress on the screen as it writes the selected packages to your system. Figure 18.37. Installing Packages For your reference, a complete log of your installation can be found in the /var/log/anaconda/anaconda.packaging.log file, once you reboot your system. If you chose to encrypt one or more partitions during partitioning setup, a dialog window with a progress bar will be displayed during the early stage of the installation process. This window informs you that the installer is attempting to gather enough entropy (random data) to ensure that the encryption is secure. This window will disappear after 256 bits of entropy are gathered, or after 10 minutes. You can speed up the gathering process by moving your mouse or randomly typing on the keyboard. After the window disappears, the installation process will continue. Figure 18.38. Gathering Entropy for Encryption While the packages are being installed, more configuration is required. Above the installation progress bar are the Root Password and User Creation menu items. The Root Password screen is used to configure the system's root account. This account can be used to perform critical system management and administration tasks. The same tasks can also be performed with a user account that has wheel group membership; if such a user account is created during installation, setting up a root password is not mandatory. Creating a user account is optional and can be done after installation, but it is recommended to do it on this screen. A user account is used for normal work and to access the system. Best practice suggests that you always access the system through a user account, not the root account. It is possible to disable access to the Root Password or Create User screens. To do so, use a Kickstart file which includes the rootpw --lock or user --lock commands. See Section 27.3.1, "Kickstart Commands and Options" for more information about these commands. 18.19.1. Set the Root Password Setting up a root account and password is an important step during your installation. The root account (also known as the superuser) is used to install packages, upgrade RPM packages, and perform most system maintenance. The root account gives you complete control over your system. For this reason, the root account is best used only to perform system maintenance or administration. See the Red Hat Enterprise Linux 7 System Administrator's Guide for more information about becoming root. Figure 18.39. Root Password Screen Note You must always set up at least one way to gain root privileges to the installed system: either using a root account, or by creating a user account with administrative privileges (member of the wheel group), or both. Click the Root Password menu item and enter your new password into the Root Password field. Red Hat Enterprise Linux displays the characters as asterisks for security. Type the same password into the Confirm field to ensure it is set correctly. After you set the root password, click Done to return to the User Settings screen.
The following are the requirements and recommendations for creating a strong root password: must be at least eight characters long may contain numbers, letters (upper and lower case) and symbols is case-sensitive and should contain a mix of cases something you can remember but that is not easily guessed should not be a word, abbreviation, or number associated with you, your organization, or found in a dictionary (including foreign languages) should not be written down; if you must write it down, keep it secure Note To change your root password after you have completed the installation, run the passwd command as root . If you forget the root password, see Section 32.1.3, "Resetting the Root Password" for instructions on how to use the rescue mode to set a new one. 18.19.2. Create a User Account To create a regular (non-root) user account during the installation, click User Settings on the progress screen. The Create User screen appears, allowing you to set up the regular user account and configure its parameters. Though recommended during installation, this step is optional and can be performed after the installation is complete. Note You must always set up at least one way to gain root privileges to the installed system: either using a root account, or by creating a user account with administrative privileges (member of the wheel group), or both. To leave the user creation screen after you have entered it, without creating a user, leave all the fields empty and click Done . Figure 18.40. User Account Configuration Screen Enter the full name and the user name in their respective fields. Note that the system user name must be shorter than 32 characters and cannot contain spaces. It is highly recommended to set up a password for the new account. When setting up a strong password even for a non-root user, follow the guidelines described in Section 18.19.1, "Set the Root Password" . Click the Advanced button to open a new dialog with additional settings. Figure 18.41. Advanced User Account Configuration By default, each user gets a home directory corresponding to their user name. In most scenarios, there is no need to change this setting. You can also manually define a system identification number for the new user and their default group by selecting the check boxes. The range for regular user IDs starts at the number 1000 . At the bottom of the dialog, you can enter a comma-separated list of additional groups to which the new user will belong. The new groups will be created in the system. To customize group IDs, specify the numbers in parentheses. Note Consider setting IDs of regular users and their default groups at a range starting at 5000 instead of 1000 . That is because the range reserved for system users and groups, 0 - 999 , might increase in the future and thus overlap with IDs of regular users. For creating users with custom IDs using kickstart, see user (optional) . For changing the minimum UID and GID limits after the installation, which ensures that your chosen UID and GID ranges are applied automatically on user creation, see the Users and Groups chapter of the System Administrator's Guide . Once you have customized the user account, click Save Changes to return to the User Settings screen.
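If you skip user creation during installation, the same result can be achieved later from a shell. The following is a minimal sketch using standard commands; the jsmith user name and the 5000 ID value are placeholders chosen to match the recommendation above.

# Change the root password after installation (run as root)
passwd

# Create a default group and a user with custom UID/GID values, adding the user
# to the wheel group so that the account has administrative privileges
groupadd -g 5000 jsmith
useradd -u 5000 -g 5000 -G wheel -c "John Smith" jsmith
passwd jsmith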
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-configuration-progress-menu-s390
Chapter 4. Adding RHEL metering to an AWS integration
Chapter 4. Adding RHEL metering to an AWS integration If you converted from a compatible third-party Linux distribution to Red Hat Enterprise Linux (RHEL) and purchased the RHEL for third party migration listing in Amazon Web Services (AWS), you can add RHEL metering to an AWS integration. With RHEL metering, Red Hat processes your bill to meter your hourly RHEL usage associated with a Red Hat offering in AWS. Procedure In AWS, tag your instances of RHEL that you want to meter. For more information about tagging your instances of RHEL in AWS, see Adding tags to an AWS resource . From Red Hat Hybrid Cloud Console , click Settings . Click Integrations . Click the more options menu for your integration. Click Edit . In Metered Product , select Red Hat Enterprise Linux from the drop-down to activate metering.
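Tagging can also be done with the AWS CLI instead of the console. The following is only a sketch: the instance ID, tag key, and tag value are placeholders, and the exact tag that your instances require for RHEL metering is described in the AWS tagging documentation linked above.

aws ec2 create-tags \
    --resources i-1234567890abcdef0 \
    --tags Key=<tag_key>,Value=<tag_value>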
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_amazon_web_services_aws_data_into_cost_management/updating-aws-int-for-rhel-metering_updating-int
Chapter 8. Directory Server in Red Hat Enterprise Linux
Chapter 8. Directory Server in Red Hat Enterprise Linux About Directory Server for Red Hat Enterprise Linux This section describes changes in the main server component for Red Hat Directory Server - the 389-ds-base package, which includes the LDAP server itself and command line utilities and scripts for its administration. This package is part of the Red Hat Enterprise Linux base subscription channel and therefore available on all Red Hat Enterprise Linux Server systems due to Red Hat Identity Management components which depend on it. Additional Red Hat Directory Server components, such as the Directory Server Console , are available in the rhel-x86_64-server-6-rhdirserv-9 additional subscription channel. A subscription to this channel is also required to obtain support for Red Hat Directory Server. Changes to the additional components in this channel are not described in this document. Red Hat Directory Server version 9 is available for Red Hat Enterprise Linux 6. See https://access.redhat.com/products/red-hat-directory-server/get-started-v9 for information about getting started with Directory Server 9, and https://access.redhat.com/documentation/en/red-hat-directory-server/?version=9 for full documentation. (BZ#1333801) Large amounts of skipped updates in fractional replication no longer cause performance loss During fractional replication, if a large number of skipped updates was present, the supplier could previously acquire a replica for a long time and fail to update the Replica Update Vector (RUV) at the end of the session. This then caused the session to evaluate the same skipped updates, resulting in poor performance. This bug has been fixed by adding a system subentry which is occasionally updated even if there are no applicable changes to be replicated, and the problem no longer occurs. (BZ#1259383) Fixed a crash while trimming the retro changelog When trimming the retro changelog ( retroCL ), entries are first deleted from the changelog itself and then also from the cache. The 389-ds-base server was, however, missing a check to verify that the entries are actually present in the cache, which could lead to the server attempting to delete nonexistent entries and subsequently crash on systems where not all changelog entries could fit in the cache due to its small size. A check has been added to make sure only entries actually present in the cache are being deleted, and the server no longer crashes when trimming the retro changelog. (BZ# 1244970 ) Fixed a crash in the backend add function When a callback at BE_TXN in the backend add function failed on a cached entry, the function was attempting to free the entry twice instead of removing it from the cache and then freeing it. This update adds remove and free code to the backend add function and the function no longer attempts to free cached entries twice. (BZ# 1265851 ) 389-ds-base server no longer crashes when attempting to replace a nonexistent attribute When a replace operation for a nonexistent attribute was performed without providing new values, the entry was stored with incorrect metadata: an empty deleted value without an attribute deletion change state number (CSN). This entry could then result in memory corruption and cause the server to terminate unexpectedly. To fix this bug, additional space to store metadata is now allocated and the server no longer crashes in this scenario. 
(BZ#1298496) 389-ds-base no longer hangs due to modified entry remaining locked During a modify operation, the modified entry is inserted into the entry cache and locked until the modified entry is returned. In cases where the entry is removed from the entry cache after it is committed but before the return operation, the modified entry previously remained locked, and any subsequent write operations on the same entry then caused the server to hang. This bug has been fixed by adding a flag so that the entry can be unlocked whether it is present in the entry cache or not, and the server no longer hangs in this situation. (BZ# 1273552 ) Fixed a deadlock during backend deletion in Directory Server Previously, transaction information was not passed to one of the database helper functions during backend deletion. This could result in a deadlock if a plug-in attempted to access data in the area locked by the transaction. With this update, transaction information is passed to all necessary database helper functions, and a deadlock no longer occurs in the described situation. (BZ# 1278585 ) ns-slapd no longer crashes on multiple asynchronous searches if a request is abandoned When multiple simple paged results searches were requested asynchronously in a persistent connection and one of the requests was abandoned, contention among the asynchronous requests could occur and cause the ns-slapd service to crash. This bug has been fixed and ns-slapd no longer crashes due to abandoned requests. (BZ#1247792) Simple paged results slots are now being correctly released after search failure Previously, if a simple paged results search failed in the Directory Server backend, its slot was not released, which caused the connection object to accumulate unreleased slots over time. This problem has been fixed, and slots are now correctly released in the event of a search failure. (BZ# 1290243 ) ns-slapd no longer crashes when freeing a search results object Previously, when Directory Server freed a search results object, there was a brief period of time before the freed information was set to the pagedresults handle. If the paged-results handle was released due to a timeout during this period, a double free event occurred, causing ns-slapd to crash. This problem has been eliminated and double free no longer occurs when freeing search results objects. (BZ# 1267296 ) Fixed a deadlock in asynchronous simple paged results requests A fix for a deadlock in the asynchronous simple paged results requests caused another self-deadlock due to a regression. To address this problem, a simple PR_Lock on a connection object has been replaced with a re-entrant PR_Monitor . As a result, the deadlock no longer occurs. (BZ# 1296694 ) Deletion of attributes without a value on the master server now replicates correctly Previously, when an attribute which does not have a value on the master server was deleted, the deletion was not replicated to other servers. The regression that caused this bug has been fixed and the change now replicates as expected. (BZ# 1251288 ) Directory Server no longer logs false attrlist_replace errors Previously, Directory Server could in some circumstances repeatedly log attrlist_replace error messages in error. This problem was caused by memory corruption due to a wrong memory copy function being used. The memory copy function has been replaced with memmove , which prevents memory corruption in this case, and the server no longer logs these error messages.
(BZ# 1267405 ) cleanAllRUV now clears the changelog completely Previously, after the cleanAllRUV task finished, the changelog still contained entries from the cleaned rid . As a consequence, the RUV could contain undesirable data, and the RUV element could be missing the replica URL. Now, cleanAllRUV cleans changelog completely as expected. (BZ# 1270002 ) Replication failures no longer result in missing changes after additional updates Previously, if a replicated update failed on the consumer side, it was never retried due to a bug in the replication asynchronous result thread which caused it to miss the failure before another update was replicated successfully. The second update also updated the consumer Replica Update Vector (RUV), and the first (failed) update was lost. In this release, replication failures cause the connection to close, stopping the replication session and preventing any subsequent updates from updating the consumer RUV, which allows the supplier to retry the operation in the replication session. No updates are therefore lost. (BZ# 1294770 ) Unnecessary keep alive entries no longer cause missing replication Previously, a keep alive entry was being created at too many opportunities during replication, potentially causing a race condition when adding the entry to the replica changelog and resulting in operations being dropped from the replication. With this update, unnecessary keep alive entry creation has been eliminated, and missing replication no longer occurs. (BZ# 1307152 ) nsMatchingRule is now correctly applied to attribute information Previously, when nsMatchingRule was dynamically updated in an index entry, the value was not applied to the attribute information. This caused the dbverify utility to report database corruption in error. In this release, nsMatchingRule changes are correctly applied to attribute information, and dbverify no longer falsely reports database corruption. (BZ# 1236656 ) Tombstone entries no longer create unnecessary index entries When an entry is deleted, its indexed attribute values are also removed from each index file. However, if the entry is turned into a tombstone entry, reindexing previously added the removed attribute value back into the index. This bug has been fixed, and index files no longer contain unnecessary key-value pairs generated by tombstone entries. (BZ# 1255290 ) Index is now updated properly when several values of the same attribute are deleted Previously, when several values of the same attribute were deleted using the ldapmodify command, and at least one of them was added again during the same operation, the equality index was not updated. As a consequence, an exact search for the re-added attribute value did not return the entry. The logic of the index code has been modified to update the index if at least one of the values in the entry changes, and the exact search for the re-added attribute value now returns the correct entry. (BZ#1282457) COS cache now correctly adds all definitions A bug fix related to the Class of Service (COS) object cache introduced a regression which caused it to stop adding definitions after the first one, instead of adding all definitions. This problem has been fixed and the COS cache now correctly adds all definitions as designed. (BZ# 1259546 ) Improved ACL performance Previously, unnecessarily complicated regular expressions were used in the Access Control List (ACL) implementation in Directory Server. 
These regular expressions have been removed and the ACL implementation reworked, resulting in improved performance. (BZ# 1236156 ) ntUserlastLogon and ntUserlastLogoff attributes are now synchronized between Directory Server and Active Directory Previously, WinSync account synchronization could not update the ntUserlastLogon and ntUserlastLogoff attributes in Directory Server when synchronizing with Active Directory. This bug has been fixed and these attributes are now being updated correctly based on the lastLogonTimestamp and lastLogoffTimestamp attributes in Active Directory. (BZ#1245237)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/bug_fixes_directory_server_in_red_hat_enterprise_linux
Chapter 14. FlowMetric configuration parameters
Chapter 14. FlowMetric configuration parameters FlowMetric is the API allowing to create custom metrics from the collected flow logs. 14.1. FlowMetric [flows.netobserv.io/v1alpha1] Description FlowMetric is the API allowing to create custom metrics from the collected flow logs. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and might reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers might infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object FlowMetricSpec defines the desired state of FlowMetric The provided API allows you to customize these metrics according to your needs. When adding new metrics or modifying existing labels, you must carefully monitor the memory usage of Prometheus workloads as this could potentially have a high impact. Cf https://rhobs-handbook.netlify.app/products/openshiftmonitoring/telemetry.md/#what-is-the-cardinality-of-a-metric To check the cardinality of all Network Observability metrics, run as promql : count({__name__=~"netobserv.*"}) by (__name__) . 14.1.1. .metadata Description Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Type object 14.1.2. .spec Description FlowMetricSpec defines the desired state of FlowMetric The provided API allows you to customize these metrics according to your needs. When adding new metrics or modifying existing labels, you must carefully monitor the memory usage of Prometheus workloads as this could potentially have a high impact. Cf https://rhobs-handbook.netlify.app/products/openshiftmonitoring/telemetry.md/#what-is-the-cardinality-of-a-metric To check the cardinality of all Network Observability metrics, run as promql : count({__name__=~"netobserv.*"}) by (__name__) . Type object Required metricName type Property Type Description buckets array (string) A list of buckets to use when type is "Histogram". The list must be parsable as floats. When not set, Prometheus default buckets are used. charts array Charts configuration, for the OpenShift Container Platform Console in the administrator view, Dashboards menu. direction string Filter for ingress, egress or any direction flows. When set to Ingress , it is equivalent to adding the regular expression filter on FlowDirection : 0|2 . When set to Egress , it is equivalent to adding the regular expression filter on FlowDirection : 1|2 . divider string When nonzero, scale factor (divider) of the value. Metric value = Flow value / Divider. filters array filters is a list of fields and values used to restrict which flows are taken into account. Oftentimes, these filters must be used to eliminate duplicates: Duplicate != "true" and FlowDirection = "0" . Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html .
flatten array (string) flatten is a list of list-type fields that must be flattened, such as Interfaces and NetworkEvents. Flattened fields generate one metric per item in that field. For instance, when flattening Interfaces on a bytes counter, a flow having Interfaces [br-ex, ens5] increases one counter for br-ex and another for ens5 . labels array (string) labels is a list of fields that should be used as Prometheus labels, also known as dimensions. From choosing labels results the level of granularity of this metric, and the available aggregations at query time. It must be done carefully as it impacts the metric cardinality (cf https://rhobs-handbook.netlify.app/products/openshiftmonitoring/telemetry.md/#what-is-the-cardinality-of-a-metric ). In general, avoid setting very high cardinality labels such as IP or MAC addresses. "SrcK8S_OwnerName" or "DstK8S_OwnerName" should be preferred over "SrcK8S_Name" or "DstK8S_Name" as much as possible. Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html . metricName string Name of the metric. In Prometheus, it is automatically prefixed with "netobserv_". remap object (string) Set the remap property to use different names for the generated metric labels than the flow fields. Use the origin flow fields as keys, and the desired label names as values. type string Metric type: "Counter" or "Histogram". Use "Counter" for any value that increases over time and on which you can compute a rate, such as Bytes or Packets. Use "Histogram" for any value that must be sampled independently, such as latencies. valueField string valueField is the flow field that must be used as a value for this metric. This field must hold numeric values. Leave empty to count flows rather than a specific value per flow. Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html . 14.1.3. .spec.charts Description Charts configuration, for the OpenShift Container Platform Console in the administrator view, Dashboards menu. Type array 14.1.4. .spec.charts[] Description Configures charts / dashboard generation associated to a metric Type object Required dashboardName queries title type Property Type Description dashboardName string Name of the containing dashboard. If this name does not refer to an existing dashboard, a new dashboard is created. queries array List of queries to be displayed on this chart. If type is SingleStat and multiple queries are provided, this chart is automatically expanded in several panels (one per query). sectionName string Name of the containing dashboard section. If this name does not refer to an existing section, a new section is created. If sectionName is omitted or empty, the chart is placed in the global top section. title string Title of the chart. type string Type of the chart. unit string Unit of this chart. Only a few units are currently supported. Leave empty to use generic number. 14.1.5. .spec.charts[].queries Description List of queries to be displayed on this chart. If type is SingleStat and multiple queries are provided, this chart is automatically expanded in several panels (one per query). Type array 14.1.6. 
.spec.charts[].queries[] Description Configures PromQL queries Type object Required legend promQL top Property Type Description legend string The query legend that applies to each timeseries represented in this chart. When multiple timeseries are displayed, you should set a legend that distinguishes each of them. It can be done with the following format: {{ Label }} . For example, if the promQL groups timeseries per label such as: sum(rate(USDMETRIC[2m])) by (Label1, Label2) , you might write as the legend: Label1={{ Label1 }}, Label2={{ Label2 }} . promQL string The promQL query to be run against Prometheus. If the chart type is SingleStat , this query should only return a single timeseries. For other types, a top 7 is displayed. You can use USDMETRIC to refer to the metric defined in this resource. For example: sum(rate(USDMETRIC[2m])) . To learn more about promQL , refer to the Prometheus documentation: https://prometheus.io/docs/prometheus/latest/querying/basics/ top integer Top N series to display per timestamp. Does not apply to SingleStat chart type. 14.1.7. .spec.filters Description filters is a list of fields and values used to restrict which flows are taken into account. Oftentimes, these filters must be used to eliminate duplicates: Duplicate != "true" and FlowDirection = "0" . Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html . Type array 14.1.8. .spec.filters[] Description Type object Required field matchType Property Type Description field string Name of the field to filter on matchType string Type of matching to apply value string Value to filter on. When matchType is Equal or NotEqual , you can use field injection with USD(SomeField) to refer to any other field of the flow.
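Putting several of these fields together, the following is a hypothetical FlowMetric resource that counts egress bytes per source namespace and owner while filtering out duplicate flows. The metadata name, the netobserv namespace, the metric name, and the label choices are illustrative only; the field names are taken from the flows format reference linked above.

oc apply -f - <<'EOF'
apiVersion: flows.netobserv.io/v1alpha1
kind: FlowMetric
metadata:
  name: egress-bytes-per-namespace   # placeholder name
  namespace: netobserv               # assumes the usual Network Observability namespace
spec:
  metricName: namespace_egress_bytes_total   # exposed as netobserv_namespace_egress_bytes_total
  type: Counter
  valueField: Bytes
  direction: Egress
  labels:
    - SrcK8S_Namespace
    - SrcK8S_OwnerName
  filters:
    - field: Duplicate
      matchType: NotEqual
      value: "true"
EOF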
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_observability/flowmetric-api
Chapter 9. Installing on a single node
Chapter 9. Installing on a single node 9.1. Preparing to install on a single node 9.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have read the documentation on selecting a cluster installation method and preparing it for users . 9.1.2. About OpenShift on a single node You can create a single-node cluster with standard installation methods. OpenShift Container Platform on a single node is a specialized installation that requires the creation of a special ignition configuration ISO. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability. Important The use of OpenShiftSDN with single-node OpenShift is deprecated. OVN-Kubernetes is the default networking solution for single-node OpenShift deployments. 9.1.3. Requirements for installing OpenShift on a single node Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large scale clusters. However, you must address the following requirements: Administration host: You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation. Bare metal installation: Installing OpenShift Container Platform on a single node on bare metal requires that you specify the platform.none: {} parameter in the install-config.yaml configuration file. Production-grade server: Installing OpenShift Container Platform on a single node requires a server with sufficient resources to run OpenShift Container Platform services and a production workload. Table 9.1. Minimum resource requirements Profile vCPU Memory Storage Minimum 8 vCPU cores 32GB of RAM 120GB Note One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs The server must have a Baseboard Management Controller (BMC) when booting with virtual media. Networking: The server must have access to the internet or access to a local registry if it is not connected to a routable network. The server must have a DHCP reservation or a static IP address for the Kubernetes API, Ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN): Table 9.2. Required DNS records Usage FQDN Description Kubernetes API api.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record. This record must be resolvable by clients external to the cluster. Internal API api-int.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster. Ingress route *.apps.<cluster_name>.<base_domain> Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by clients external to the cluster. Without persistent IP addresses, communications between the apiserver and etcd might fail. 9.2. Installing OpenShift on a single node 9.2.1. 
Generating the discovery ISO with the Assisted Installer Installing OpenShift Container Platform on a single node requires a discovery ISO, which the Assisted Installer (AI) can generate with the cluster name, base domain, Secure Shell (SSH) public key, and pull secret. Procedure On the administration node, open a browser and navigate to Install OpenShift with the Assisted Installer . Click Create Cluster to create a new cluster. In the Cluster name field, enter a name for the cluster. In the Base domain field, enter a base domain. For example: All DNS records must be subdomains of this base domain and include the cluster name. You cannot change the base domain after cluster installation. For example: Select Install single node OpenShift (SNO) . Read the 4.9 release notes, which outline some of the limitations for installing OpenShift Container Platform on a single node. Select the OpenShift Container Platform version. Optional: Edit the pull secret. Click Next . Click Generate Discovery ISO . Select Full image file to boot with a USB drive or PXE. Select Minimal image file to boot with virtual media. Add the SSH public key of the administration node to the Public key field. Click Generate Discovery ISO . Download the discovery ISO. Make a note of the discovery ISO URL for installing with virtual media. 9.2.2. Generating the discovery ISO manually Installing OpenShift Container Platform on a single node requires a discovery ISO, which you can generate with the following procedure. Procedure Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz > oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Set the OpenShift Container Platform version: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version. For example, latest-4.9 Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Retrieve the RHCOS ISO URL: USD ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep x86_64 | grep iso | cut -d\" -f4) Download the RHCOS ISO: USD curl -L USDISO_URL > rhcos-live.x86_64.iso Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: networkType: OVNKubernetes clusterNetwork: - cidr: <IP_address>/<prefix> 5 hostPrefix: <prefix> 6 serviceNetwork: - <IP_address>/<prefix> 7 platform: none: {} bootstrapInPlace: installationDisk: <path_to_install_drive> 8 pullSecret: '<pull_secret>' 9 sshKey: | <ssh_key> 10 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the clusterNetwork CIDR. 6 Set the clusterNetwork host prefix. Pods receive their IP addresses from this pool. 7 Set the serviceNetwork CIDR. Services receive their IP addresses from this pool. 8 Set the path to the installation disk drive. 
9 Copy the pull secret from the Red Hat OpenShift Cluster Manager . In step 1, click Download pull secret and add the contents to this configuration setting. 10 Add the public SSH key from the administration host so that you can log in to the cluster after installation. Generate OpenShift Container Platform assets: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Embed the ignition data into the RHCOS ISO: USD alias coreos-installer='podman run --privileged --pull always --rm \ -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data \ -w /data quay.io/coreos/coreos-installer:release' USD cp ocp/bootstrap-in-place-for-live-iso.ign iso.ign USD coreos-installer iso ignition embed -fi iso.ign rhcos-live.x86_64.iso 9.2.3. Installing with USB media Installing with USB media involves creating a bootable USB drive with the discovery ISO on the administration node. Booting the server with the USB drive prepares the node for a single node installation. Procedure On the administration node, insert a USB drive into a USB port. Create a bootable USB drive: # dd if=<path-to-iso> of=<path/to/usb> status=progress For example: # dd if=discovery_image_sno.iso of=/dev/sdb status=progress After the ISO is copied to the USB drive, you can use the USB drive to install OpenShift Container Platform. On the server, insert the USB drive into a USB port. Reboot the server and enter the BIOS settings upon reboot. Change the boot drive order so that the USB drive boots first. Save and exit the BIOS settings. The server will boot with the discovery image. 9.2.4. Monitoring the installation with the Assisted Installer If you created the ISO using the Assisted Installer, use this procedure to monitor the installation. Procedure On the administration host, return to the browser and refresh the page. If necessary, reload the Install OpenShift with the Assisted Installer page and select the cluster name. Click Next until you reach step 3, Networking . Select a subnet from the available subnets. Keep Use the same host discovery SSH key checked. You can change the SSH public key, if necessary. Click Next to proceed to the Review and Create step. Click Install cluster . Monitor the installation's progress. Watch the cluster events. After the installation process finishes writing the discovery image to the server's drive, the server will restart. Remove the USB drive and reset the BIOS to boot to the server's local media rather than the USB drive. The server will restart several times, deploying the control plane. 9.2.5. Monitoring the installation manually If you created the ISO manually, use this procedure to monitor the installation. Procedure Monitor the installation: USD ./openshift-install --dir=ocp wait-for install-complete The server will restart several times while deploying the control plane. Optional: After the installation is complete, check the environment: USD export KUBECONFIG=ocp/auth/kubeconfig USD oc get nodes USD oc get clusterversion
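Before booting the server with the discovery image, it can be worth confirming that the DNS records required in section 9.1.3 resolve from the administration host. A minimal check, where the cluster name and base domain are placeholders:

dig +short api.<cluster_name>.<base_domain>
dig +short api-int.<cluster_name>.<base_domain>
dig +short test.apps.<cluster_name>.<base_domain>   # any name under *.apps should resolve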
[ "example.com", "<cluster-name>.example.com", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz > oc.tar.gz", "tar zxf oc.tar.gz", "chmod +x oc", "OCP_VERSION=<ocp_version> 1", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep x86_64 | grep iso | cut -d\\\" -f4)", "curl -L USDISO_URL > rhcos-live.x86_64.iso", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: networkType: OVNKubernetes clusterNetwork: - cidr: <IP_address>/<prefix> 5 hostPrefix: <prefix> 6 serviceNetwork: - <IP_address>/<prefix> 7 platform: none: {} bootstrapInPlace: installationDisk: <path_to_install_drive> 8 pullSecret: '<pull_secret>' 9 sshKey: | <ssh_key> 10", "mkdir ocp", "cp install-config.yaml ocp", "./openshift-install --dir=ocp create single-node-ignition-config", "alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data -w /data quay.io/coreos/coreos-installer:release'", "cp ocp/bootstrap-in-place-for-live-iso.ign iso.ign", "coreos-installer iso ignition embed -fi iso.ign rhcos-live.x86_64.iso", "dd if=<path-to-iso> of=<path/to/usb> status=progress", "dd if=discovery_image_sno.iso of=/dev/sdb status=progress", "./openshift-install --dir=ocp wait-for install-complete", "export KUBECONFIG=ocp/auth/kubeconfig", "oc get nodes", "oc get clusterversion" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/installing-on-a-single-node
Chapter 7. Developing collections
Chapter 7. Developing collections Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. Red Hat provides Ansible Content Collections on Ansible automation hub that contain both Red Hat Ansible Certified Content and Ansible validated content. If you have installed private automation hub, you can create collections for your organization and push them to private automation hub so that you can use them in job templates in Ansible Automation Platform. You can use collections to package and distribute plug-ins. These plug-ins are written in Python. You can also create collections to package and distribute Ansible roles, which are expressed in YAML. You can also include playbooks and custom plug-ins that are required for these roles in the collection. Typically, collections of roles are distributed for use within your organization.
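As a rough illustration of that workflow, the following sketch scaffolds a collection, builds it, and publishes the resulting archive to a private automation hub. The namespace, collection name, and hub URL are placeholders, and publishing assumes that an API token for the hub is already configured:

ansible-galaxy collection init my_namespace.my_collection        # creates galaxy.yml, roles/, plugins/, ...
cd my_namespace/my_collection
# add roles, plugins, and playbooks, then build the distributable archive
ansible-galaxy collection build
# publish to a private automation hub (placeholder URL)
ansible-galaxy collection publish my_namespace-my_collection-1.0.0.tar.gz \
  --server https://hub.example.com/api/galaxy/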
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/developing_ansible_automation_content/devtools-develop-collections_develop-automation-content
Chapter 27. Storage [operator.openshift.io/v1]
Chapter 27. Storage [operator.openshift.io/v1] Description Storage provides a means to configure an operator to manage the cluster storage operator. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 27.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 27.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. vsphereStorageDriver string VSphereStorageDriver indicates the storage driver to use on VSphere clusters. Once this field is set to CSIWithMigrationDriver, it can not be changed. If this is empty, the platform will choose a good default, which may change over time without notice. The current default is CSIWithMigrationDriver and may not be changed. DEPRECATED: This field will be removed in a future release. 27.1.2. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. 
generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 27.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 27.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 27.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 27.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 27.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/storages DELETE : delete collection of Storage GET : list objects of kind Storage POST : create a Storage /apis/operator.openshift.io/v1/storages/{name} DELETE : delete a Storage GET : read the specified Storage PATCH : partially update the specified Storage PUT : replace the specified Storage /apis/operator.openshift.io/v1/storages/{name}/status GET : read status of the specified Storage PATCH : partially update status of the specified Storage PUT : replace status of the specified Storage 27.2.1. /apis/operator.openshift.io/v1/storages HTTP method DELETE Description delete collection of Storage Table 27.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Storage Table 27.2. HTTP responses HTTP code Reponse body 200 - OK StorageList schema 401 - Unauthorized Empty HTTP method POST Description create a Storage Table 27.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.4. Body parameters Parameter Type Description body Storage schema Table 27.5. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 202 - Accepted Storage schema 401 - Unauthorized Empty 27.2.2. /apis/operator.openshift.io/v1/storages/{name} Table 27.6. Global path parameters Parameter Type Description name string name of the Storage HTTP method DELETE Description delete a Storage Table 27.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 27.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Storage Table 27.9. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Storage Table 27.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.11. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Storage Table 27.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.13. Body parameters Parameter Type Description body Storage schema Table 27.14. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 401 - Unauthorized Empty 27.2.3. /apis/operator.openshift.io/v1/storages/{name}/status Table 27.15. Global path parameters Parameter Type Description name string name of the Storage HTTP method GET Description read status of the specified Storage Table 27.16. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Storage Table 27.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.18. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Storage Table 27.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 27.20. Body parameters Parameter Type Description body Storage schema Table 27.21. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 401 - Unauthorized Empty
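In practice the Storage operator configuration is usually managed with oc rather than by calling these endpoints directly. The following is a minimal sketch, assuming cluster-admin access; the object is cluster scoped and its canonical name is cluster:

# inspect the current cluster storage operator configuration
oc get storages.operator.openshift.io cluster -o yaml

# example spec change: raise the operand log level while troubleshooting
oc patch storages.operator.openshift.io cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'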
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operator_apis/storage-operator-openshift-io-v1
Chapter 8. Using the Ignition tool for the RHEL for Edge Simplified Installer images
Chapter 8. Using the Ignition tool for the RHEL for Edge Simplified Installer images RHEL for Edge uses the Ignition tool to inject the user configuration into the images at an early stage of the boot process. The user configuration that the Ignition tool injects includes: The user configuration. Writing files, such as regular files, and systemd units. On the first boot, Ignition reads its configuration either from a remote URL or a file embedded in the simplified installer ISO. Then, Ignition applies that configuration into the image. 8.1. Creating an Ignition configuration file The Butane tool is the preferred option to create an Ignition configuration file. Butane consumes a Butane Config YAML file and produces an Ignition Config in the JSON format. The JSON file is used by a system on its first boot. The Ignition Config applies the configuration in the image, such as user creation, and systemd units installation. Prerequisites You have installed the Butane tool version v0.17.0: Procedure Create a Butane Config file and save it in the .bu format. You must specify the variant entry as r4e , and the version entry as 1.0.0 , for RHEL for Edge images. The butane r4e variant on version 1.0.0 targets Ignition spec version 3.3.0 . The following is a Butane Config YAML file example: Run the following command to consume the Butane Config YAML file and generate an Ignition Config in the JSON format: After you run the Butane Config YAML file to check and generate the Ignition Config JSON file, you might get warnings when using unsupported fields, like partitions, for example. You can fix those fields and rerun the check. You now have an Ignition JSON configuration file that you can use to customize your blueprint. Additional resources RHEL for Edge Specification v1.0.0 8.2. Creating a blueprint in the GUI with support to Ignition When building a Simplified Installer image, you can customize your blueprint by entering the following details in the Ignition page of the blueprint: Firstboot URL - You must enter a URL that points to the Ignition configuration that will be fetched during the first boot. It can be used for both the raw image and simplified installer image. Embedded Data - You must provide the base64 encoded Ignition Configuration file. It can be used only for the Simplified Installer image. To customize your blueprint for a simplified RHEL for Edge image with support to Ignition configuration using the Ignition blueprint customization, follow the steps: Prerequisites You have opened the image builder app from the web console in a browser. See Accessing the image builder GUI in the RHEL web console . To fully support the embedded section, coreos-installer-dracut has to be able to define -ignition-url | -ignition-file based on the presence of the OSBuild's file. Procedure Click Create Blueprint in the upper-right corner. A dialog wizard with fields for the blueprint name and description opens. On the Details page: Enter the name of the blueprint and, optionally, its description. Click Next . On the Ignition page, complete the following steps: On the Firstboot URL field, enter the URL that points to the Ignition configuration to be fetched during the first boot. On the Embedded Data field, drag or upload the base64 encoded Ignition Configuration file. Click Next . Review the image details and click Create . The image builder dashboard view opens, listing the existing blueprints. You can use the blueprint you created to build your Simplified Installer image. 
See Creating a RHEL for Edge Simplified Installer image using image builder CLI . 8.3. Creating a blueprint with support to Ignition using the CLI When building a simplified installer image, you can customize your blueprint by adding the customizations.ignition section to it. With that, you can create either a simplified installer image or a raw image that you can use for the bare metal platforms. The customizations.ignition customization in the blueprint enables the configuration files to be used in edge-simplified-installer ISO and edge-raw-image images. For the edge-simplified-installer ISO image, you can customize the blueprint to embed an Ignition configuration file that will be included in the ISO image. For example: You must provide a base64 encoded Ignition configuration file. For both the edge-simplified-installer ISO image and also the edge-raw-image , you can customize the blueprint, by defining a URL that will be fetched to obtain the Ignition configuration at the first boot. For example: You must enter a URL that points to the Ignition configuration that will be fetched during the first boot. To customize your blueprint for a Simplified RHEL for Edge image with support to Ignition configuration, follow the steps: Prerequisites If using the [customizations.ignition.embedded] customization, you must create an Ignition configuration file. If using the [customizations.ignition.firstboot] customization, you must have created a container whose URL points to the Ignition configuration that will be fetched during the first boot. The blueprint customization [customizations.ignition.embedded] section enables coreos-installer-dracut to define -ignition-url | -ignition-file based on the presence of the osbuild's file. Procedure Create a plain text file in the Tom's Obvious, Minimal Language (TOML) format, with the following content: Where: The name is the name and description is the description for your blueprint. The version is the version number according to the Semantic Versioning scheme. The modules and packages describe the package name and matching version glob to be installed into the image. For example, the package name = "tmux" and the matching version glob is version = "3.3a" . Notice that currently there are no differences between packages and modules. The groups are packages groups to be installed into the image. For example groups = "anaconda-tools" group package. If you do not know the modules and groups, leave them empty. Warning If you want to create a user with Ignition, you cannot use the FDO customizations to create a user at the same time. You can create users using Ignition and copy configuration files using FDO. But if you are creating users, create them using Ignition or FDO, but not both at the same time. Push (import) the blueprint to the image builder server: List the existing blueprints to check whether the created blueprint is successfully pushed and exists. Check whether the components and versions listed in the blueprint and their dependencies are valid: You can use the blueprint you created to build your Simplified Installer image. See Creating a RHEL for Edge Simplified Installer image using image builder CLI . Additional resources RHEL for Edge Specification v1.0.0
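To tie the pieces together, the following sketch encodes an Ignition file for the [customizations.ignition.embedded] section and then starts a simplified-installer compose. The Ignition file name, OSTree ref, and repository URL are placeholders for your environment:

# produce the base64 string to paste into the blueprint's embedded config
base64 -w0 config.ign

# push the blueprint and build the simplified installer image against an existing OSTree commit
composer-cli blueprints push simplified-installer-blueprint.toml
composer-cli compose start-ostree --ref rhel/9/x86_64/edge \
  --url http://10.0.2.2:8080/repo/ simplified-installer-blueprint edge-simplified-installer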
[ "sudo dnf/yum install -y butane", "variant: r4e version: 1.0.0 ignition: config: merge: - source: http://192.168.122.1:8000/sample.ign passwd: users: - name: core groups: - wheel password_hash: password_hash_here ssh_authorized_keys: - ssh-ed25519 some-ssh-key-here storage: files: - path: /etc/NetworkManager/system-connections/enp1s0.nmconnection contents: inline: | [connection] id=enp1s0 type=ethernet interface-name=enp1s0 [ipv4] address1=192.168.122.42/24,192.168.122.1 dns=8.8.8.8; dns-search= may-fail=false method=manual mode: 0600 - path: /usr/local/bin/startup.sh contents: inline: | #!/bin/bash echo \"Hello, World!\" mode: 0755 systemd: units: - name: hello.service contents: | [Unit] Description=A hello world [Install] WantedBy=multi-user.target enabled: true - name: fdo-client-linuxapp.service dropins: - name: log_trace.conf contents: | [Service] Environment=LOG_LEVEL=trace", "./path/butane example.bu {\"ignition\":{\"config\":{\"merge\":[{\"source\":\"http://192.168.122.1:8000/sample.ign\"}]},\"timeouts\":{\"httpTotal\":30},\"version\":\"3.3.0\"},\"passwd\":{\"users\":[{\"groups\":[\"wheel\"],\"name\":\"core\",\"passwordHash\":\"password_hash_here\",\"sshAuthorizedKeys\":[\"ssh-ed25519 some-ssh-key-here\"]}]},\"storage\":{\"files\":[{\"path\":\"/etc/NetworkManager/system-connections/enp1s0.nmconnection\",\"contents\":{\"compression\":\"gzip\",\"source\":\"data:;base64,H4sIAAAAAAAC/0yKUcrCMBAG3/csf/ObUKQie5LShyX5SgPNNiSr0NuLgiDzNMPM8VBFtHzoQjkxtPp+ITsrGLahKYyyGtoqEYNKwfeZc32OC0lKDb179rfg/HVyPgQ3hv8w/v0WT0k7T+7D/S1Dh7S4MRU5h1XyzqvsHVRg25G4iD5kp1cAAAD//6Cvq2ihAAAA\"},\"mode\":384},{\"path\":\"/usr/local/bin/startup.sh\",\"contents\":{\"source\":\"data:;base64,IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8sIFdvcmxkISIK\"},\"mode\":493}]},\"systemd\":{\"units\":[{\"contents\":\"[Unit]\\nDescription=A hello world\\n[Install]\\nWantedBy=multi-user.target\",\"enabled\":true,\"name\":\"hello.service\"},{\"dropins\":[{\"contents\":\"[Service]\\nEnvironment=LOG_LEVEL=trace\\n\",\"name\":\"log_trace.conf\"}],\"name\":\"fdo-client-linuxapp.service\"}]}}", "[customizations.ignition.embedded] config = \"eyJ --- BASE64 STRING TRIMMED --- 19fQo=\"", "[customizations.ignition.firstboot] url = \"http://your_server/ignition_configuration.ig\"", "name = \"simplified-installer-blueprint\" description = \"Blueprint with Ignition for the simplified installer image\" version = \"0.0.1\" packages = [] modules = [] groups = [] distro = \"\" [customizations.ignition.embedded] config = \"eyJ --- BASE64 STRING TRIMMED --- 19fQo=\"", "composer-cli blueprints push blueprint-name .toml", "composer-cli blueprints show blueprint-name", "composer-cli blueprints depsolve blueprint-name" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_installing_and_managing_rhel_for_edge_images/assembly_using-the-ignition-tool-for-the-rhel-for-edge-simplified-installer-images_composing-installing-managing-rhel-for-edge-images
6.4. User Private Groups
6.4. User Private Groups Red Hat Enterprise Linux uses a user private group ( UPG ) scheme, which makes UNIX groups easier to manage. A UPG is created whenever a new user is added to the system. A UPG has the same name as the user for which it was created and that user is the only member of the UPG. UPGs make it safe to set default permissions for a newly created file or directory which allow both the user and that user's group to make modifications to the file or directory. The setting which determines what permissions are applied to a newly created file or directory is called a umask and is configured in the /etc/bashrc file. Traditionally on UNIX systems, the umask is set to 022 , which allows only the user who created the file or directory to make modifications. Under this scheme, all other users, including members of the creator's group , are not allowed to make any modifications. However, under the UPG scheme, this "group protection" is not necessary since every user has their own private group. 6.4.1. Group Directories Many IT organizations like to create a group for each major project and then assign people to the group if they need to access that project's files. Using this traditional scheme, managing files has been difficult; when someone creates a file, it is associated with the primary group to which they belong. When a single person works on multiple projects, it is difficult to associate the right files with the right group. Using the UPG scheme, however, groups are automatically assigned to files created within a directory with the setgid bit set. The setgid bit makes managing group projects that share a common directory very simple because any files a user creates within the directory are owned by the group which owns the directory. Let's say, for example, that a group of people work on files in the /usr/lib/emacs/site-lisp/ directory. Some people are trusted to modify the directory, but certainly not everyone is trusted. First create an emacs group, as in the following command: To associate the contents of the directory with the emacs group, type: Now, it is possible to add the proper users to the group with the gpasswd command: To allow users to create files within the directory, use the following command: When a user creates a new file, it is assigned the group of the user's default private group. Next, set the setgid bit, which assigns everything created in the directory the same group permission as the directory itself ( emacs ). Use the following command: At this point, because each user's default umask is 002, all members of the emacs group can create and edit files in the /usr/lib/emacs/site-lisp/ directory without the administrator having to change file permissions every time users write new files.
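A short worked example of the effect, assuming a hypothetical user juan who is a member of the emacs group and has the default umask of 002:

# as the user juan, inside the setgid directory
cd /usr/lib/emacs/site-lisp
touch proposal.el
ls -l proposal.el
# -rw-rw-r--. 1 juan emacs 0 ... proposal.el
# the file inherits the emacs group from the directory and is group writable,
# so other members of the emacs group can edit it without help from the administrator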
[ "/usr/sbin/groupadd emacs", "chown -R root.emacs /usr/lib/emacs/site-lisp", "/usr/bin/gpasswd -a <username> emacs", "chmod 775 /usr/lib/emacs/site-lisp", "chmod 2775 /usr/lib/emacs/site-lisp" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-users-groups-private-groups
Chapter 98. DockerOutput schema reference
Chapter 98. DockerOutput schema reference Used in: Build The type property is a discriminator that distinguishes use of the DockerOutput type from ImageStreamOutput . It must have the value docker for the type DockerOutput . Property Property type Description image string The full name which should be used for tagging and pushing the newly built image. For example quay.io/my-organization/my-custom-connect:latest . Required. pushSecret string Container Registry Secret with the credentials for pushing the newly built image. additionalKanikoOptions string array Configures additional options which will be passed to the Kaniko executor when building the new Connect image. Allowed options are: --customPlatform, --custom-platform, --insecure, --insecure-pull, --insecure-registry, --log-format, --log-timestamp, --registry-mirror, --reproducible, --single-snapshot, --skip-tls-verify, --skip-tls-verify-pull, --skip-tls-verify-registry, --verbosity, --snapshotMode, --use-new-run, --registry-certificate, --registry-client-cert. These options are used only when the new image is built with the Kaniko executor; they are ignored on OpenShift, where the image is built with OpenShift builds instead. The options are described in the Kaniko GitHub repository . Changing this field does not trigger a new build of the Kafka Connect image. type string Must be docker .
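For context, the following is a minimal sketch of how DockerOutput is typically used in a KafkaConnect resource on OpenShift. The registry, credentials, bootstrap address, and plugin artifact are placeholders; only the build.output block reflects the schema described above:

# registry credentials referenced by pushSecret (placeholder names and registry)
oc create secret docker-registry my-registry-credentials \
  --docker-server=quay.io --docker-username=my-user --docker-password=my-token

cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  build:
    output:
      type: docker                                              # DockerOutput
      image: quay.io/my-organization/my-custom-connect:latest
      pushSecret: my-registry-credentials
    plugins:
      - name: echo-sink
        artifacts:
          - type: tgz
            url: https://example.com/echo-sink.tgz
EOF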
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-dockeroutput-reference
Chapter 5. Configuring the JBoss Server Migration Tool
Chapter 5. Configuring the JBoss Server Migration Tool 5.1. Configuring the JBoss Server Migration Tool Properties You use properties to configure the JBoss Server Migration Tool logging and reporting output and to determine which components and configurations you want to migrate. You can configure these properties using a combination of the following methods. You can configure the properties file defined within the tool . You can pass user properties on the command line . You can configure system environment variables . 5.1.1. Configure the JBoss Server Migration Tool Using the Tool Properties You can configure the JBoss Server Migration Tool using the environment.properties file located in the EAP_HOME /migration/configuration/ directory. This standard Java properties file provides the default values for all of the valid properties that can be configured when migrating to the target server. To change a default value, remove the # comment character that precedes the property and set it to the value you need. 5.1.2. Configure the JBoss Server Migration Tool Using User Properties If you prefer, you can create a standard Java properties file that defines JBoss Server Migration Tool configuration properties and pass its path on the command line using the --environment or -e argument. This path can be an absolute path or a path relative to the current directory. Properties defined in files passed on the command line using the --environment or -e argument override the ones defined in the EAP_HOME /migration/configuration/environment.properties file. 5.1.3. Configure the JBoss Server Migration Tool Using System Properties You can configure the JBoss Server Migration Tool by passing system properties on the command line using the following syntax. The system property name should be specified as jboss.server.migration. concatenated with the environment property name. The following example demonstrates how to specify the name of the XML report as migration-report.xml when starting the JBoss Server Migration Tool. Environment properties specified on the command line override both user configuration properties and tool configuration properties. Warning Configuring the JBoss Server Migration Tool by passing system properties on the command line does not currently work for the following properties. report.html.fileName report.html.maxTaskPathSizeToDisplaySubtasks report.html.templateFileName report.summary.maxTaskPathSizeToDisplaySubtasks report.xml.fileName This is a known issue that should be addressed in the version of JBoss EAP. For more information about this issue, see JBEAP-12901 . 5.2. Configuring Logging for JBoss Server Migration Tool The JBoss Server Migration Tool uses the JBoss Logging framework to log the progress of the migration. Results are written to the console and also to a file named migration.log , which is located in the EAP_HOME /migration/logs/ directory. This log file is created if it does not already exist, and its content is overwritten on each subsequent execution of the tool. The logging configuration is provided by the EAP_HOME /migration/logging.properties file. You can modify this configuration file or you can specify an alternative logging configuration file by using the logging.configuration system property on the command line. 5.3. Configuring Modules Migration The JBoss Server Migration Tool can migrate any module installed in the source server as long as that module is not already installed on the target server. 
Module migration can be done explicitly by request, or implicitly because another module or migrated server configuration depends on it. 5.3.1. Modules Environment Properties You can control whether a module should be migrated or not by using the modules.includes and modules.excludes environment properties. The syntax for a module ID is name:slot . The :slot is optional and if it is not specified defaults to main . A module whose ID is referenced by the modules.excludes environment property is never migrated. A module whose ID is referenced by the modules.includes environment property is always migrated, unless it is referenced by the modules.excludes environment property. 5.3.2. Configuring Modules Properties The environment properties used to migrate modules can be configured in any of the following ways: You can configure the properties in the tool's EAP_HOME /migration/configuration/environment.properties file. You can include the above properties in your own custom properties file, and then pass the properties file name on the command line using the --environment argument. You can pass the information on the command line using a system property. The environment property names must be prefixed with jboss.server.migration. , for example: Warning The JBoss Server Migration Tool does not verify that the source module is compatible with the target server. An incompatible migrated module can cause the target server to malfunction or not work at all. A module can be incompatible due to a dependency on a module that is installed on both the source and target servers, but includes or exposes different resources on each one. 5.4. Configuring Reporting for JBoss Server Migration Tool 5.4.1. Configuring the Task Summary Log You can customize the generation of the Task Summary using the following environment property. Property Name Type Property Description and Default Value report.summary.maxTaskPathSizeToDisplaySubtasks Integer Include migrated subtasks in the summary where the level is less than or equal to the specified integer. Defaults to 5 . 5.4.2. Configuring the HTML Report You can customize the HTML report using the following environment properties. Property Name Type Property Description and Default Value report.html.fileName String The name of the HTML report file. If not set, the report is not generated. Defaults to EAP_HOME /migration/reports/migration-report.html . report.html.maxTaskPathSizeToDisplaySubtasks Integer Include migrated subtasks in the summary where the level is less than or equal to the specified integer. Defaults to 4 . report.html.templateFileName String The HTML report template file name. Defaults to migration-report-template.html . 5.4.3. Configuring the XML Report You can customize the XML report using the following environment properties. Property Name Type Property Description and Default Value report.xml.fileName String The name of the XML report file. If not set, the report is not generated. Defaults to EAP_HOME /migration/reports/migration-report.xml . 5.5. Configuring the Migration of the Standalone Server Configuration You can configure the JBoss Server Migration Tool to skip the migration of a standalone server entirely, to provide the configuration file names that you want to migrate, or to provide alternate paths for the source or target server's base and configuration directories. You can customize the migration of the standalone server configuration using the following environment properties. Table 5.1. 
Standalone Server Migration Environment Properties Property Name Property Description standalone.skip If set to true , the tool skips the entire standalone server migration. server.source.standalone.serverDir Defines an alternative path for the source server's standalone directory, which defaults to the source server's EAP_HOME /standalone/ directory. server.source.standalone.configDir Defines an alternative path for the source server's standalone configuration directory, which defaults to the source server's EAP_HOME /standalone/configuration/ directory. server.source.standalone.configFiles A comma-delimited list of the source server's standalone configurations to be migrated. server.target.standalone.serverDir Defines an alternative path for the target server's standalone directory, which defaults to the target server's EAP_HOME /standalone/ directory. server.target.standalone.configDir Defines an alternative path for the target server's standalone configuration directory, which defaults to the target server's EAP_HOME /standalone/configuration/ directory. For information about how to configure the JBoss Server Migration Tool using these properties, see Configuring the JBoss Server Migration Tool . 5.6. Configuring the Migration of a Managed Domain Configuration You can configure the JBoss Server Migration Tool to skip the migration of a managed domain entirely, to provide the configuration file names that you want to migrate, or to provide alternate paths for the source or target server's base and configuration directories. You can customize the migration of the managed domain configuration using the following environment properties. Table 5.2. Managed Domain Migration Environment Properties Property Name Property Description domain.skip If set to true , the tool skips the entire managed domain migration. server.source.domain.domainDir Defines an alternative path for the source server's managed domain directory, which defaults to the source server's EAP_HOME /domain/ directory. server.source.domain.configDir Defines an alternative path for the source server's managed domain configuration directory, which defaults to the source server's EAP_HOME /domain/configuration/ directory. server.source.domain.domainConfigFiles A comma-delimited list of the source server's managed domain configuration files that are to be migrated. server.source.domain.hostConfigFiles A comma-delimited list of the source server's host configuration files that are to be migrated. server.target.domain.domainDir Defines an alternative path for the target server's managed domain directory, which defaults to the target server's EAP_HOME /domain/ directory. server.target.domain.configDir Defines an alternative path for the target server's managed domain configuration directory, which defaults to the target server's EAP_HOME /domain/configuration/ directory. For information about how to configure the JBoss Server Migration Tool using these properties, see Configuring the JBoss Server Migration Tool . 5.7. Configure the Migration Tasks Performed by the JBoss Server Migration Tool By default, the JBoss Server Migration Tool automatically migrates all components and subsystems for each standalone server, managed domain, and host configuration you choose to migrate. You can customize the execution of specific tasks and subtasks performed by the tool using environment properties. For example, you can configure the tool to skip the removal of unsupported subsystems or to skip the migration of deployments. 
The tasks performed by the tool are dependent upon the type of server configuration and the version of the source server from which you are migrating. Information about how to configure environment properties to customize the tasks performed by the JBoss Server Migration Tool can be found in the following sections.
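A minimal sketch that combines the property sources described in this chapter: a user properties file that migrates a single standalone configuration, skips the managed domain migration, and renames the XML report, passed to the tool with the --environment argument. EAP_HOME and EAP_PREVIOUS_HOME are placeholders for your installation paths:

cat > my-server-migration.properties <<'EOF'
# migrate a single standalone configuration and skip the managed domain migration
server.source.standalone.configFiles=standalone-full.xml
domain.skip=true
# write the XML report under a custom name
report.xml.fileName=my-migration-report.xml
EOF

EAP_HOME/bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME \
  --environment my-server-migration.properties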
[ "EAP_HOME /bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME --environment path/to /my-server-migration.properties", "EAP_HOME /bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME -Djboss.server.migration. PROPERTY_NAME = PROPERTY_VALUE", "EAP_HOME /bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME -Djboss.server.migration.report.xml.fileName=migration-report.xml", "EAP_HOME /bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME -Dlogging.configuration=file: EAP_PREVIOUS_HOME /migration/configuration/my-alternate-logging.properties", "modules.includes=com.example.moduleA,com.example.moduleB modules.excludes=com.example.moduleC", "EAP_HOME /bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME --environment PATH_TO_MY_PROPERTIES_FILE", "EAP_HOME /bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME -Djboss.server.migration.modules.includes=\"com.example.moduleA\" -Djboss.server.migration.modules.excludes=\"com.example.moduleC,com.example.moduleD\"" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_the_jboss_server_migration_tool/configuring_the_migration_tool
Chapter 7. EAP Operator for Automating Application Deployment on OpenShift
Chapter 7. EAP Operator for Automating Application Deployment on OpenShift EAP operator is a JBoss EAP-specific controller that extends the OpenShift API. You can use the EAP operator to create, configure, manage, and seamlessly upgrade instances of complex stateful applications. The EAP operator manages multiple JBoss EAP Java application instances across the cluster. It also ensures safe transaction recovery in your application cluster by verifying all transactions are completed before scaling down the replicas and marking a pod as clean for termination. The EAP operator uses StatefulSet for the appropriate handling of Jakarta Enterprise Beans remoting and transaction recovery processing. The StatefulSet ensures persistent storage and network hostname stability even after pods are restarted. You must install the EAP operator using OperatorHub, which can be used by OpenShift cluster administrators to discover, install, and upgrade operators. In OpenShift Container Platform 4, you can use the Operator Lifecycle Manager (OLM) to install, update, and manage the lifecycle of all operators and their associated services running across multiple clusters. The OLM runs by default in OpenShift Container Platform 4. It aids cluster administrators in installing, upgrading, and granting access to operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install operators, as well as grant specific projects access to use the catalog of operators available on the cluster. For more information about operators and the OLM, see the OpenShift documentation . 7.1. Installing EAP Operator Using the Web Console As a JBoss EAP cluster administrator, you can install an EAP operator from Red Hat OperatorHub using the OpenShift Container Platform web console. You can then subscribe the EAP operator to one or more namespaces to make it available for developers on your cluster. Here are a few points you must be aware of before installing the EAP operator using the web console: Installation Mode: Choose All namespaces on the cluster (default) to have the operator installed on all namespaces or choose individual namespaces, if available, to install the operator only on selected namespaces. Update Channel: If the EAP operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy: You can choose automatic or manual updates. If you choose automatic updates for the EAP operator, when a new version of the operator is available, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of EAP operator. If you choose manual updates, when a newer version of the operator is available, the OLM creates an update request. You must then manually approve the update request to have the operator updated to the new version. Note The following procedure might change in accordance with the modifications in the OpenShift Container Platform web console. For the latest and most accurate procedure, see the Installing from the OperatorHub using the web console section in the latest version of the Working with Operators in OpenShift Container Platform guide. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure In the OpenShift Container Platform web console, navigate to Operators -> OperatorHub . 
Scroll down or type EAP into the Filter by keyword box to find the EAP operator. Select JBoss EAP operator and click Install . On the Create Operator Subscription page: Select one of the following: All namespaces on the cluster (default) installs the operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available. A specific namespace on the cluster installs the operator in a specific, single namespace that you choose. The operator is made available for use only in this single namespace. Select an Update Channel . Select Automatic or Manual approval strategy, as described earlier. Click Subscribe to make the EAP operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a manual approval strategy, the subscription's upgrade status remains Upgrading until you review and approve its install plan. After you approve the install plan on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an automatic approval strategy, the upgrade status moves to Up to date without intervention. After the subscription's upgrade status is Up to date , select Operators Installed Operators to verify that the EAP ClusterServiceVersion (CSV) shows up and its Status changes to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status displayed is InstallSucceeded in the openshift-operators namespace. In other namespaces the status displayed is Copied . If the Status field does not change to InstallSucceeded , check the logs in any pod in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 7.2. Installing EAP Operator Using the CLI As a JBoss EAP cluster administrator, you can install an EAP operator from Red Hat OperatorHub using the OpenShift Container Platform CLI. You can then subscribe the EAP operator to one or more namespaces to make it available for developers on your cluster. When installing the EAP operator from the OperatorHub using the CLI, use the oc command to create a Subscription object. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the oc tool in your local system. Procedure View the list of operators available to the cluster from the OperatorHub: Create a Subscription object YAML file (for example, eap-operator-sub.yaml ) to subscribe a namespace to your EAP operator. The following is an example Subscription object YAML file: 1 Name of the operator to subscribe to. 2 The EAP operator is provided by the redhat-operators CatalogSource. For information about channels and approval strategy, see the web console version of this procedure. Create the Subscription object from the YAML file: The EAP operator is successfully installed. At this point, the OLM is aware of the EAP operator. A ClusterServiceVersion (CSV) for the operator appears in the target namespace, and APIs provided by the EAP operator is available for creation. 7.3. The eap-s2i-build template for creating application images Use the eap-s2i-build template to create your application images. The eap-s2i-build template adds several parameters to configure the location of the application source repository and the EAP S2I images to use to build your application. 
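Looking back at the CLI installation procedure in the previous section, the following is a minimal sketch of a Subscription object. The package name eap and the stable channel are assumptions; confirm both against the output of oc get packagemanifests -n openshift-marketplace before applying it:

cat <<'EOF' > eap-operator-sub.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: eap
  namespace: openshift-operators
spec:
  channel: stable                    # assumed channel; verify against the package manifest
  name: eap                          # assumed package name of the EAP operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
EOF
oc apply -f eap-operator-sub.yaml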
The APPLICATION_IMAGE parameter in the eap-s2i-build template specifies the name of the imagestream corresponding to the application image. For example, if you created an application image named my-app from the eap-s2i-build template, you can use the my-app:latest imagestreamtag from the my-app imagestream to deploy your application. For more information about the parameters used in the eap-s2i-build template, see Building an application image using eap-s2i-build template . With this template, the EAP operator can seamlessly upgrade your applications deployed on OpenShift. To enable seamless upgrades, you must configure a webhook in your GitHub repository and specify the webhook in the build configuration. The webhook notifies OpenShift when your repository is updated and a new build is triggered. You can use this template to build an application image using an imagestream for any JBoss EAP version, such as JBoss EAP 7.4, JBoss EAP XP, or JBoss EAP CD. Additional resources Building an application image using eap-s2i-build template . 7.4. Building an application image using eap-s2i-build template The eap-s2i-build template adds several parameters to configure the location of your application source repository and the EAP S2I images to use to build the application. With this template, you can use an imagestream for any JBoss EAP version, such as JBoss EAP 7.4, JBoss EAP XP, or JBoss EAP CD. Procedure Import EAP images in OpenShift. For more information, see Importing the OpenShift image streams and templates for JBoss EAP XP . Configure the imagestream to receive updates about the changes in the application imagestream and to trigger new builds. For more information, see Configuring periodic importing of imagestreamtags . Create the eap-s2i-build template for building the application image using EAP S2I images: This eap-s2i-build template creates two build configurations and two imagestreams corresponding to the intermediate build artifacts and the final application image. Process the eap-s2i-build template with parameters to create the resources for the final application image. The following example creates an application image, my-app : 1 The name for the application imagestream. The application image is tagged with the latest tag. 2 The imagestreamtag for EAP builder image. 3 The imagestreamtag for EAP runtime image. 4 The namespace in which the imagestreams for Red Hat Middleware images are installed. If omitted, the openshift namespace is used. Modify this only if you have installed the imagestreams in a namespace other than openshift . 5 The Git source URL of your application. 6 The Git branch or tag reference 7 The path within the Git repository that contains the application to build. Prepare the application image for deployment using the EAP operator. Configure the WildFlyServer resource: Apply the settings and let the EAP operator create a new WildFlyServer resource that references this application image: View the WildFlyServer resource with the following command: Additional resources For more information about importing an application imagestream, see Importing the latest OpenShift image streams and templates for JBoss EAP XP . For more information about periodic importing of imagestreams, see Configuring periodic importing of imagestreamtags . 7.5. Deploying a Java Application on OpenShift Using the EAP Operator The EAP operator helps automate Java application deployment on OpenShift. For information about the EAP operator APIs, see EAP Operator: API Information . 
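As a sketch of the deployment step at the end of this procedure, the following creates a WildFlyServer resource that points at the application imagestreamtag produced by the eap-s2i-build template, then inspects it. The resource name and image tag are placeholders; the template's build parameters can be listed first with oc process --parameters eap-s2i-build:

cat <<'EOF' | oc apply -f -
apiVersion: wildfly.org/v1alpha1
kind: WildFlyServer
metadata:
  name: my-app
spec:
  applicationImage: 'my-app:latest'   # imagestreamtag built by the eap-s2i-build template
  replicas: 1
EOF

# view the resource created by the EAP operator
oc get wildflyserver my-app -o yaml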
Prerequisites You have installed EAP operator. For more information about installing the EAP operator, see Installing EAP Operator Using the Webconsole and Installing EAP Operator Using the CLI . You have built a Docker image of the user application using JBoss EAP for OpenShift Source-to-Image (S2I) build image. The APPLICATION_IMAGE parameter in your eap-s2i-build template has an imagestream, if you want to enable automatic upgrade of your application after it is deployed on OpenShift. For more information about building your application image using the eap-s2i-build template, see Building an application image using eap-s2i-build template . You have created a Secret object, if your application's CustomResourceDefinition (CRD) file references one. For more information about creating a new Secret object, see Creating a Secret . You have created a ConfigMap , if your application's CRD file references one. For information about creating a ConfigMap , see Creating a ConfigMap . You have created a ConfigMap from the standalone.xml file, if you choose to do so. For information about creating a ConfigMap from the standalone.xml file, see Creating a ConfigMap from a standalone.xml File . Note Providing a standalone.xml file from the ConfigMap is not supported in JBoss EAP 7. Procedure Open your web browser and log on to OperatorHub. Select the Project or namespace you want to use for your Java application. Navigate to Installed Operator and select JBoss EAP operator . On the Overview tab, click the Create Instance link. Specify the application image details. The application image specifies the Docker image that contains the Java application. The image must be built using the JBoss EAP for OpenShift Source-to-Image (S2I) build image. If the applicationImage field corresponds to an imagestreamtag, any change to the image triggers an automatic upgrade of the application. You can provide any of the following references of the JBoss EAP for OpenShift application image: The name of the image: mycomp/myapp A tag: mycomp/myapp:1.0 A digest: mycomp/myapp:@sha256:0af38bc38be93116b6a1d86a9c78bd14cd527121970899d719baf78e5dc7bfd2 An imagestreamtag: my-app:latest Specify the size of the application. For example: Configure the application environment using the env spec . The environment variables can come directly from values, such as POSTGRESQL_SERVICE_HOST or from Secret objects, such as POSTGRESQL_USER. For example: Complete the following optional configurations that are relevant to your application deployment: Specify the storage requirements for the server data directory. For more information, see Configuring Persistent Storage for Applications . Specify the name of the Secret you created in WildFlyServerSpec to mount it as a volume in the pods running the application. For example: The Secret is mounted at /etc/secrets/<secret name> and each key/value is stored as a file. The name of the file is the key and the content is the value. The Secret is mounted as a volume inside the pod. The following example demonstrates commands that you can use to find key values: Note Modifying a Secret object might lead to project inconsistencies. Instead of modifying an existing Secret object, Red Hat recommends creating a new object with the same content as that of the old one. You can then update the content as required and change the reference in operator custom resource (CR) from old to new. This is considered a new CR update and the pods are reloaded. 
Specify the name of the ConfigMap you created in WildFlyServerSpec to mount it as a volume in the pods running the application. For example: The ConfigMap is mounted at /etc/configmaps/<configmap name> and each key/value is stored as a file. The name of the file is the key and the content is the value. The ConfigMap is mounted as a volume inside the pod. To find the key values: Note Modifying a ConfigMap might lead to project inconsistencies. Instead of modifying an existing ConfigMap , Red Hat recommends creating a new ConfigMap with the same content as that of the old one. You can then update the content as required and change the reference in the operator custom resource (CR) from old to new. This is considered a new CR update and the pods are reloaded. If you choose to have your own standalone ConfigMap , provide the name of the ConfigMap as well as the key for the standalone.xml file: Note Creating a ConfigMap from the standalone.xml file is not supported in JBoss EAP 7. If you want to disable the default HTTP route creation in OpenShift, set disableHTTPRoute to true : 7.5.1. Creating a Secret If your application's CustomResourceDefinition (CRD) file references a Secret , you must create the Secret before deploying your application on OpenShift using the EAP operator. Procedure To create a Secret : 7.5.2. Creating a ConfigMap If your application's CustomResourceDefinition (CRD) file references a ConfigMap in the spec.ConfigMaps field, you must create the ConfigMap before deploying your application on OpenShift using the EAP operator. Procedure To create a ConfigMap : 7.5.3. Creating a ConfigMap from a standalone.xml File You can create your own JBoss EAP standalone configuration instead of using the one in the application image that comes from JBoss EAP for OpenShift Source-to-Image (S2I). The standalone.xml file must be put in a ConfigMap that is accessible by the operator. Note Providing a standalone.xml file from the ConfigMap is not supported in JBoss EAP 7. Procedure To create a ConfigMap from the standalone.xml file: 7.5.4. Configuring Persistent Storage for Applications If your application requires persistent storage for some data, such as transaction or messaging logs that must persist across pod restarts, configure the storage spec. If the storage spec is empty, an EmptyDir volume is used by each pod of the application. However, this volume does not persist after its corresponding pod is stopped. Procedure Specify volumeClaimTemplate to configure the resource requirements to store the JBoss EAP standalone data directory. The name of the template is derived from the name of JBoss EAP. The corresponding volume is mounted in ReadWriteOnce access mode. The persistent volume that meets this storage requirement is mounted on the /eap/standalone/data directory. 7.6. Deploying the Red Hat Single Sign-On-enabled image by using EAP operator The EAP operator helps you to deploy an EAP application image with Red Hat Single Sign-On enabled on OpenShift. To deploy the application image, configure the environment variables and secrets listed in the table. Prerequisites You have installed the EAP operator. For more information about installing the EAP operator, see Installing EAP operator using the web console and Installing EAP operator using the CLI . You have built the EAP application image by using the eap74-sso-s2i template. For information about building the EAP application image, see Building an application image .
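In addition to these prerequisites, the secrets referenced by the example configuration in the following procedure, such as sso-app-secret and eap-ssl-secret , must exist in your project. The following is a minimal sketch of creating them from keystore files; every file name here is a placeholder, and the actual keystores are the ones you prepared when building the image with the eap74-sso-s2i template.
# SAML keystore and truststore referenced by SSO_SAML_KEYSTORE_SECRET and SSO_TRUSTSTORE_SECRET
oc create secret generic sso-app-secret --from-file=keystore.jks=sso-keystore.jks --from-file=truststore.jks=sso-truststore.jks
# HTTPS keystore referenced by HTTPS_SECRET
oc create secret generic eap-ssl-secret --from-file=keystore.jks=https-keystore.jks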
Procedure Remove the DeploymentConfig file, created by the eap74-sso-s2i template, from the location where you have built the EAP application image. In the env field of the EAP operator's WildFlyServer resource, configure all the environment variables and secrets . Example configuration Note Ensure that all environment variables and secrets match the image configuration. The value of the parameter SSO_URL varies depending on the user of the OpenShift cluster. The EAP operator mounts the secrets in the /etc/secret directory, whereas the eap74-sso template mounts the secrets in the /etc directory. Save the EAP operator's WildFlyServer resource configuration. 7.7. Viewing metrics of an application using the EAP operator You can view the metrics of an application deployed on OpenShift using the EAP operator. When your cluster administrator enables metrics monitoring in your project, the EAP operator automatically displays the metrics on the OpenShift console. Prerequisites Your cluster administrator has enabled monitoring for your project. For more information, see Enabling monitoring for user-defined projects . Procedure In the OpenShift Container Platform web console, navigate to Monitoring -> Metrics . On the Metrics screen, type the name of your application in the text box to select your application. The metrics for your application appear on the screen. Note All metrics related to JBoss EAP application server are prefixed with jboss . For example, jboss_undertow_request_count_total . 7.8. Uninstalling EAP Operator Using Web Console To delete, or uninstall, EAP operator from your cluster, you can delete the subscription to remove it from the subscribed namespace. You can also remove the EAP operator's ClusterServiceVersion (CSV) and deployment. Note To ensure data consistency and safety, scale down the number of pods in your cluster to 0 before uninstalling the EAP operator. You can uninstall the EAP operator using the web console. Warning If you decide to delete the entire wildflyserver definition ( oc delete wildflyserver <deployment_name> ), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked. Procedure From the Operators -> Installed Operators page, select JBoss EAP . On the right-hand side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu. When prompted by the Remove Operator Subscription window, optionally select the Also completely remove the Operator from the selected namespace check box if you want all components related to the installation to be removed. This removes the CSV, which in turn removes the pods, deployments, custom resource definitions (CRDs), and custom resources (CRs) associated with the operator. Click Remove . The EAP operator stops running and no longer receives updates. 7.9. Uninstalling EAP Operator using the CLI To delete, or uninstall, the EAP operator from your cluster, you can delete the subscription to remove it from the subscribed namespace. You can also remove the EAP operator's ClusterServiceVersion (CSV) and deployment. Note To ensure data consistency and safety, scale down the number of pods in your cluster to 0 before uninstalling the EAP operator. 
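A minimal sketch of the scale-down step recommended in the note above, assuming <deployment_name> is the name of your WildFlyServer resource: set spec.replicas to 0 and wait for the pods to terminate before removing the operator.
# scale the application down to zero replicas so transaction recovery can complete
oc patch wildflyserver <deployment_name> --type=merge -p '{"spec":{"replicas":0}}'
# watch until no application pods remain
oc get pods -w
Only after all application pods are gone should you delete the subscription and the CSV.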
You can uninstall the EAP operator using the command line. When using the command line, you uninstall the operator by deleting the subscription and CSV from the target namespace. Warning If you decide to delete the entire wildflyserver definition ( oc delete wildflyserver <deployment_name> ), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked. Procedure Check the current version of the EAP operator subscription in the currentCSV field: Delete the EAP operator's subscription: Delete the CSV for the EAP operator in the target namespace using the currentCSV value from the step: 7.10. EAP Operator for Safe Transaction Recovery For certain types of transactions, EAP operator ensures data consistency before terminating your application cluster by verifying that all transactions are completed before scaling down the replicas and marking a pod as clean for termination. Note Some scenarios are not supported. For more information about the unsupported scenarios, see Unsupported Transaction Recovery Scenarios . This means that if you want to remove the deployment safely without data inconsistencies, you must first scale down the number of pods to 0, wait until all pods are terminated, and only then delete the wildflyserver instance. Warning If you decide to delete the entire wildflyserver definition ( oc delete wildflyserver <deployment_name> ), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked. When the scaledown process begins the pod state ( oc get pod <pod_name> ) is still marked as Running , because the pod must complete all the unfinished transactions, including the remote enterprise beans calls that target it. If you want to monitor the state of the scaledown process, observe the status of the wildflyserver instance. For more information, see Monitoring the Scaledown Process . For information about pod statuses during scaledown, see Pod Status During Scaledown . 7.10.1. StatefulSets for Stable Network Host Names The EAP operator that manages the wildflyserver creates a StatefulSet as an underlying object managing the JBoss EAP pods. A StatefulSet is the workload API object that manages stateful applications. It manages the deployment and scaling of a set of pods, and provides guarantees about the ordering and uniqueness of these pods. The StatefulSet ensures that the pods in a cluster are named in a predefined order. It also ensures that pod termination follows the same order. For example, let us say, pod-1 has a transaction with heuristic outcome, and so is in the state of SCALING_DOWN_RECOVERY_DIRTY . Even if pod-0 is in the state of SCALING_DOWN_CLEAN , it is not terminated before pod-1. Until pod-1 is clean and is terminated, pod-0 remains in the SCALING_DOWN_CLEAN state. However, even if pod-0 is in the SCALING_DOWN_CLEAN state, it does not receive any new request and is practically idle. 
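If you want to observe this ordering in practice, the following sketch lists the StatefulSet that the operator created and watches the pods terminate in reverse ordinal order during a scale-down; the exact StatefulSet name depends on the name of your WildFlyServer resource.
# list the StatefulSet backing the WildFlyServer resource
oc get statefulset
# watch the ordered termination of the pods
oc get pods -w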
Note Decreasing the replica size of the StatefulSet or deleting the pod itself has no effect and such changes are reverted. 7.10.2. Monitoring the Scaledown Process If you want to monitor the state of the scaledown process, you must observe the status of the wildflyserver instance. For more information about the different pod statuses during scaledown, see Pod Status During Scaledown . Procedure To observe the state of the scaledown process: The WildFlyServer.Status.Scalingdown Pods and WildFlyServer.Status.Replicas fields shows the overall state of the active and non-active pods. The Scalingdown Pods field shows the number of pods which are about to be terminated when all the unfinished transactions are complete. The WildFlyServer.Status.Replicas field shows the current number of running pods. The WildFlyServer.Spec.Replicas field shows the number of pods in ACTIVE state. If there are no pods in scaledown process the numbers of pods in the WildFlyServer.Status.Replicas and WildFlyServer.Spec.Replicas fields are equal. 7.10.2.1. Pod Status During Scaledown The following table describes the different pod statuses during scaledown: Table 7.1. Pod Status Description Pod Status Description ACTIVE The pod is active and processing requests. SCALING_DOWN_RECOVERY_INVESTIGATION The pod is about to be scaled down. The scale-down process is under investigation about the state of transactions in JBoss EAP. SCALING_DOWN_RECOVERY_DIRTY JBoss EAP contains some incomplete transactions. The pod cannot be terminated until they are cleaned. The transaction recovery process is periodically run at JBoss EAP and it waits until the transactions are completed SCALING_DOWN_CLEAN The pod is processed by transaction scaled down processing and is marked as clean to be removed from the cluster. 7.10.3. Scaling Down During Transactions with Heuristic Outcomes When the outcome of a transaction is unknown, automatic transaction recovery is impossible. You must then manually recover your transactions. Prerequisites The status of your pod is stuck at SCALING_DOWN_RECOVERY_DIRTY . Procedure Access your JBoss EAP instance using CLI. Resolve all the heuristics transaction records in the transaction object store. For more information, see Recovering Heuristic Outcomes in the Managing Transactions on JBoss EAP . Remove all records from the enterprise bean client recovery folder. Remove all files from the pod enterprise bean client recovery directory: The status of your pod changes to SCALING_DOWN_CLEAN and the pod is terminated. 7.10.4. Configuring the transactions subsystem to use the JDBC storage for transaction log In cases where the system does not provide a file system to store transaction logs , use the JBoss EAP S2I image to configure the JDBC object store. Important S2I environment variables are not usable when JBoss EAP is deployed as a bootable JAR. In this case, you must create a Galleon layer or configure a CLI script to make the necessary configuration changes. The JDBC object store can be set up with the environment variable TX_DATABASE_PREFIX_MAPPING . This variable has the same structure as DB_SERVICE_PREFIX_MAPPING . Prerequisite You have created a datasource based on the value of the environment variables. You have ensured consistent data reads and writes permissions exist between the database and the transaction manager communicating over the JDBC object store. For more information see configuring JDBC data sources Procedure Set up and configure the JDBC object store through the S2I environment variable. 
Example Verification You can verify both the datasource configuration and transaction subsystem configuration by checking the standalone-openshift.xml configuration file oc rsh <podname> cat /opt/eap/standalone/configuration/standalone-openshift.xml . Expected output: Additional resources For more information about creating datasources by using either the management console or the management CLI, see Creating Datasources in the JBoss EAP Configuration Guide . 7.11. Automatically scaling pods with the horizontal pod autoscaler HPA With EAP operator, you can use a horizontal pod autoscaler HPA to automatically increase or decrease the scale of an EAP application based on metrics collected from the pods that belong to that EAP application. Note Using HPA ensures that transaction recovery is still handled when a pod is scaled down. Procedure Configure the resources: Important You must specify the resource limits and requests for containers in a pod for autoscaling to work as expected. Create the Horizontal pod autoscaler: Verification You can verify the HPA behavior by checking the replicas. The number of replicas increase or decrease depending on the increase or decrease of the workload. Additional resources https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html-single/nodes/index#nodes-pods-autoscaling 7.12. Jakarta Enterprise Beans Remoting on OpenShift For JBoss EAP to work correctly with enterprise bean remoting calls between different JBoss EAP clusters on OpenShift, you must understand the enterprise bean remoting configuration options on OpenShift. Note When deploying on OpenShift, consider the use of the EAP operator. The EAP operator uses StatefulSet for the appropriate handling of enterprise bean remoting and transaction recovery processing. The StatefulSet ensures persistent storage and network hostname stability even after pods are restarted. Network hostname stability is required when the JBoss EAP instance is contacted using an enterprise bean remote call with transaction propagation. The JBoss EAP instance must be reachable under the same hostname even if the pod restarts. The transaction manager, which is a stateful component, binds the persisted transaction data to a particular JBoss EAP instance. Because the transaction log is bound to a specific JBoss EAP instance, it must be completed in the same instance. To prevent data loss when the JDBC transaction log store is used, make sure your database provides data-consistent reads and writes. Consistent data reads and writes are important when the database is scaled horizontally with multiple instances. An enterprise bean remote caller has two options to configure the remote calls: Define a remote outbound connection. For more information, see Configuring a Remote Outbound Connection . Use a programmatic JNDI lookup for the bean at the remote server. For more information, see Using Remote Jakarta Enterprise Beans Clients . You must reconfigure the value representing the address of the target node depending on the enterprise bean remote call configuration method. Note The name of the target enterprise bean for the remote call must be the DNS address of the first pod. The StatefulSet behaviour depends on the ordering of the pods. The pods are named in a predefined order. For example, if you scale your application to three replicas, your pods have names such as eap-server-0 , eap-server-1 , and eap-server-2 . 
The EAP operator also uses a headless service that ensures a specific DNS hostname is assigned to the pod. If the application uses the EAP operator, a headless service is created with a name such as eap-server-headless . In this case, the DNS name of the first pod is eap-server-0.eap-server-headless . The use of the hostname eap-server-0.eap-server-headless ensures that the enterprise bean call reaches any EAP instance connected to the cluster. A bootstrap connection is used to initialize the Jakarta Enterprise Beans client, which gathers the structure of the EAP cluster as the next step. 7.12.1. Configuring Jakarta Enterprise Beans on OpenShift You must configure the JBoss EAP servers that act as callers for enterprise bean remoting. On the target server, you must configure a user with permission to receive the enterprise bean remote calls. Prerequisites You have used the EAP operator and the supported JBoss EAP for OpenShift S2I image for deploying and managing the JBoss EAP application instances on OpenShift. The clustering is set correctly. For more information about JBoss EAP clustering, see the Clustering section. Procedure Create a user on the target server with permission to receive the enterprise bean remote calls: Configure the caller JBoss EAP application server. Create the eap-config.xml file in USDJBOSS_HOME/standalone/configuration using the custom configuration functionality. For more information, see Custom Configuration . Configure the caller JBoss EAP application server with the wildfly.config.url property: Note If you use the following example for your configuration, replace the >>PASTE_... _HERE<< placeholders with the username and password you configured. Example Configuration <configuration> <authentication-client xmlns="urn:elytron:1.0"> <authentication-rules> <rule use-configuration="jta"> <match-abstract-type name="jta" authority="jboss" /> </rule> </authentication-rules> <authentication-configurations> <configuration name="jta"> <sasl-mechanism-selector selector="DIGEST-MD5" /> <providers> <use-service-loader /> </providers> <set-user-name name="PASTE_USER_NAME_HERE" /> <credentials> <clear-password password="PASTE_PASSWORD_HERE" /> </credentials> <set-mechanism-realm name="ApplicationRealm" /> </configuration> </authentication-configurations> </authentication-client> </configuration>
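To tie the caller configuration above to the DNS naming discussed earlier, the following management CLI sketch points a remote outbound connection at the first pod through the headless service. The names remote-ejb , remote-ejb-connection , and eap-server-0.eap-server-headless are illustrative and must match your own deployment, the port depends on your server configuration, and the Elytron authentication configuration shown above still supplies the credentials.
# outbound socket binding that targets the first pod through the headless service
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-ejb:add(host=eap-server-0.eap-server-headless, port=8080)
# remote outbound connection used by the enterprise bean client
/subsystem=remoting/remote-outbound-connection=remote-ejb-connection:add(outbound-socket-binding-ref=remote-ejb, protocol=http-remoting)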
[ "oc get packagemanifests -n openshift-marketplace | grep eap NAME CATALOG AGE eap Red Hat Operators 43d", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: eap namespace: openshift-operators spec: channel: stable installPlanApproval: Automatic name: eap 1 source: redhat-operators 2 sourceNamespace: openshift-marketplace", "oc apply -f eap-operator-sub.yaml oc get csv -n openshift-operators NAME DISPLAY VERSION REPLACES PHASE eap-operator.v1.0.0 JBoss EAP 1.0.0 Succeeded", "oc replace --force -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/master/eap-s2i-build.yaml", "oc process eap-s2i-build -p APPLICATION_IMAGE=my-app \\ 1 -p EAP_IMAGE=jboss-eap-xp1-openjdk11-openshift:1.0 \\ 2 -p EAP_RUNTIME_IMAGE=jboss-eap-xp1-openjdk11-runtime-openshift:1.0 \\ 3 -p EAP_IMAGESTREAM_NAMESPACE=USD(oc project -q) \\ 4 -p SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/jboss-eap-quickstarts.git \\ 5 -p SOURCE_REPOSITORY_REF=xp-1.0.x \\ 6 -p CONTEXT_DIR=microprofile-config | oc create -f - 7", "cat > my-app.yaml<<EOF apiVersion: wildfly.org/v1alpha1 kind: WildFlyServer metadata: name: my-app spec: applicationImage: 'my-app:latest' replicas: 1 EOF", "oc apply -f my-app.yaml", "oc get wfly my-app", "spec: replicas:2", "spec: env: - name: POSTGRESQL_SERVICE_HOST value: postgresql - name: POSTGRESQL_SERVICE_PORT value: '5432' - name: POSTGRESQL_DATABASE valueFrom: secretKeyRef: key: database-name name: postgresql - name: POSTGRESQL_USER valueFrom: secretKeyRef: key: database-user name: postgresql - name: POSTGRESQL_PASSWORD valueFrom: secretKeyRef: key: database-password name: postgresql", "spec: secrets: - my-secret", "ls /etc/secrets/my-secret/ my-key my-password cat /etc/secrets/my-secret/my-key devuser cat /etc/secrets/my-secret/my-password my-very-secure-pasword", "spec: configMaps: - my-config", "ls /etc/configmaps/my-config/ key1 key2 cat /etc/configmaps/my-config/key1 value1 cat /etc/configmaps/my-config/key2 value2", "standaloneConfigMap: name: clusterbench-config-map key: standalone-openshift.xml", "spec: disableHTTPRoute: true", "oc create secret generic my-secret --from-literal=my-key=devuser --from-literal=my-password='my-very-secure-pasword'", "oc create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2 configmap/my-config created", "oc create configmap clusterbench-config-map --from-file examples/clustering/config/standalone-openshift.xml configmap/clusterbench-config-map created", "spec: storage: volumeClaimTemplate: spec: resources: requests: storage: 3Gi", "cat > my-app.yaml<<EOF apiVersion: wildfly.org/v1alpha1 kind: WildFlyServer metadata: name: my-app spec: applicationImage: 'my-app:latest' replicas: 1 env: - name: SSO_URL value: https://secure-sso-sso-app-demo.openshift32.example.com/auth - name: SSO_REALM value: eap-demo - name: SSO_PUBLIC_KEY value: realm-public-key - name: SSO_USERNAME value: mySsoUser - name: SSO_PASSWORD value: 6fedmL3P - name: SSO_SAML_KEYSTORE value: /etc/secret/sso-app-secret/keystore.jks - name: SSO_SAML_KEYSTORE_PASSWORD value: mykeystorepass - name: SSO_SAML_CERTIFICATE_NAME value: jboss - name: SSO_BEARER_ONLY value: true - name: SSO_CLIENT value: module-name - name: SSO_ENABLE_CORS value: true - name: SSO_SECRET value: KZ1QyIq4 - name: SSO_DISABLE_SSL_CERTIFICATE_VALIDATION value: true - name: SSO_SAML_KEYSTORE_SECRET value: sso-app-secret - name: HTTPS_SECRET value: eap-ssl-secret - name: SSO_TRUSTSTORE_SECRET value: sso-app-secret EOF", "oc get 
subscription eap-operator -n openshift-operators -o yaml | grep currentCSV currentCSV: eap-operator.v1.0.0", "oc delete subscription eap-operator -n openshift-operators subscription.operators.coreos.com \"eap-operator\" deleted", "oc delete clusterserviceversion eap-operator.v1.0.0 -n openshift-operators clusterserviceversion.operators.coreos.com \"eap-operator.v1.0.0\" deleted", "describe wildflyserver <name>", "USDJBOSS_HOME/standalone/data/ejb-xa-recovery exec <podname> rm -rf USDJBOSS_HOME/standalone/data/ejb-xa-recovery", "Narayana JDBC objectstore configuration via s2i env variables - name: TX_DATABASE_PREFIX_MAPPING value: 'PostgresJdbcObjectStore-postgresql=PG_OBJECTSTORE' - name: POSTGRESJDBCOBJECTSTORE_POSTGRESQL_SERVICE_HOST value: 'postgresql' - name: POSTGRESJDBCOBJECTSTORE_POSTGRESQL_SERVICE_PORT value: '5432' - name: PG_OBJECTSTORE_JNDI value: 'java:jboss/datasources/PostgresJdbc' - name: PG_OBJECTSTORE_DRIVER value: 'postgresql' - name: PG_OBJECTSTORE_DATABASE value: 'sampledb' - name: PG_OBJECTSTORE_USERNAME value: 'admin' - name: PG_OBJECTSTORE_PASSWORD value: 'admin'", "<datasource jta=\"false\" jndi-name=\"java:jboss/datasources/PostgresJdbcObjectStore\" pool-name=\"postgresjdbcobjectstore_postgresqlObjectStorePool\" enabled=\"true\" use-java-context=\"true\" statistics-enabled=\"USD{wildfly.datasources.statistics-enabled:USD{wildfly.statistics-enabled:false}}\"> <connection-url>jdbc:postgresql://postgresql:5432/sampledb</connection-url> <driver>postgresql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> </datasource> <!-- under subsystem urn:jboss:domain:transactions --> <jdbc-store datasource-jndi-name=\"java:jboss/datasources/PostgresJdbcObjectStore\"> <!-- the pod name was named transactions-xa-0 --> <action table-prefix=\"ostransactionsxa0\"/> <communication table-prefix=\"ostransactionsxa0\"/> <state table-prefix=\"ostransactionsxa0\"/> </jdbc-store>", "apiVersion: wildfly.org/v1alpha1 kind: WildFlyServer metadata: name: eap-helloworld spec: applicationImage: 'eap-helloworld:latest' replicas: 1 resources: limits: cpu: 500m memory: 2Gi requests: cpu: 100m memory: 1Gi", "autoscale wildflyserver/eap-helloworld --cpu-percent=50 --min=1 --max=10", "get hpa -w NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE eap-helloworld WildFlyServer/eap-helloworld 217%/50% 1 10 1 4s eap-helloworld WildFlyServer/eap-helloworld 217%/50% 1 10 4 17s eap-helloworld WildFlyServer/eap-helloworld 133%/50% 1 10 8 32s eap-helloworld WildFlyServer/eap-helloworld 133%/50% 1 10 10 47s eap-helloworld WildFlyServer/eap-helloworld 139%/50% 1 10 10 62s eap-helloworld WildFlyServer/eap-helloworld 180%/50% 1 10 10 92s eap-helloworld WildFlyServer/eap-helloworld 133%/50% 1 10 10 2m2s", "USDJBOSS_HOME/bin/add-user.sh", "JAVA_OPTS_APPEND=\"-Dwildfly.config.url=USDJBOSS_HOME/standalone/configuration/eap-config.xml\"", "<configuration> <authentication-client xmlns=\"urn:elytron:1.0\"> <authentication-rules> <rule use-configuration=\"jta\"> <match-abstract-type name=\"jta\" authority=\"jboss\" /> </rule> </authentication-rules> <authentication-configurations> <configuration name=\"jta\"> <sasl-mechanism-selector selector=\"DIGEST-MD5\" /> <providers> <use-service-loader /> </providers> <set-user-name name=\"PASTE_USER_NAME_HERE\" /> <credentials> <clear-password password=\"PASTE_PASSWORD_HERE\" /> </credentials> <set-mechanism-realm name=\"ApplicationRealm\" /> </configuration> </authentication-configurations> </authentication-client> </configuration>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_container_platform/eap-operator-for-automating-application-deployment-on-openshift_default
Appendix B. Preparing a Local Manually Configured PostgreSQL Database
Appendix B. Preparing a Local Manually Configured PostgreSQL Database Use this procedure to set up the Manager database. Set up this database before you configure the Manager; you must supply the database credentials during engine-setup . Note The engine-setup and engine-backup --mode=restore commands only support system error messages in the en_US.UTF8 locale, even if the system locale is different. The locale settings in the postgresql.conf file must be set to en_US.UTF8 . Important The database name must contain only numbers, underscores, and lowercase letters. Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the Manager machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream Initializing the PostgreSQL Database Install the PostgreSQL server package: # dnf install postgresql-server postgresql-contrib Initialize the PostgreSQL database instance: Start the postgresql service, and ensure that this service starts on boot: Connect to the psql command line interface as the postgres user: Create a default user. The Manager's default user is engine and Data Warehouse's default user is ovirt_engine_history : postgres=# create role user_name with login encrypted password ' password '; Create a database. 
The Manager's default database name is engine and Data Warehouse's default database name is ovirt_engine_history : postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8'; Connect to the new database: postgres=# \c database_name Add the uuid-ossp extension: database_name =# CREATE EXTENSION "uuid-ossp"; Add the plpgsql language if it does not exist: database_name =# CREATE LANGUAGE plpgsql; Quit the psql interface: database_name =# \q Edit the /var/lib/pgsql/data/pg_hba.conf file to enable md5 client authentication, so the engine can access the database locally. Add the following line immediately below the line that starts with local at the bottom of the file: host database_name user_name 0.0.0.0/0 md5 host database_name user_name ::0/0 md5 Update the PostgreSQL server's configuration. Edit the /var/lib/pgsql/data/postgresql.conf file and add the following lines to the bottom of the file: autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem=8192 Restart the postgresql service: # systemctl restart postgresql Optionally, set up SSL to secure database connections. Return to Configuring the Manager , and answer Local and Manual when asked about the database.
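Optionally, before you run engine-setup , you can confirm that the database accepts md5-authenticated TCP connections for the new user. This is a minimal sketch that assumes the default engine database and user names; it prompts for the password you set earlier.
psql -h 127.0.0.1 -U engine -d engine -c 'SELECT version();'
If the connection is rejected, re-check the pg_hba.conf entries and restart the postgresql service again.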
[ "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager list --consumed", "dnf repolist", "subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms", "subscription-manager release --set=8.6", "dnf module -y enable postgresql:12", "dnf module -y enable nodejs:14", "dnf distro-sync --nobest", "dnf install postgresql-server postgresql-contrib", "postgresql-setup --initdb", "systemctl enable postgresql systemctl start postgresql", "su - postgres -c psql", "postgres=# create role user_name with login encrypted password ' password ';", "postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';", "postgres=# \\c database_name", "database_name =# CREATE EXTENSION \"uuid-ossp\";", "database_name =# CREATE LANGUAGE plpgsql;", "database_name =# \\q", "host database_name user_name 0.0.0.0/0 md5 host database_name user_name ::0/0 md5", "autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem=8192", "systemctl restart postgresql" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/Preparing_a_Local_Manually-Configured_PostgreSQL_Database_SM_localDB_deploy
Chapter 3. Troubleshooting logging
Chapter 3. Troubleshooting logging 3.1. Viewing Logging status You can view the status of the Red Hat OpenShift Logging Operator and other logging components. 3.1.1. Viewing the status of the Red Hat OpenShift Logging Operator You can view the status of the Red Hat OpenShift Logging Operator. Prerequisites The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed. Procedure Change to the openshift-logging project by running the following command: USD oc project openshift-logging Get the ClusterLogging instance status by running the following command: USD oc get clusterlogging instance -o yaml Example output apiVersion: logging.openshift.io/v1 kind: ClusterLogging # ... status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: collector-2rhqp: ip-10-0-169-13.ec2.internal collector-6fgjh: ip-10-0-165-244.ec2.internal collector-6l2ff: ip-10-0-128-218.ec2.internal collector-54nx5: ip-10-0-139-30.ec2.internal collector-flpnn: ip-10-0-147-228.ec2.internal collector-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - collector-2rhqp - collector-54nx5 - collector-6fgjh - collector-6l2ff - collector-flpnn - collector-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1 1 In the output, the cluster status fields appear in the status stanza. 2 Information on the Fluentd pods. 3 Information on the Elasticsearch pods, including Elasticsearch cluster health, green , yellow , or red . 4 Information on the Kibana pods. 3.1.1.1. Example condition messages The following are examples of some condition messages from the Status.Nodes section of the ClusterLogging instance. A status message similar to the following indicates a node has exceeded the configured low watermark and no shard will be allocated to this node: Example output nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: "True" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {} A status message similar to the following indicates a node has exceeded the configured high watermark and shards will be relocated to other nodes: Example output nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. 
reason: Disk Watermark High status: "True" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {} A status message similar to the following indicates the Elasticsearch node selector in the CR does not match any nodes in the cluster: Example output Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: A status message similar to the following indicates that the requested PVC could not bind to PV: Example output Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable A status message similar to the following indicates that the Fluentd pods cannot be scheduled because the node selector did not match any nodes: Example output Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready: 3.1.2. Viewing the status of logging components You can view the status for a number of logging components. Prerequisites The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed. Procedure Change to the openshift-logging project. USD oc project openshift-logging View the status of logging environment: USD oc describe deployment cluster-logging-operator Example output Name: cluster-logging-operator .... Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1---- View the status of the logging replica set: Get the name of a replica set: Example output USD oc get replicaset Example output NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m Get the status of the replica set: USD oc describe replicaset cluster-logging-operator-574b8987df Example output Name: cluster-logging-operator-574b8987df .... Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv---- 3.2. Troubleshooting log forwarding 3.2.1. 
Redeploying Fluentd pods When you create a ClusterLogForwarder custom resource (CR), if the Red Hat OpenShift Logging Operator does not redeploy the Fluentd pods automatically, you can delete the Fluentd pods to force them to redeploy. Prerequisites You have created a ClusterLogForwarder custom resource (CR) object. Procedure Delete the Fluentd pods to force them to redeploy by running the following command: USD oc delete pod --selector logging-infra=collector 3.2.2. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true Example Fluentd error message 2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" The error is also visible on the receiving end. 
For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 3.3. Troubleshooting logging alerts You can use the following procedures to troubleshoot logging alerts on your cluster. 3.3.1. Elasticsearch cluster health status is red At least one primary shard and its replicas are not allocated to a node. Use the following procedure to troubleshoot this alert. Tip Some commands in this documentation reference an Elasticsearch pod by using a USDES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster. You can list the available Elasticsearch pods by running the following command: USD oc -n openshift-logging get pods -l component=elasticsearch Choose one of the pods listed and set the USDES_POD_NAME variable, by running the following command: USD export ES_POD_NAME=<elasticsearch_pod_name> You can now use the USDES_POD_NAME variable in commands. Procedure Check the Elasticsearch cluster health and verify that the cluster status is red by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health List the nodes that have joined the cluster by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cat/nodes?v List the Elasticsearch pods and compare them with the nodes in the command output from the step, by running the following command: USD oc -n openshift-logging get pods -l component=elasticsearch If some of the Elasticsearch nodes have not joined the cluster, perform the following steps. 
Confirm that Elasticsearch has an elected master node by running the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cat/master?v Review the pod logs of the elected master node for issues by running the following command and observing the output: USD oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging Review the logs of nodes that have not joined the cluster for issues by running the following command and observing the output: USD oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging If all the nodes have joined the cluster, check if the cluster is in the process of recovering by running the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cat/recovery?active_only=true If there is no command output, the recovery process might be delayed or stalled by pending tasks. Check if there are pending tasks by running the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- health | grep number_of_pending_tasks If there are pending tasks, monitor their status. If their status changes and indicates that the cluster is recovering, continue waiting. The recovery time varies according to the size of the cluster and other factors. Otherwise, if the status of the pending tasks does not change, this indicates that the recovery has stalled. If it seems like the recovery has stalled, check if the cluster.routing.allocation.enable value is set to none , by running the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cluster/settings?pretty If the cluster.routing.allocation.enable value is set to none , set it to all , by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cluster/settings?pretty \ -X PUT -d '{"persistent": {"cluster.routing.allocation.enable":"all"}}' Check if any indices are still red by running the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cat/indices?v If any indices are still red, try to clear them by performing the following steps. Clear the cache by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty Increase the max allocation retries by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name>/_settings?pretty \ -X PUT -d '{"index.allocation.max_retries":10}' Delete all the scroll items by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_search/scroll/_all -X DELETE Increase the timeout by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name>/_settings?pretty \ -X PUT -d '{"index.unassigned.node_left.delayed_timeout":"10m"}' If the preceding steps do not clear the red indices, delete the indices individually. 
Identify the red index name by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cat/indices?v Delete the red index by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_red_index_name> -X DELETE If there are no red indices and the cluster status is red, check for a continuous heavy processing load on a data node. Check if the Elasticsearch JVM Heap usage is high by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_nodes/stats?pretty In the command output, review the node_name.jvm.mem.heap_used_percent field to determine the JVM Heap usage. Check for high CPU utilization. For more information about CPU utilitzation, see the Red Hat OpenShift Service on AWS "Reviewing monitoring dashboards" documentation. Additional resources Reviewing monitoring dashboards Fix a red or yellow cluster status 3.3.2. Elasticsearch cluster health status is yellow Replica shards for at least one primary shard are not allocated to nodes. Increase the node count by adjusting the nodeCount value in the ClusterLogging custom resource (CR). Additional resources Fix a red or yellow cluster status 3.3.3. Elasticsearch node disk low watermark reached Elasticsearch does not allocate shards to nodes that reach the low watermark. Tip Some commands in this documentation reference an Elasticsearch pod by using a USDES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster. You can list the available Elasticsearch pods by running the following command: USD oc -n openshift-logging get pods -l component=elasticsearch Choose one of the pods listed and set the USDES_POD_NAME variable, by running the following command: USD export ES_POD_NAME=<elasticsearch_pod_name> You can now use the USDES_POD_NAME variable in commands. Procedure Identify the node on which Elasticsearch is deployed by running the following command: USD oc -n openshift-logging get po -o wide Check if there are unassigned shards by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cluster/health?pretty | grep unassigned_shards If there are unassigned shards, check the disk space on each node, by running the following command: USD for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \ do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod \ -- df -h /elasticsearch/persistent; done In the command output, check the Use column to determine the used disk percentage on that node. Example output elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent If the used disk percentage is above 85%, the node has exceeded the low watermark, and shards can no longer be allocated to this node. 
To check the current redundancyPolicy , run the following command: USD oc -n openshift-logging get es elasticsearch \ -o jsonpath='{.spec.redundancyPolicy}' If you are using a ClusterLogging resource on your cluster, run the following command: USD oc -n openshift-logging get cl \ -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices Identify an old index that can be deleted. Delete the index by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name> -X DELETE 3.3.4. Elasticsearch node disk high watermark reached Elasticsearch attempts to relocate shards away from a node that has reached the high watermark to a node with low disk usage that has not crossed any watermark threshold limits. To allocate shards to a particular node, you must free up some space on that node. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy. Tip Some commands in this documentation reference an Elasticsearch pod by using a USDES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster. You can list the available Elasticsearch pods by running the following command: USD oc -n openshift-logging get pods -l component=elasticsearch Choose one of the pods listed and set the USDES_POD_NAME variable, by running the following command: USD export ES_POD_NAME=<elasticsearch_pod_name> You can now use the USDES_POD_NAME variable in commands. Procedure Identify the node on which Elasticsearch is deployed by running the following command: USD oc -n openshift-logging get po -o wide Check the disk space on each node: USD for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \ do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod \ -- df -h /elasticsearch/persistent; done Check if the cluster is rebalancing: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cluster/health?pretty | grep relocating_shards If the command output shows relocating shards, the high watermark has been exceeded. The default value of the high watermark is 90%. Increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy. To check the current redundancyPolicy , run the following command: USD oc -n openshift-logging get es elasticsearch \ -o jsonpath='{.spec.redundancyPolicy}' If you are using a ClusterLogging resource on your cluster, run the following command: USD oc -n openshift-logging get cl \ -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change. If the preceding steps do not fix the issue, delete the old indices. 
Check the status of all indices on Elasticsearch by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices Identify an old index that can be deleted. Delete the index by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name> -X DELETE 3.3.5. Elasticsearch node disk flood watermark reached Elasticsearch enforces a read-only index block on every index that has both of these conditions: One or more shards are allocated to the node. One or more disks exceed the flood stage . Use the following procedure to troubleshoot this alert. Tip Some commands in this documentation reference an Elasticsearch pod by using a USDES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster. You can list the available Elasticsearch pods by running the following command: USD oc -n openshift-logging get pods -l component=elasticsearch Choose one of the pods listed and set the USDES_POD_NAME variable, by running the following command: USD export ES_POD_NAME=<elasticsearch_pod_name> You can now use the USDES_POD_NAME variable in commands. Procedure Get the disk space of the Elasticsearch node: USD for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \ do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod \ -- df -h /elasticsearch/persistent; done In the command output, check the Avail column to determine the free disk space on that node. Example output elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent Increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy. To check the current redundancyPolicy , run the following command: USD oc -n openshift-logging get es elasticsearch \ -o jsonpath='{.spec.redundancyPolicy}' If you are using a ClusterLogging resource on your cluster, run the following command: USD oc -n openshift-logging get cl \ -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices Identify an old index that can be deleted. Delete the index by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name> -X DELETE Continue freeing up and monitoring the disk space. 
After the used disk space drops below 90%, unblock writing to this node by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_all/_settings?pretty \ -X PUT -d '{"index.blocks.read_only_allow_delete": null}'
3.3.6. Elasticsearch JVM heap usage is high The Elasticsearch node Java virtual machine (JVM) heap memory used is above 75%. Consider increasing the heap size .
3.3.7. Aggregated logging system CPU is high System CPU usage on the node is high. Check the CPU of the cluster node. Consider allocating more CPU resources to the node.
3.3.8. Elasticsearch process CPU is high Elasticsearch process CPU usage on the node is high. Check the CPU of the cluster node. Consider allocating more CPU resources to the node.
3.3.9. Elasticsearch disk space is running low Elasticsearch is predicted to run out of disk space within the next 6 hours based on current disk usage. Use the following procedure to troubleshoot this alert. Procedure Get the disk space of the Elasticsearch node: USD for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \ do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod \ -- df -h /elasticsearch/persistent; done In the command output, check the Avail column to determine the free disk space on that node. Example output elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent Increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy. To check the current redundancyPolicy , run the following command: USD oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}' If you are using a ClusterLogging resource on your cluster, run the following command: USD oc -n openshift-logging get cl \ -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices Identify an old index that can be deleted. Delete the index by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name> -X DELETE Additional resources Fix a red or yellow cluster status
3.3.10. Elasticsearch FileDescriptor usage is high Based on current usage trends, the predicted number of file descriptors on the node is insufficient. Check the value of max_file_descriptors for each node as described in the Elasticsearch File Descriptors documentation.
3.4. Viewing the status of the Elasticsearch log store You can view the status of the OpenShift Elasticsearch Operator and of a number of Elasticsearch components.
3.4.1. Viewing the status of the Elasticsearch log store You can view the status of the Elasticsearch log store.
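If you only need a quick health summary before working through the full procedure, the health wrapper used in the troubleshooting sections above provides one. This is a convenience sketch and assumes the ES_POD_NAME variable is already set as described earlier:
# Print a short health summary for the Elasticsearch cluster.
oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME -- health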
Prerequisites The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed. Procedure Change to the openshift-logging project by running the following command: USD oc project openshift-logging To view the status: Get the name of the Elasticsearch log store instance by running the following command: USD oc get Elasticsearch Example output NAME AGE elasticsearch 5h9m Get the Elasticsearch log store status by running the following command: USD oc get Elasticsearch <Elasticsearch-instance> -o yaml For example: USD oc get Elasticsearch elasticsearch -n openshift-logging -o yaml The output includes information similar to the following: Example output status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: "" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all 1 In the output, the cluster status fields appear in the status stanza. 2 The status of the Elasticsearch log store: The number of active primary shards. The number of active shards. The number of shards that are initializing. The number of Elasticsearch log store data nodes. The total number of Elasticsearch log store nodes. The number of pending tasks. The Elasticsearch log store status: green , red , yellow . The number of unassigned shards. 3 Any status conditions, if present. The Elasticsearch log store status indicates the reasons from the scheduler if a pod could not be placed. Any events related to the following conditions are shown: Container Waiting for both the Elasticsearch log store and proxy containers. Container Terminated for both the Elasticsearch log store and proxy containers. Pod unschedulable. Also, a condition is shown for a number of issues; see Example condition messages . 4 The Elasticsearch log store nodes in the cluster, with upgradeStatus . 5 The Elasticsearch log store client, data, and master pods in the cluster, listed under failed , notReady , or ready state. 3.4.1.1. Example condition messages The following are examples of some condition messages from the Status section of the Elasticsearch instance. The following status message indicates that a node has exceeded the configured low watermark, and no shard will be allocated to this node. status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: "True" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {} The following status message indicates that a node has exceeded the configured high watermark, and shards will be relocated to other nodes. 
status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: "True" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {} The following status message indicates that the Elasticsearch log store node selector in the custom resource (CR) does not match any nodes in the cluster: status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: "True" type: Unschedulable The following status message indicates that the Elasticsearch log store CR uses a non-existent persistent volume claim (PVC). status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable The following status message indicates that your Elasticsearch log store cluster does not have enough nodes to support the redundancy policy. status: clusterHealth: "" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: "True" type: InvalidRedundancy This status message indicates your cluster has too many control plane nodes: status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters The following status message indicates that Elasticsearch storage does not support the change you tried to make. For example: status: clusterHealth: green conditions: - lastTransitionTime: "2021-05-07T01:05:13Z" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored The reason and type fields specify the type of unsupported change: StorageClassNameChangeIgnored Unsupported change to the storage class name. StorageSizeChangeIgnored Unsupported change the storage size. StorageStructureChangeIgnored Unsupported change between ephemeral and persistent storage structures. Important If you try to configure the ClusterLogging CR to switch from ephemeral to persistent storage, the OpenShift Elasticsearch Operator creates a persistent volume claim (PVC) but does not create a persistent volume (PV). To clear the StorageStructureChangeIgnored status, you must revert the change to the ClusterLogging CR and delete the PVC. 3.4.2. Viewing the status of the log store components You can view the status for a number of the log store components. Elasticsearch indices You can view the status of the Elasticsearch indices. Get the name of an Elasticsearch pod: USD oc get pods --selector component=elasticsearch -o name Example output pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7 Get the status of the indices: USD oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices Example output Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. 
green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0 Log store pods You can view the status of the pods that host the log store. Get the name of a pod: USD oc get pods --selector component=elasticsearch -o name Example output pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7 Get the status of a pod: USD oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw The output includes the following status information: Example output .... Status: Running .... Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 .... Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True .... Events: <none> Log storage pod deployment configuration You can view the status of the log store deployment configuration. Get the name of a deployment configuration: USD oc get deployment --selector component=elasticsearch -o name Example output deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3 Get the deployment configuration status: USD oc describe deployment elasticsearch-cdm-1gon-1 The output includes the following status information: Example output .... Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable .... Events: <none> Log store replica set You can view the status of the log store replica set. Get the name of a replica set: USD oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d Get the status of the replica set: USD oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495 The output includes the following status information: Example output .... Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... Events: <none> 3.4.3. Elasticsearch cluster status A dashboard in the Observe section of the OpenShift Cluster Manager displays the status of the Elasticsearch cluster. 
To get the status of the OpenShift Elasticsearch cluster, visit the dashboard in the Observe section of the OpenShift Cluster Manager at <cluster_url>/monitoring/dashboards/grafana-dashboard-cluster-logging . Elasticsearch status fields eo_elasticsearch_cr_cluster_management_state Shows whether the Elasticsearch cluster is in a managed or unmanaged state. For example: eo_elasticsearch_cr_cluster_management_state{state="managed"} 1 eo_elasticsearch_cr_cluster_management_state{state="unmanaged"} 0 eo_elasticsearch_cr_restart_total Shows the number of times the Elasticsearch nodes have restarted for certificate restarts, rolling restarts, or scheduled restarts. For example: eo_elasticsearch_cr_restart_total{reason="cert_restart"} 1 eo_elasticsearch_cr_restart_total{reason="rolling_restart"} 1 eo_elasticsearch_cr_restart_total{reason="scheduled_restart"} 3 es_index_namespaces_total Shows the total number of Elasticsearch index namespaces. For example: Total number of Namespaces. es_index_namespaces_total 5 es_index_document_count Shows the number of records for each namespace. For example: es_index_document_count{namespace="namespace_1"} 25 es_index_document_count{namespace="namespace_2"} 10 es_index_document_count{namespace="namespace_3"} 5 The "Secret Elasticsearch fields are either missing or empty" message If Elasticsearch is missing the admin-cert , admin-key , logging-es.crt , or logging-es.key files, the dashboard shows a status message similar to the following example: message": "Secret \"elasticsearch\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]", "reason": "Missing Required Secrets",
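If this message appears, you can confirm which fields actually exist in the secret before regenerating it. The following sketch assumes the secret is named elasticsearch in the openshift-logging namespace, as shown in the message above; the jq variant is optional and only needed if you want the bare field names:
# Show the data fields stored in the elasticsearch secret and their sizes.
oc -n openshift-logging describe secret elasticsearch
# Optionally, print only the field names (requires jq).
oc -n openshift-logging get secret elasticsearch -o json | jq -r '.data | keys[]'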
[ "oc project openshift-logging", "oc get clusterlogging instance -o yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: collector-2rhqp: ip-10-0-169-13.ec2.internal collector-6fgjh: ip-10-0-165-244.ec2.internal collector-6l2ff: ip-10-0-128-218.ec2.internal collector-54nx5: ip-10-0-139-30.ec2.internal collector-flpnn: ip-10-0-147-228.ec2.internal collector-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - collector-2rhqp - collector-54nx5 - collector-6fgjh - collector-6l2ff - collector-flpnn - collector-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1", "nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}", "nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}", "Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:", "Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable", "Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:", "oc project openshift-logging", "oc describe deployment cluster-logging-operator", "Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----", "oc get replicaset", "NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m", "oc describe replicaset cluster-logging-operator-574b8987df", "Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----", "oc delete pod --selector logging-infra=collector", "\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}", "429 Too Many Requests Ingestion rate limit exceeded", "2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true", "2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"", "level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2", "oc -n openshift-logging get pods -l component=elasticsearch", "export ES_POD_NAME=<elasticsearch_pod_name>", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/nodes?v", "oc -n openshift-logging get pods -l component=elasticsearch", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/master?v", "oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging", "oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/recovery?active_only=true", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health | grep number_of_pending_tasks", "oc exec -n openshift-logging -c 
elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_search/scroll/_all -X DELETE", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_red_index_name> -X DELETE", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_nodes/stats?pretty", "oc -n openshift-logging get pods -l component=elasticsearch", "export ES_POD_NAME=<elasticsearch_pod_name>", "oc -n openshift-logging get po -o wide", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep unassigned_shards", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent", "oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE", "oc -n openshift-logging get pods -l component=elasticsearch", "export ES_POD_NAME=<elasticsearch_pod_name>", "oc -n openshift-logging get po -o wide", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep relocating_shards", "oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE", "oc -n openshift-logging get 
pods -l component=elasticsearch", "export ES_POD_NAME=<elasticsearch_pod_name>", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent", "oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent", "oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices", "oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE", "oc project openshift-logging", "oc get Elasticsearch", "NAME AGE elasticsearch 5h9m", "oc get Elasticsearch <Elasticsearch-instance> -o yaml", "oc get Elasticsearch elasticsearch -n openshift-logging -o yaml", "status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all", 
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}", "status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}", "status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable", "status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable", "status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy", "status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters", "status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored", "oc get pods --selector component=elasticsearch -o name", "pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7", "oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices", "Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0", "oc get pods --selector component=elasticsearch -o name", "pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7", "oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw", ". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . 
Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . Events: <none>", "oc get deployment --selector component=elasticsearch -o name", "deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3", "oc describe deployment elasticsearch-cdm-1gon-1", ". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>", "oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d", "oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495", ". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>", "eo_elasticsearch_cr_cluster_management_state{state=\"managed\"} 1 eo_elasticsearch_cr_cluster_management_state{state=\"unmanaged\"} 0", "eo_elasticsearch_cr_restart_total{reason=\"cert_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"rolling_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"scheduled_restart\"} 3", "Total number of Namespaces. es_index_namespaces_total 5", "es_index_document_count{namespace=\"namespace_1\"} 25 es_index_document_count{namespace=\"namespace_2\"} 10 es_index_document_count{namespace=\"namespace_3\"} 5", "message\": \"Secret \\\"elasticsearch\\\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]\", \"reason\": \"Missing Required Secrets\"," ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/troubleshooting-logging
Chapter 7. Infrastructure requirements
Chapter 7. Infrastructure requirements
7.1. Platform requirements Red Hat OpenShift Data Foundation 4.14 is supported only on OpenShift Container Platform version 4.14 and its minor versions. Bug fixes for this version of Red Hat OpenShift Data Foundation will be released as bug fix versions. For more details, see the Red Hat OpenShift Container Platform Life Cycle Policy . For external cluster subscription requirements, see the Red Hat Knowledgebase article OpenShift Data Foundation Subscription Guide . For a complete list of supported platform versions, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker .
7.1.1. Amazon EC2 Supports internal Red Hat OpenShift Data Foundation clusters only. An Internal cluster must meet both the storage device requirements and have a storage class that provides EBS storage via the aws-ebs provisioner. OpenShift Data Foundation supports gp2-csi and gp3-csi drivers that were introduced by Amazon Web Services (AWS). These drivers offer better storage expansion capabilities and a reduced monthly price point ( gp3-csi ). You can now select the new drivers when selecting your storage class. If high throughput is required, it is recommended to use gp3-csi when deploying OpenShift Data Foundation. If you need a high number of input/output operations per second (IOPS), the recommended EC2 instance types are D2 or D3 .
7.1.2. Bare Metal Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator.
7.1.3. VMware vSphere Supports internal clusters and consuming external clusters. Recommended versions: vSphere 6.7, Update 2 or later; vSphere 7.0 or later. For more details, see the VMware vSphere infrastructure requirements . Note If VMware ESXi does not recognize its devices as flash, mark them as flash devices. Before Red Hat OpenShift Data Foundation deployment, refer to Mark Storage Devices as Flash . Additionally, an Internal cluster must meet both the storage device requirements and have a storage class providing either a vSAN or VMFS datastore via the vsphere-volume provisioner, or VMDK, RDM, or DirectPath storage devices via the Local Storage Operator.
7.1.4. Microsoft Azure Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides an Azure disk via the azure-disk provisioner.
7.1.5. Google Cloud Platform Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides a GCE Persistent Disk via the gce-pd provisioner.
7.1.6. Red Hat OpenStack Platform [Technology Preview] Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides a standard disk via the Cinder provisioner.
7.1.7. IBM Power Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An Internal cluster must meet both the storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator.
7.1.8. IBM Z and IBM(R) LinuxONE Supports internal Red Hat OpenShift Data Foundation clusters. Also, supports external mode where Red Hat Ceph Storage is running on x86.
An Internal cluster must meet both the storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.2. External mode requirement 7.2.1. Red Hat Ceph Storage To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Select Service Type as ODF as Self-Managed Service . Select appropriate Version from the drop down. On the Versions tab, click the Supported RHCS Compatibility tab. For instructions regarding how to install a RHCS cluster, see the installation guide . 7.2.2. IBM FlashSystem To use IBM FlashSystem as a pluggable external storage on other providers, you need to first deploy it before you can deploy OpenShift Data Foundation, which would use the IBM FlashSystem storage class as a backing storage. For the latest supported FlashSystem storage systems and versions, see IBM ODF FlashSystem driver documentation . For instructions on how to deploy OpenShift Data Foundation, see Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage . 7.3. Resource requirements Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.1. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 30 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices External 4 CPU (logical) 16 GiB memory Not applicable Example: For a 3 node cluster in an internal mode deployment with a single device set, a minimum of 3 x 10 = 30 units of CPU are required. For more information, see Chapter 6, Subscriptions and CPU units . For additional guidance with designing your Red Hat OpenShift Data Foundation cluster, see the ODF Sizing Tool . CPU units In this section, 1 CPU Unit maps to the Kubernetes concept of 1 CPU unit. 1 unit of CPU is equivalent to 1 core for non-hyperthreaded CPUs. 2 units of CPU are equivalent to 1 core for hyperthreaded CPUs. Red Hat OpenShift Data Foundation core-based subscriptions always come in pairs (2 cores). Table 7.2. Aggregate minimum resource requirements for IBM Power Deployment Mode Base services Internal 48 CPU (logical) 192 GiB memory 3 storage devices, each with additional 500GB of disk External 24 CPU (logical) 48 GiB memory Example: For a 3 node cluster in an internal-attached devices mode deployment, a minimum of 3 x 16 = 48 units of CPU and 3 x 64 = 192 GB of memory is required. 7.3.1. Resource requirements for IBM Z and IBM LinuxONE infrastructure Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by kubernetes on OpenShift Container Platform nodes . 
Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules .
Table 7.3. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only (IBM Z and IBM(R) LinuxONE) Deployment Mode Base services Additional device Set IBM Z and IBM(R) LinuxONE minimum hardware requirements Internal 30 CPU (logical) 3 nodes with 10 CPUs (logical) each 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices 1 IFL External 4 CPU (logical) 16 GiB memory Not applicable Not applicable CPU Is the number of virtual cores defined in the hypervisor, IBM Z/VM, Kernel Virtual Machine (KVM), or both. IFL (Integrated Facility for Linux) Is the physical core for IBM Z and IBM(R) LinuxONE. Minimum system environment In order to operate a minimal cluster with 1 logical partition (LPAR), one additional IFL is required on top of the 6 IFLs. OpenShift Container Platform consumes these IFLs .
7.3.2. Minimum deployment resource requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.4. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Internal 24 CPU (logical) 72 GiB memory 3 storage devices If you want to add additional device sets, we recommend converting your minimum deployment to standard deployment.
7.3.3. Compact deployment resource requirements Red Hat OpenShift Data Foundation can be installed on a three-node OpenShift compact bare metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators, or workloads that are running on these nodes. Table 7.5. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 24 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices To configure OpenShift Container Platform on a compact bare metal cluster, see Configuring a three-node cluster and Delivering a Three-node Architecture for Edge Deployments .
7.3.4. Resource requirements for MCG only deployment An OpenShift Data Foundation cluster deployed only with the Multicloud Object Gateway (MCG) component provides flexibility in deployment and helps to reduce resource consumption. Table 7.6. Aggregate resource requirements for MCG only deployment Deployment Mode Core Database (DB) Endpoint Internal 1 CPU 4 GiB memory 0.5 CPU 4 GiB memory 1 CPU 2 GiB memory Note The default auto scale is between 1 - 2.
7.3.5. Resource requirements for using Network File System You can create exports using Network File System (NFS) that can then be accessed externally from the OpenShift cluster. If you plan to use this feature, the NFS service consumes 3 CPUs and 8Gi of RAM. NFS is optional and is disabled by default. The NFS volume can be accessed in two ways: In-cluster: by an application pod inside of the OpenShift cluster. Out of cluster: from outside of the OpenShift cluster. For more information about the NFS feature, see Creating exports using NFS 7.4.
Pod placement rules Kubernetes is responsible for pod placement based on declarative placement rules. The Red Hat OpenShift Data Foundation base service placement rules for Internal cluster can be summarized as follows: Nodes are labeled with the cluster.ocs.openshift.io/openshift-storage key Nodes are sorted into pseudo failure domains if none exist Components requiring high availability are spread across failure domains A storage device must be accessible in each failure domain This leads to the requirement that there be at least three nodes, and that nodes be in three distinct rack or zone failure domains in the case of pre-existing topology labels . For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments. 7.5. Storage device requirements Use this section to understand the different storage capacity requirements that you can consider when planning internal mode deployments and upgrades. We generally recommend 12 devices or less per node. This recommendation ensures both that nodes stay below cloud provider dynamic storage device attachment limits, and to limit the recovery time after node failures with local storage devices. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Storage nodes should have at least two disks, one for the operating system and the remaining disks for OpenShift Data Foundation components. Note You can expand the storage capacity only in the increment of the capacity selected at the time of installation. 7.5.1. Dynamic storage devices Red Hat OpenShift Data Foundation permits the selection of either 0.5 TiB, 2 TiB or 4 TiB capacities as the request size for dynamic storage device sizes. The number of dynamic storage devices that can run per node is a function of the node size, underlying provisioner limits and resource requirements . 7.5.2. Local storage devices For local storage deployment, any disk size of 16 TiB or less can be used, and all disks should be of the same size and type. The number of local storage devices that can run per node is a function of the node size and resource requirements . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Note Disk partitioning is not supported. 7.5.3. Capacity planning Always ensure that available storage capacity stays ahead of consumption. Recovery is difficult if available storage capacity is completely exhausted, and requires more intervention than simply adding capacity or deleting or migrating content. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. When you get to 75% (near-full), either free up space or expand the cluster. When you get the 85% (full) alert, it indicates that you have run out of storage space completely and cannot free up space using standard commands. At this point, contact Red Hat Customer Support . The following tables show example node configurations for Red Hat OpenShift Data Foundation with dynamic storage devices. Table 7.7. 
Example initial configurations with 3 nodes
Storage Device size | Storage Devices per node | Total capacity | Usable storage capacity
0.5 TiB | 1 | 1.5 TiB | 0.5 TiB
2 TiB | 1 | 6 TiB | 2 TiB
4 TiB | 1 | 12 TiB | 4 TiB
Table 7.8. Example of expanded configurations with 30 nodes (N)
Storage Device size (D) | Storage Devices per node (M) | Total capacity (D * M * N) | Usable storage capacity (D*M*N/3)
0.5 TiB | 3 | 45 TiB | 15 TiB
2 TiB | 6 | 360 TiB | 120 TiB
4 TiB | 9 | 1080 TiB | 360 TiB
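The usable capacity values in these tables follow from the device size (D), the devices per node (M), the node count (N), and the division by three implied by the usable-capacity formula (D*M*N/3). As a hypothetical planning aid only, the arithmetic can be sketched in a shell one-liner; the D, M, and N values below are placeholders taken from the second row of Table 7.8:
# Sketch: total capacity = D * M * N; usable capacity = total / 3.
D=2; M=6; N=30
awk -v d="$D" -v m="$M" -v n="$N" 'BEGIN { t = d * m * n; print "Total: " t " TiB, usable: " t / 3 " TiB" }'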
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/planning_your_deployment/infrastructure-requirements_rhodf
Chapter 3. Deployment methods
Chapter 3. Deployment methods You can deploy Streams for Apache Kafka on OpenShift 4.14 and later using one of the following methods:
Installation method | Description
Deployment files (YAML files) | Download the deployment files to manually deploy Streams for Apache Kafka components. For the greatest flexibility, choose this method.
OperatorHub | Deploy the Streams for Apache Kafka Cluster operator through the OperatorHub, then deploy Streams for Apache Kafka components using custom resources. This method provides a standard configuration and allows you to take advantage of automatic updates.
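For illustration only, an installation from the deployment files typically comes down to applying the Cluster Operator resources with oc and then creating custom resources for the Kafka components. The directory names and file names below are placeholders based on a typical release archive, not values documented in this chapter; check the downloaded files for the actual layout:
# Sketch: apply the Cluster Operator deployment files (placeholder paths and namespace).
oc create -f install/cluster-operator -n <target_namespace>
# Then deploy Kafka components by creating custom resources, for example one of the provided examples.
oc apply -f examples/kafka/kafka-ephemeral.yaml -n <target_namespace>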
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/con-streams-installation-methods-str
Installing Identity Management
Installing Identity Management Red Hat Enterprise Linux 8 Methods of installing IdM servers and clients Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/index
Appendix F. Kafka Connect configuration parameters
Appendix F. Kafka Connect configuration parameters config.storage.topic Type: string Importance: high The name of the Kafka topic where connector configurations are stored. group.id Type: string Importance: high A unique string that identifies the Connect cluster group this worker belongs to. key.converter Type: class Importance: high Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. offset.storage.topic Type: string Importance: high The name of the Kafka topic where connector offsets are stored. status.storage.topic Type: string Importance: high The name of the Kafka topic where connector and task status are stored. value.converter Type: class Importance: high Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. bootstrap.servers Type: list Default: localhost:9092 Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). heartbeat.interval.ms Type: int Default: 3000 (3 seconds) Importance: high The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms , but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. rebalance.timeout.ms Type: int Default: 60000 (1 minute) Importance: high The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures. session.timeout.ms Type: int Default: 10000 (10 seconds) Importance: high The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms . ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file. 
This is optional for client. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled. client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [default, use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . If set to default (deprecated), attempt to connect to the first IP address returned by the lookup, even if the lookup returns multiple IP addresses. connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. connector.client.config.override.policy Type: string Default: None Importance: medium Class name or alias of implementation of ConnectorClientConfigOverridePolicy . Defines what client configurations can be overriden by the connector. The default implementation is None . The other possible policies in the framework include All and Principal . receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [0,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 40000 (40 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: 'loginModuleClass controlFlag (optionName=optionValue)*;'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. 
This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. security.protocol Type: string Default: PLAINTEXT Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [0,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. worker.sync.timeout.ms Type: int Default: 3000 (3 seconds) Importance: medium When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining. 
worker.unsync.backoff.ms Type: int Default: 300000 (5 minutes) Importance: medium When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining. access.control.allow.methods Type: string Default: "" Importance: low Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD. access.control.allow.origin Type: string Default: "" Importance: low Value to set the Access-Control-Allow-Origin header to for REST API requests.To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API. admin.listeners Type: list Default: null Valid Values: org.apache.kafka.connect.runtime.WorkerConfigUSDAdminListenersValidator@6fffcba5 Importance: low List of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank string will disable this feature. The default behavior is to use the regular listener (specified by the 'listeners' property). client.id Type: string Default: "" Importance: low An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. config.providers Type: list Default: "" Importance: low Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvider allows you to replace variable references in connector configurations, such as for externalized secrets. config.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the configuration storage topic. connect.protocol Type: string Default: sessioned Valid Values: [eager, compatible, sessioned] Importance: low Compatibility mode for Kafka Connect Protocol. header.converter Type: class Default: org.apache.kafka.connect.storage.SimpleHeaderConverter Importance: low HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas. inter.worker.key.generation.algorithm Type: string Default: HmacSHA256 Valid Values: Any KeyGenerator algorithm supported by the worker JVM Importance: low The algorithm to use for generating internal request keys. inter.worker.key.size Type: int Default: null Importance: low The size of the key to use for signing internal requests, in bits. If null, the default key size for the key generation algorithm will be used. inter.worker.key.ttl.ms Type: int Default: 3600000 (1 hour) Valid Values: [0,... ,2147483647] Importance: low The TTL of generated session keys used for internal request validation (in milliseconds). 
inter.worker.signature.algorithm Type: string Default: HmacSHA256 Valid Values: Any MAC algorithm supported by the worker JVM Importance: low The algorithm used to sign internal requests. inter.worker.verification.algorithms Type: list Default: HmacSHA256 Valid Values: A list of one or more MAC algorithms, each supported by the worker JVM Importance: low A list of permitted algorithms for verifying internal requests. internal.key.converter Type: class Default: org.apache.kafka.connect.json.JsonConverter Importance: low Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Deprecated; will be removed in an upcoming version. internal.value.converter Type: class Default: org.apache.kafka.connect.json.JsonConverter Importance: low Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Deprecated; will be removed in an upcoming version. listeners Type: list Default: null Importance: low List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metric.reporters Type: list Default: "" Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. offset.flush.interval.ms Type: long Default: 60000 (1 minute) Importance: low Interval at which to try committing offsets for tasks. 
offset.flush.timeout.ms Type: long Default: 5000 (5 seconds) Importance: low Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. offset.storage.partitions Type: int Default: 25 Valid Values: Positive number, or -1 to use the broker's default Importance: low The number of partitions used when creating the offset storage topic. offset.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the offset storage topic. plugin.path Type: list Default: null Importance: low List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top level directories that include any combination of: a) directories immediately containing jars with plugins and their dependencies b) uber-jars with plugins and their dependencies c) directories immediately containing the package directory structure of classes of plugins and their dependencies Note: symlinks will be followed to discover dependencies or plugins. Examples: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors Do not use config provider variables in this property, since the raw path is used by the worker's scanner before config providers are initialized and used to replace variables. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. response.http.headers.config Type: string Default: "" Valid Values: Comma-separated header rules, where each header rule is of the form '[action] [header name]:[header value]' and optionally surrounded by double quotes if any part of a header rule contains a comma Importance: low Rules for REST API HTTP response headers. rest.advertised.host.name Type: string Default: null Importance: low If this is set, this is the hostname that will be given out to other workers to connect to. rest.advertised.listener Type: string Default: null Importance: low Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use. rest.advertised.port Type: int Default: null Importance: low If this is set, this is the port that will be given out to other workers to connect to. rest.extension.classes Type: list Default: "" Importance: low Comma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing the interface ConnectRestExtension allows you to inject into Connect's REST API user defined resources like filters. Typically used to add custom capability like logging, security, etc. rest.host.name Type: string Default: null Importance: low Hostname for the REST API. 
If this is set, it will only bind to this interface. rest.port Type: int Default: 8083 Importance: low Port for the REST API to listen on. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. scheduled.rebalance.max.delay.ms Type: int Default: 300000 (5 minutes) Valid Values: [0,... ,2147483647] Importance: low The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. 
This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.client.auth Type: string Default: none Importance: low Configures kafka broker to request client authentication. The following settings are common: ssl.client.auth=required If set to required client authentication is required. ssl.client.auth=requested This means client authentication is optional. unlike requested , if this option is set client can choose not to provide authentication information about itself ssl.client.auth=none This means client authentication is not needed. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. status.storage.partitions Type: int Default: 5 Valid Values: Positive number, or -1 to use the broker's default Importance: low The number of partitions used when creating the status storage topic. status.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the status storage topic. task.shutdown.graceful.timeout.ms Type: long Default: 5000 (5 seconds) Importance: low Amount of time to wait for tasks to shutdown gracefully. This is the total amount of time, not per task. All task have shutdown triggered, then they are waited on sequentially. topic.creation.enable Type: boolean Default: true Importance: low Whether to allow automatic creation of topics used by source connectors, when source connectors are configured with topic.creation. properties. Each task will use an admin client to create its topics and will not depend on the Kafka brokers to create topics automatically. topic.tracking.allow.reset Type: boolean Default: true Importance: low If set to true, it allows user requests to reset the set of active topics per connector. topic.tracking.enable Type: boolean Default: true Importance: low Enable tracking the set of active topics per connector during runtime.
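For orientation, the following is a minimal, hypothetical sketch of a distributed worker properties file that combines the high-importance settings listed above; the bootstrap address, group ID, and topic names are placeholders rather than values mandated by this appendix, and any parameter omitted here keeps the default documented above.

# connect-distributed.properties - illustrative values only
bootstrap.servers=my-cluster-kafka-bootstrap:9092
group.id=connect-cluster

# Converters for record keys and values
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

# Internal topics for connector configurations, offsets, and status
config.storage.topic=connect-cluster-configs
offset.storage.topic=connect-cluster-offsets
status.storage.topic=connect-cluster-status

# Replication factors for the internal topics (-1 uses the broker default)
config.storage.replication.factor=-1
offset.storage.replication.factor=-1
status.storage.replication.factor=-1

# REST API listener
listeners=HTTP://0.0.0.0:8083

A worker started with such a file, for example with bin/connect-distributed.sh connect-distributed.properties, creates the three internal topics if they do not already exist.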
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_rhel/kafka-connect-configuration-parameters-str
function::inet_get_ip_source
function::inet_get_ip_source Name function::inet_get_ip_source - Provide IP source address string for a kernel socket Synopsis Arguments sock pointer to the kernel socket
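As a hedged usage illustration (not part of the tapset reference itself), the fragment below prints the source address of sockets returned by the kernel accept path; the probe point and the companion function inet_get_local_port are assumptions drawn from common tapset examples, not requirements of inet_get_ip_source.

# Report the source IP and local port of each accepted TCP connection
probe kernel.function("inet_csk_accept").return {
    sock = $return
    if (sock != 0)
        printf("connection from %s to local port %d (%s)\n",
               inet_get_ip_source(sock), inet_get_local_port(sock), execname())
}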
[ "inet_get_ip_source:string(sock:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-inet-get-ip-source
Installing on IBM Power
Installing on IBM Power OpenShift Container Platform 4.12 Installing OpenShift Container Platform on IBM Power Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_ibm_power/index
Chapter 3. Recovering multiple servers with replication
Chapter 3. Recovering multiple servers with replication If multiple servers are lost at the same time, determine if the environment can be rebuilt by checking which of the following scenarios applies to your situation. 3.1. Recovering from losing multiple servers in a CA-less deployment Servers in a CA-less deployment are all considered equal, so you can rebuild the environment by removing and replacing lost replicas in any order. Prerequisites Your deployment uses an external Certificate Authority (CA). Procedure See Recovering from losing a regular replica . 3.2. Recovering from losing multiple servers when the CA renewal server is unharmed If the CA renewal server is intact, you can replace other servers in any order. Prerequisites Your deployment uses the IdM internal Certificate Authority (CA). Procedure See Recovering from losing a regular replica . 3.3. Recovering from losing the CA renewal server and other servers If you lose the CA renewal server and other servers, promote another CA server to the CA renewal server role before replacing other replicas. Prerequisites Your deployment uses the IdM internal Certificate Authority (CA). At least one CA replica is unharmed. Procedure Promote another CA replica to fulfill the CA renewal server role. See Recovering from losing the CA renewal server . Replace all other lost replicas. See Recovering from losing a regular replica .
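As a hedged illustration of the promotion step in the last scenario: on current IdM versions the CA renewal server role can be reassigned with the ipa config-mod command, where the host name below is a placeholder for one of your surviving CA replicas; the linked procedures remain the authoritative steps.

# Promote a surviving CA replica to the CA renewal server role
ipa config-mod --ca-renewal-master-server=replica2.idm.example.com

# Verify which server now holds the role
ipa config-show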
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/performing_disaster_recovery_with_identity_management/recovering-multiple-servers-with-replication_performing-disaster-recovery
22.14.2. Configure the Firewall Using the Command Line
22.14.2. Configure the Firewall Using the Command Line To enable NTP to pass through the firewall using the command line, issue the following command as root : Note that this will restart the firewall as long as it has not been disabled with the --disabled option. Active connections will be terminated and time out on the initiating machine. When preparing a configuration file for multiple installations using administration tools, it is useful to edit the firewall configuration file directly. Note that any mistakes in the configuration file could have unexpected consequences, cause an error, and prevent the firewall setting from being applied. Therefore, check the /etc/sysconfig/system-config-firewall file thoroughly after editing. To enable NTP to pass through the firewall, by editing the configuration file, become the root user and add the following line to /etc/sysconfig/system-config-firewall : Note that these changes will not take effect until the firewall is reloaded or the system restarted. 22.14.2.1. Checking Network Access for Incoming NTP Using the Command Line To check if the firewall is configured to allow incoming NTP traffic for clients using the command line, issue the following command as root: In this example taken from a default installation, the firewall is enabled but NTP has not been allowed to pass through. Once it is enabled, the following line appears as output in addition to the lines shown above: To check if the firewall is currently allowing incoming NTP traffic for clients, issue the following command as root :
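For illustration, assuming the default file contents shown in the command listing for this section, the edited /etc/sysconfig/system-config-firewall would read as follows once the NTP port entry has been added; your file may list additional services or ports.

--enabled
--service=ssh
--port=123:udp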
[ "~]# lokkit --port=123:udp --update", "--port=123:udp", "~]# less /etc/sysconfig/system-config-firewall Configuration file for system-config-firewall --enabled --service=ssh", "--port=123:udp", "~]# iptables -L -n | grep 'udp.*123' ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:123" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-configure_the_firewall_using_the_cli
Chapter 3. Integrating RHOSO networking services
Chapter 3. Integrating RHOSO networking services Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Red Hat only certifies Red Hat OpenStack Services on OpenShift (RHOSO) Networking drivers that are distributed by Red Hat. Conversely, Red Hat does not certify drivers that are distributed directly by the Partner. To integrate a Networking driver with RHOSO, you must perform the following actions: Build a neutron-api container with the Networking driver. Configure the Networking driver in the neutron-api component. Configure additional software dependencies, such as agents on the External DataPlane Nodes (EDPM), that are required by the driver. Access extra files that are required by the driver. 3.1. Configure the networking driver Red Hat OpenStack Services on OpenShift (RHOSO) uses OpenShift custom resource definitions (CRDs), which you deploy by using OpenStackControlPlane , OpenStackDataPlaneDeployment , and OpenStackDataPlaneNodeSet custom resources (CR). The OpenStackControlPlane CR includes specification templates that govern the openstack-neutron API service deployment, which include sections for configuring networking drivers. The OpenStackDataPlaneNodeSet CR includes specification templates that govern components that are deployed on the EDPM nodes. For information about configuring and deploying the openstack-neutron service, see the Configuring networking services guide . 3.2. Prepare neutron-api component with the new networking driver Red Hat OpenStack Services on OpenShift (RHOSO) openstack-neutron services execute in Linux containers that are built by Red Hat. These container images include the "in-tree" networking drivers such as ML2/OVN. You must prepare new container images to use Partner networking drivers. Partners must provide a container image that adds an additional layer on top of Red Hat's RHOSO container image. Partner container images for RHOSO are similar to a Partner's container images for director-based RHOSP. The purpose of providing a Partner container image is to provide software dependencies required by the Partner's driver. Partners are responsible for generating their container images, and the image has to go through the container image certification procedure before the Red Hat OpenStack certification. The container image certification is separate from Red Hat OpenStack certification and is a requirement for inclusion in the Red Hat Ecosystem Catalog . After a Partner's networking driver has passed Red Hat OpenStack Certification, the Partner is responsible for generating a certified container image for every subsequent minor update to the RHOSO release. For each minor RHOSO 18 update, Partners must generate an updated container image for the updated release, and publish the updated container image in the Red Hat Ecosystem Catalog. Container images for older RHOSO 18 minor updates must remain in the Red Hat Ecosystem Catalog. This ensures that customers that are not using the latest RHOSO release can still access the Partner's container image that was built for their RHOSO version. 3.2.1. Building partner container images for networking services A Partner must provide a Red Hat certified container image. A neutron-server service includes "in-tree" drivers for the OVN back end and many service plugins. 
Partners must provide a neutron-server container image to layer networking driver software on top of the RHOSO neutron-server container image. Note Depending on your plug-in or driver, you might also need to deploy additional services or agents on the data plane or the control plane nodes. For the control plane you can add a new OpenShift operator using operator-sdk , and for the data plane you can add a new OpenStackDataPlaneService . You can build the OpenStackDataPlaneService service on official Neutron agent images and edpm-ansible roles. You are responsible for the stability of the interface for new custom services. Procedure Create a Containerfile for generating the container image: The following example shows a sample Containerfile or Dockerfile for generating a neutron-server container image that includes external software dependencies that are required by a Partner's openstack-neutron driver. The example can be adapted to generate any other neutron-related container image that includes external software dependencies that are required by a Partner's openstack-neutron driver. 1 Use the FROM clause to specify the RHOSO base image, which in this example is the neutron-server service. The 18.0.1 tag specifies the release. To generate an image based on a specific minor release, modify the tag to specify that release, for example 18.0.0, or openstack-neutron-server-rhel9:*18.0.0*. For RHOSO 18 GA, use the URL: registry.redhat.io/rhoso/openstack-neutron-server-rhel9:18.0. 2 The labels in the sample Containerfile override the corresponding labels in the base image to uniquely identify the Partner's image. 3 You can install the software dependencies by this method, or the method at 4, 5, or 6. 4 You can install the software dependencies by this method, or the method at 3, 5, or 6. 5 You can install the software dependencies by this method, or the method at 3, 4, or 6. 6 You can install the software dependencies by this method, or the method at 3, 4, or 5. Build, tag, and upload the container image. You can use the podman build or buildah build commands to build the container image. For more information on how Partners chose a registry and provide an access token to the registry for the certification, see the Red Hat Software Certification Workflow Guide . Tag the image to match the corresponding RHOSO 18 base image. For example, when the base image is version 18.0.0, the Partner's image is also tagged as version 18.0.0. You can also use the above example procedure with the other neutron services. Ensure that you use the appropriate RHOSO openstack-neutron base image in place of the openstack-neutron-server base image. Certify and publish the container image: For information on how to certify the container image, see Red Hat Enterprise Linux Software Certification Policy Guide and Red Hat Software Certification Workflow Guide . You can publish container images in the Red Hat Ecosystem Catalog . 3.2.2. Maintain partner container images and image tags When a Partner certifies their networking solution that includes a container image, then the Partner is responsible for rebuilding that image every time the underlying Red Hat OpenStack Services on OpenShift (RHOSO) container image changes. The Partner must rebuild the container image: With every RHOSO maintenance release. When RHOSO container images are updated to address a CVE. For example, if a Partner certified their solution against RHOSO 18.0.1, the Partner must add two tags to the container image: 18.0.1 to indicate the specific release. 
18.0 to indicate this is the latest version associated with RHOSO 18. When RHOSO 18.0.2 releases, the Partner must rebuild their image and update the images and tags: The tag for the new image is 18.0.2. The older 18.0.1 image must remain in the Red Hat Ecosystem Catalog. Partners must not remove old images. Remove the 18.0 tag from the older 18.0.1 image, and add it to the new 18.0.2 image. 3.2.3. Deploy partner container images With Red Hat OpenStack Services on OpenShift (RHOSO), you can use the OpenStackVersion custom resource definition (CRD) to override the container image for any service. In the following example, the CRD configures a custom image for the neutron-server to use the container image of a Partner named "PartnerX". 3.3. Set custom configuration for the networking driver You can use the OpenStackControlPlane custom resource definition (CRD) extraMounts feature to provide files to the openstack-neutron networking service. One example is if a Partner's openstack-neutron driver requires an additional configuration file containing authentication credentials to enable access to the Partner's back-end networking devices. You can store the contents of the file in a Kubernetes secret, which you can create from a YAML file: apiVersion: v1 kind: Secret metadata: name: cinder-volume-example-config 1 type: Opaque stringData: partner_config.ini: | 2 example_credentials=example 3 1 The secret name is arbitrary, but this example includes the networking service and the Partner's name. 2 The name of the file required by the example networking driver is partner_config.ini . 3 Example ini file data. The following example shows an extraMounts entry in the neutron section of the OpenStackControlPlane CR to mount the config.ini into the neutron-server pod. apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: neutron: template: databaseAccount: neutron databaseInstance: openstack memcachedInstance: memcached networkAttachments: - internalapi passwordSelectors: service: NeutronPassword rabbitMqClusterName: rabbitmq replicas: 1 secret: osp-secret serviceUser: neutron ml2MechanismDrivers: - partner-mech-driver 1 customServiceConfig: | 2 [example] foo=bar extraMounts: 3 - extraVol: - mounts: - name: partner-config mountPath: /etc/neutron/neutron.conf.d/partner_config.ini 4 subPath: partner_config.ini 5 readOnly: true volumes: - name: partner-config secret: secretName: neutron-server-partnerX-config 6 1 In a deployment where the Partner's networking driver integrates with the ML2 networking by using a custom mechanism driver, you can use this section to configure additional mechanism drivers. 2 Use this section to set the neutron-server configuration of the Partner driver. You can use this section to provide custom configuration in the CR. You should use secrets to provide information such as credentials. Note that secrets are stored unencrypted by default in the Kubernetes API server. For more information about Kubernetes secrets, see kubernetes documentation . 3 Use this section to set the extraMounts configuration for the neutron section of the OpenStackControlPlane . 4 Use this section to set the mount point where the partner_config.ini file appears in the neutron-server pod . 5 Use this section to set the subPath to specify the partner_config.ini filename. This is necessary to mount a single file in the /etc/neutron/neutron.conf.d directory. 6 The secretName value matches the name of the secret that you created previously. 3.4. 
Deploy the networking driver agent on the EDPM nodes If your networking solution requires a specific agent running on the External DataPlane Nodes (EDPM), you can deploy it on such nodes by using a custom OpenStackDataPlaneService custom resource (CR) and adding the agent to the OpenStackDataPlaneNodeSet CR. If your networking solution requires a specific agent running on the Red Hat OpenShift Container Platform (RHOCP) nodes with the neutron-server , you can add a custom Kubernetes operator by using operator-sdk . Ensure that you provide specific custom resource definitions (CRDs) for the new operator that are not integrated with the Red Hat OpenStack Services on OpenShift (RHOSO) openstack-operator. 3.4.1. Building partner ansible-runner image You can build a Partner ansible-runner image to streamline the Ansible tasks that are required to install and configure the Networking agents. Procedure Create a Containerfile for generating the container image. The following example shows a sample Containerfile or Dockerfile for generating an ansibleee-runner container image that includes additional roles and playbooks that are required by a Partner's OpenStackDataPlaneService . 1 Add the Partner's role to the container image. 2 Add the playbook that runs the Partner's role to the container image. This step is optional. You can pass playbook content to the service directly in the OpenStackDataPlaneService file. For more information, see Customizing the data plane in Customizing the Red Hat OpenStack Services on OpenShift deployment . Build, tag, and upload the container image, as illustrated in the example commands below. You can use the podman build or buildah build commands to build the container image. For more information on how to choose a registry and provide an access token to the registry for the certification, see the Red Hat Software Certification Workflow Guide . Tag the image to match the corresponding RHOSO 18 base image. For example, when the base image is version 18.0, the Partner's image is also tagged as version 18.0. Certify and publish the container image: For information on how to certify the container image, see Red Hat Enterprise Linux Software Certification Policy Guide and Red Hat Software Certification Workflow Guide . You can publish container images in the Red Hat Ecosystem Catalog . 3.4.2. Deploy the partner solution on the EDPM nodes The following example shows a custom service definition (CRD) that you might use to deploy the Partner solution: 1 The name of the custom service, which is used in the OpenStackDataPlaneNodeSet CR to add the service to the service list. 2 The playbook that runs as part of the custom service deployment. It must be available in the custom ansibleee image. 3 The custom container image that the ansible-runner execution environment uses to execute Ansible. For more information about how to define a custom service, see Customizing the data plane in Customizing the Red Hat OpenStack Services on OpenShift deployment . The following example shows how to enable a custom service in the EDPM NodeSet. To deploy a custom service on the EDPM nodes, you must include it in the services list in the OpenStackDataPlaneNodeSet CR: 1 The name of the service matches the service created previously. Add the service name in the order of execution relative to the other services. This example deploys the neutron-custom-partner-service after run-os and before the libvirt service. This order is important for dependencies because the services are executed in the order in which they appear in the list.
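As a hedged illustration of the build, tag, and upload step referenced in the neutron-server and ansible-runner procedures above, the commands might look like the following; the registry, repository name, and tags are taken from this chapter's neutron-server example and stand in for a Partner's real values.

# Build the image from the Containerfile in the current directory
podman build -t registry.connect.redhat.com/partnerX/openstack-neutron-server-partnerX-plugin:18.0.1 .

# Add the floating 18.0 tag alongside the release-specific tag
podman tag registry.connect.redhat.com/partnerX/openstack-neutron-server-partnerX-plugin:18.0.1 registry.connect.redhat.com/partnerX/openstack-neutron-server-partnerX-plugin:18.0

# Push both tags to the registry chosen for certification
podman push registry.connect.redhat.com/partnerX/openstack-neutron-server-partnerX-plugin:18.0.1
podman push registry.connect.redhat.com/partnerX/openstack-neutron-server-partnerX-plugin:18.0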
[ "FROM registry.redhat.io/rhoso/openstack-neutron-server-rhel9:18.0.1 1 LABEL name=\"rhoso18/openstack-neutron-server-partnerX-plugin\" maintainer=\"[email protected]\" vendor=\"PartnerX\" summary=\"RHOSO 18.0 neutron-server PartnerX PluginY\" description=\"RHOSO 18.0 neutron-server PartnerX PluginY\" 2 Switch to root to install software dependencies USER root Enable a repo to install a package 3 COPY vendorX.repo /etc/yum.repos.d/vendorX.repo RUN dnf clean all && dnf install -y vendorX-plugin Install a package over the network 4 RUN dnf install -y http://vendorX.com/partnerX-plugin.rpm Install a local package 5 COPY partnerX-plugin.rpm /tmp RUN dnf install -y /tmp/partnerX-plugin.rpm && rm -f /tmp/partnerX-plugin.rpm Install a python package from PyPI 6 RUN curl -OL https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py --no-setuptools --no-wheel && pip3 install partnerX-plugin && rm -f get-pip.py Add required license as text file(s) in /licenses directory (GPL, MIT, APACHE, Partner End User Agreement, etc) RUN mkdir /licenses COPY licensing.txt /licenses Switch to neutron user USER neutron", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackVersion metadata: name: openstack spec: customContainerImages: neutronAPIImage: registry.connect.redhat.com/partnerX/openstack-neutron-server-partnerX-plugin:18.0.1", "apiVersion: v1 kind: Secret metadata: name: cinder-volume-example-config 1 type: Opaque stringData: partner_config.ini: | 2 example_credentials=example 3", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: neutron: template: databaseAccount: neutron databaseInstance: openstack memcachedInstance: memcached networkAttachments: - internalapi passwordSelectors: service: NeutronPassword rabbitMqClusterName: rabbitmq replicas: 1 secret: osp-secret serviceUser: neutron ml2MechanismDrivers: - partner-mech-driver 1 customServiceConfig: | 2 [example] foo=bar extraMounts: 3 - extraVol: - mounts: - name: partner-config mountPath: /etc/neutron/neutron.conf.d/partner_config.ini 4 subPath: partner_config.ini 5 readOnly: true volumes: - name: partner-config secret: secretName: neutron-server-partnerX-config 6", "FROM quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest COPY neutron_agent_partner_role /usr/share/ansible/roles/neutron_agent_partner_role 1 COPY playbooks/neutron_agent_partner.yaml /usr/share/ansible/collections/ansible_collections/osp/edpm/playbooks/ 2", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: neutron-custom-partner-service 1 spec: label: dataplane-deployment-neutron-custom-partner-service playbook: osp.edpm.neutron_agent_partner 2 openStackAnsibleEERunnerImage: openstack-ansibleee-partnerX-runner:latest 3", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-edpm spec: services: - download-cache - bootstrap - configure-network - validate-network - install-os - configure-os - run-os - neutron-custom-partner-service 1 - libvirt - nova nodes: edpm-compute: ansible: ansibleHost: 172.20.12.67 ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret ansibleUser: cloud-admin ansibleVars: ansible_ssh_transfer_method: scp ctlplane_ip: 172.20.12.67 external_ip: 172.20.12.76 fqdn_internalapi: edpm-compute-1.example.com internalapi_ip: 172.17.0.101 storage_ip: 172.18.0.101 tenant_ip: 172.10.0.101 hostName: edpm-compute-0 networkConfig: {} nova: cellName: cell1 deploy: true novaInstance: nova" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/integrating_partner_content/integrating-rhoso-networking-services_osp
Chapter 14. Storage
Chapter 14. Storage DM rebase to version 4.2 Device Mapper (DM) has been upgraded to upstream version 4.2, which provides a number of bug fixes and enhancements over the version including a significant DM crypt performance update and DM core update to support Multi-Queue Block I/O Queueing Mechanism (blk-mq). Multiqueue I/O scheduling with blk-mq Red Hat Enterprise Linux 7.2 includes a new multiple queue I/O scheduling mechanism for block devices known as blk-mq. It can improve performance by allowing certain device drivers to map I/O requests to multiple hardware or software queues. The improved performance comes from reducing lock contention present when multiple threads of execution perform I/O to a single device. Newer devices, such as Non-Volatile Memory Express (NVMe), are best positioned to take advantage of this feature due to their native support for multiple hardware submission and completion queues, and their low-latency performance characteristics. Performance gains, as always, will depend on the exact hardware and workload. The blk-mq feature is currently implemented, and enabled by default, in the following drivers: virtio-blk, mtip32xx, nvme, and rbd. The related feature, scsi-mq, allows Small Computer System Interface (SCSI) device drivers to use the blk-mq infrastructure. The scsi-mq feature is provided as a Technology Preview in Red Hat Enterprise Linux 7.2. To enable scsi-mq, specify scsi_mod.use_blk_mq=y on the kernel command line. The default value is n (disabled). The device mapper (DM) multipath target, which uses request-based DM, can also be configured to use the blk-mq infrastructure if the dm_mod.use_blk_mq=y kernel option is specified. The default value is n (disabled). It may be beneficial to set dm_mod.use_blk_mq=y if the underlying SCSI devices are also using blk-mq, as doing so reduces locking overhead at the DM layer. To determine whether DM multipath is using blk-mq on a system, cat the file /sys/block/dm-X/dm/use_blk_mq , where dm-X is replaced by the DM multipath device of interest. This file is read-only and reflects what the global value in /sys/module/dm_mod/parameters/use_blk_mq was at the time the request-based DM multipath device was created. New delay_watch_checks and delay_wait_checks options in the multipath.conf file Should a path be unreliable, as when the connection drops in and out frequently, multipathd will still continuously attempt to use that path. The timeout before multipathd realizes that the path is no longer accessible is 300 seconds, which can give the appearance that multipathd has stalled. To fix this, two new configuration options have been added: delay_watch_checks and delay_wait_checks. Set the delay_watch_checks to how many cycles multipathd is to watch the path for after it comes online. Should the path fail in under that assigned value, multipathd will not use it. multipathd will then rely on the delay_wait_checks option to tell it how many consecutive cycles it must pass until the path becomes valid again. This prevents unreliable paths from immediately being used as soon as they come back online. New config_dir option in the multipath.conf file Users were unable to split their configuration between /etc/multipath.conf and other configuration files. This prevented users from setting up one main configuration file for all their machines and keep machine-specific configuration information in separate configuration files for each machine. To address this, a new config_dir option was added in the multipath.config file. 
Users must change the config_dir option to either an empty string or a fully qualified directory path name. When set to anything other than an empty string, multipath will read all .conf files in alphabetical order. It will then apply the configurations exactly as if they had been added to the /etc/multipath.conf. If this change is not made, config_dir defaults to /etc/multipath/conf.d. New dmstats command to display and manage I/O statistics for regions of devices that use the device-mapper driver The dmstats command provides userspace support for device-mapper I/O statistics. This allows a user to create, manage and report I/O counters, metrics and latency histogram data for user-defined arbitrary regions of device-mapper devices. Statistics fields are now available in dmsetup reports and the dmstats command adds new specialized reporting modes designed for use with statistics information. For information on the dmstats command, see the dmstats(8) man page. LVM Cache LVM cache has been fully supported since Red Hat Enterprise Linux 7.1. This feature allows users to create logical volumes (LVs) with a small fast device performing as a cache to larger slower devices. Refer to the lvmcache(7) manual page for information on creating cache logical volumes. Note the following restrictions on the use of cache LVs: * The cache LV must be a top-level device. It cannot be used as a thin-pool LV, an image of a RAID LV, or any other sub-LV type. * The cache LV sub-LVs (the origin LV, metadata LV, and data LV) can only be of linear, stripe, or RAID type. * The properties of the cache LV cannot be changed after creation. To change cache properties, remove the cache as described in lvmcache(7) and recreate it with the desired properties. New LVM/DM cache policy A new smq dm-cache policy has been written that the reduces memory consumption and improves performance for most use cases. It is now the default cache policy for new LVM cache logical volumes. Users who prefer to use the legacy mq cache policy can still do so by supplying the -cachepolicy argument when creating the cache logical volume. LVM systemID LVM volume groups can now be assigned an owner. The volume group owner is the system ID of a host. Only the host with the given system ID can use the VG. This can benefit volume groups that exist on shared devices, visible to multiple hosts, which are otherwise not protected from concurrent use from multiple hosts. LVM volume groups on shared devices with an assigned system ID are owned by one host and protected from other hosts. New lvmpolld daemon The lvmpolld daemon provides a polling method for long-running LVM commands. When enabled, control of long-running LVM commands is transferred from the original LVM command to the lvmpolld daemon. This allows the operation to continue independent of the original LVM command. The lvmpolld daemon is enabled by default. Before the introduction of the lvmpolld daemon, any background polling process originating in an lvm2 command initiated inside a cgroup of a systemd service could get killed if the main process (the main service) exited in the cgroup . This could lead to premature termination of the lvm2 polling process. Additionally, lvmpolld helps to prevent spawning lvm2 polling processes querying for progress on the same task multiple times because it tracks the progress for all polling tasks in progress. For further information on the lvmpolld daemon, see the lvm.conf configuration file. 
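Pulling the DM multipath items above together, the following is a hedged sketch only; the check counts and the directory are illustrative, not tuning recommendations, and multipath.conf(5) remains the authoritative reference.

defaults {
    delay_watch_checks 12
    delay_wait_checks  12
    config_dir         "/etc/multipath/conf.d"
}

To confirm whether a given request-based DM multipath device was created with blk-mq, read the file described earlier, replacing dm-2 with the device of interest:

cat /sys/block/dm-2/dm/use_blk_mq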
Enhancements to LVM selection criteria The Red Hat Enterprise Linux 7.2 release supports several enhancements to LVM selection criteria. Previously, it was possible to use selection criteria only for reporting commands; LVM now supports selection criteria for several LVM processing commands as well. Additionally, there are several changes in this release to provide better support for time reporting fields and selection. For information on the implementation of these new features, see the LVM Selection Criteria appendix in the Logical Volume Administration manual. The default maximum number of SCSI LUNs is increased The default value for the max_report_luns parameter has been increased from 511 to 16393. This parameter specifies the maximum number of logical units that may be configured when the system scans the SCSI interconnect using the Report LUNs mechanism.
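The LVM cache and selection-criteria features described above can be sketched as follows; this is a hedged example with placeholder volume group, logical volume, and device names, and lvmcache(7) and lvs(8) remain the authoritative syntax references.

# Create a cache pool on a fast device and attach it to an existing LV
lvcreate --type cache-pool -L 4G -n fast_pool vg /dev/fast_ssd
lvconvert --type cache --cachepool vg/fast_pool vg/slow_lv
# (add --cachepolicy mq to the lvconvert call to request the legacy policy
#  instead of the default smq policy)

# Selection criteria on a reporting command, including a time field
lvs -S 'lv_size > 1g && lv_time since "2015-01-01"'

# Selection criteria now also apply to processing commands
lvchange --activate n --select 'lv_name =~ "^scratch"'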
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/storage
Chapter 3. Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure
Chapter 3. Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure In OpenShift Container Platform version 4.14, you can install a cluster on Microsoft Azure Stack Hub with an installer-provisioned infrastructure. However, you must manually configure the install-config.yaml file to specify values that are specific to Azure Stack Hub. Note While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You verified that you have approximately 16 GB of local disk space. Installing the cluster requires that you download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Decompressing the VHD files requires this amount of local disk space. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . 
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Uploading the RHCOS cluster image You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Prerequisites Configure an Azure account. Procedure Obtain the RHCOS VHD cluster image: Export the URL of the RHCOS VHD to an environment variable. USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Download the compressed RHCOS VHD file locally. USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it. Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob using the az cli or the web portal. 3.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. 
Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.6. Manually creating the installation configuration file When installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must manually create your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications: Specify the required installation parameters. Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub. Optional: Update one or more of the default configuration parameters to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure Stack Hub 3.6.1. 
Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{"auths": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 7 10 12 14 17 18 20 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 6 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 8 The name of the cluster. 9 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 11 The Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 13 The name of the resource group that contains the DNS zone for your base domain. 15 The name of your Azure Stack Hub local region. 16 The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 19 The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. 21 The pull secret required to authenticate your cluster. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 
23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required. 3.7. Manually manage cloud credentials The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider. Procedure If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 
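The Secret shown above expects base64-encoded values. As a hedged sketch only (the placeholder values and file name are assumptions, not part of the procedure), you could encode each value before pasting it into the manifest and then keep the finished file with the other manifests so the installer consumes it: # Encode one value without a trailing newline; repeat for each data field. echo -n "<subscription_id>" | base64 # Place the finished Secret next to the other generated manifests. cp <component_secret>.yaml <installation_directory>/manifests/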
Additional resources Updating cloud provider resources with manually maintained credentials 3.8. Configuring the cluster to use an internal CA If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA. Prerequisites Create the install-config.yaml file and specify the certificate trust bundle in .pem format. Create the cluster manifests. Procedure From the directory in which the installation program creates files, go to the manifests directory. Add user-ca-bundle to the spec.trustedCA.name field. Example cluster-proxy-01-config.yaml file apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {} Optional: Back up the manifests/ cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster. 3.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. 
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
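As a small, non-authoritative addition to the Linux procedure above, one common way to put the extracted binary on your PATH and confirm that the client runs is shown here; /usr/local/bin is an assumption about your system layout. # Move the extracted binary onto the PATH and confirm the client version. sudo mv oc /usr/local/bin/ oc version --client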
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.12. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 3.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 3.14. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
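To tie the web console steps above together, a minimal sketch follows; it assumes the kubeconfig exported in section 3.11, the default installation directory layout, and a recent oc client. # Print the web console URL and the generated kubeadmin password in one place. oc whoami --show-console cat <installation_directory>/auth/kubeadmin-password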
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')", "curl -O -L USD{COMPRESSED_VHD_URL}", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{\"auths\": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure_stack_hub/installing-azure-stack-hub-default
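The deployment section of this chapter notes that, after an extended shutdown, you must manually approve pending node-bootstrapper certificate signing requests to recover kubelet certificates. A brief, generic sketch of that check, for reference only (the CSR name is a placeholder): # List certificate signing requests and approve a specific pending one. oc get csr oc adm certificate approve <csr_name>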
Chapter 14. Unique UID and GID Number Assignments
Chapter 14. Unique UID and GID Number Assignments An IdM server generates user ID (UID) and group ID (GID) values and simultaneously ensures that replicas never generate the same IDs. Unique UIDs and GIDs might even be required across IdM domains, if a single organization uses multiple separate domains. 14.1. ID Ranges The UID and GID numbers are divided into ID ranges . By keeping separate numeric ranges for individual servers and replicas, the chances are minimal that an ID value issued for an entry is already used by another entry on another server or replica. The Distributed Numeric Assignment (DNA) plug-in, as part of the back end 389 Directory Server instance for the domain, ensures that ranges are updated and shared between servers and replicas; the plug-in manages the ID ranges across all masters and replicas. Every server or replica has a current ID range and an additional ID range that the server or replica uses after the current range has been depleted. For more information about the DNA Directory Server plug-in, see the Red Hat Directory Server Deployment Guide .
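For a concrete view of the ranges described above, the IdM command line can list the configured ID ranges and show a server's DNA range. This is a sketch of commonly used commands, and the range name shown is an assumption for illustration. # List all ID ranges known to the IdM domain. ipa idrange-find # Show one range in detail (the range name is hypothetical). ipa idrange-show EXAMPLE.COM_id_range # Show the DNA range currently assigned to this server. ipa-replica-manage dnarange-show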
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-unique_uid_and_gid_attributes
Chapter 9. Authorization for Enrolling Certificates (Access Evaluators)
Chapter 9. Authorization for Enrolling Certificates (Access Evaluators) This chapter describes the authorization mechanism using access evaluators. Note For instructions on how to edit certificate enrollment profiles, see Section 3.2, "Setting up certificate profiles" . 9.1. Authorization Mechanism In addition to the authentication mechanism, each enrollment profile can be configured to have its own authorization mechanism. The authorization mechanism is executed only after a successful authentication. The authorization mechanism is provided by the Access Evaluator plugin framework. Access evaluators are pluggable classes that are used for evaluating access control instructions (ACI) entries. The mechanism provides an evaluate method that takes a predefined list of arguments (that is, type , op , value ), evaluates an expression such as group='Certificate Manager Agents' and returns a boolean depending on the result of evaluation. 9.2. Default Evaluators Red Hat Certificate System provides four default evaluators. The following entries are listed by default in the CS.cfg file: The group access evaluator evaluates the group membership properties of a user. For example, in the following enrollment profile entry, only the CA agents are allowed to go through enrollment with that profile: The ipaddress access evaluator evaluates the IP address of the requesting subject. For example, in the following enrollment profile entry, only the host bearing the specified IP address can go through enrollment with that profile: The user access evaluator evaluates the user ID for exact match. For example, in the following enrollment profile entry, only the user matching the listed user is allowed to go through enrollment with that profile: The user_origreq access evaluator evaluates the authenticated user against a matching request for equality. This special evaluator is designed specifically for renewal purpose to make sure the user requesting the renewal is the same user that owns the original request. For example, in the following renewal enrollment profile entry, the UID of the authenticated user must match the UID of the user requesting the renewal: New evaluators can be written in the current framework and can be registered through the CS console. The default evaluators can be used as templates to expand and customize into more targeted plugins.
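As the last paragraph notes, new evaluators follow the same registration pattern as the defaults. A hedged illustration only, with an invented plug-in name and class, would add one more registration line to CS.cfg and then reference the new evaluator from a profile's authorization entry: # Register the custom evaluator class (hypothetical name and class). accessEvaluator.impl.department.class=com.example.evaluators.DepartmentAccessEvaluator # Reference the hypothetical evaluator from an enrollment profile. authz.acl=department="Engineering"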
[ "accessEvaluator.impl.group.class=com.netscape.cms.evaluators.GroupAccessEvaluator accessEvaluator.impl.ipaddress.class=com.netscape.cms.evaluators.IPAddressAccessEvaluator accessEvaluator.impl.user.class=com.netscape.cms.evaluators.UserAccessEvaluator accessEvaluator.impl.user_origreq.class=com.netscape.cms.evaluators.UserOrigReqAccessEvaluator", "authz.acl=group=\"Certificate Manager Agents\"", "authz.acl=ipaddress=\"a.b.c.d.e.f\"", "authz.acl=user=\"bob\"", "authz.acl=user_origreq=\"auth_token.uid\"" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/authorization_for_enrolling_certificates
Chapter 1. High Availability Add-On Overview
Chapter 1. High Availability Add-On Overview The High Availability Add-On is a clustered system that provides reliability, scalability, and availability to critical production services. The following sections provide a high-level description of the components and functions of the High Availability Add-On: Section 1.1, "Cluster Basics" Section 1.2, "High Availability Add-On Introduction" Section 1.4, "Pacemaker Architecture Components" 1.1. Cluster Basics A cluster is two or more computers (called nodes or members ) that work together to perform a task. There are four major types of clusters: Storage High availability Load balancing High performance Storage clusters provide a consistent file system image across servers in a cluster, allowing the servers to simultaneously read and write to a single shared file system. A storage cluster simplifies storage administration by limiting the installation and patching of applications to one file system. Also, with a cluster-wide file system, a storage cluster eliminates the need for redundant copies of application data and simplifies backup and disaster recovery. The High Availability Add-On provides storage clustering in conjunction with Red Hat GFS2 (part of the Resilient Storage Add-On). High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Typically, services in a high availability cluster read and write data (by means of read-write mounted file systems). Therefore, a high availability cluster must maintain data integrity as one cluster node takes over control of a service from another cluster node. Node failures in a high availability cluster are not visible from clients outside the cluster. (High availability clusters are sometimes referred to as failover clusters.) The High Availability Add-On provides high availability clustering through its High Availability Service Management component, Pacemaker . Load-balancing clusters dispatch network service requests to multiple cluster nodes to balance the request load among the cluster nodes. Load balancing provides cost-effective scalability because you can match the number of nodes according to load requirements. If a node in a load-balancing cluster becomes inoperative, the load-balancing software detects the failure and redirects requests to other cluster nodes. Node failures in a load-balancing cluster are not visible from clients outside the cluster. Load balancing is available with the Load Balancer Add-On. High-performance clusters use cluster nodes to perform concurrent calculations. A high-performance cluster allows applications to work in parallel, therefore enhancing the performance of the applications. (High performance clusters are also referred to as computational clusters or grid computing.) Note The cluster types summarized in the preceding text reflect basic configurations; your needs might require a combination of the clusters described. Additionally, the Red Hat Enterprise Linux High Availability Add-On contains support for configuring and managing high availability servers only . It does not support high-performance clusters.
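Although this chapter is an overview, a glimpse of the administrative interface may help orient readers: Pacemaker-based clusters in the High Availability Add-On are typically driven with the pcs command. The commands below are a generic sketch, not configuration guidance; cluster and node names are placeholders. # Check the overall state of an existing Pacemaker cluster. pcs status # Typical RHEL 7 style cluster creation (authenticate the nodes first). pcs cluster auth node1.example.com node2.example.com pcs cluster setup --name my_cluster node1.example.com node2.example.com pcs cluster start --all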
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/ch-introduction-HAAO
Chapter 3. ClusterResourceQuota [quota.openshift.io/v1]
Chapter 3. ClusterResourceQuota [quota.openshift.io/v1] Description ClusterResourceQuota mirrors ResourceQuota at a cluster scope. This object is easily convertible to synthetic ResourceQuota object to allow quota evaluation re-use. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec defines the desired quota status object Status defines the actual enforced quota and its current usage 3.1.1. .spec Description Spec defines the desired quota Type object Required quota selector Property Type Description quota object Quota defines the desired quota selector object Selector is the selector used to match projects. It should only select active projects on the scale of dozens (though it can select many more less active projects). These projects will contend on object creation through this resource. 3.1.2. .spec.quota Description Quota defines the desired quota Type object Property Type Description hard integer-or-string hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ scopeSelector object scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. scopes array (string) A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects. 3.1.3. .spec.quota.scopeSelector Description scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. Type object Property Type Description matchExpressions array A list of scope selector requirements by scope of the resources. matchExpressions[] object A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. 3.1.4. .spec.quota.scopeSelector.matchExpressions Description A list of scope selector requirements by scope of the resources. Type array 3.1.5. .spec.quota.scopeSelector.matchExpressions[] Description A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. Type object Required operator scopeName Property Type Description operator string Represents a scope's relationship to a set of values. 
Valid operators are In, NotIn, Exists, DoesNotExist. scopeName string The name of the scope that the selector applies to. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.6. .spec.selector Description Selector is the selector used to match projects. It should only select active projects on the scale of dozens (though it can select many more less active projects). These projects will contend on object creation through this resource. Type object Property Type Description annotations undefined (string) AnnotationSelector is used to select projects by annotation. labels `` LabelSelector is used to select projects by label. 3.1.7. .status Description Status defines the actual enforced quota and its current usage Type object Required total Property Type Description namespaces `` Namespaces slices the usage by project. This division allows for quick resolution of deletion reconciliation inside of a single project without requiring a recalculation across all projects. This can be used to pull the deltas for a given project. total object Total defines the actual enforced quota and its current usage across all projects 3.1.8. .status.total Description Total defines the actual enforced quota and its current usage across all projects Type object Property Type Description hard integer-or-string Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ used integer-or-string Used is the current observed total usage of the resource in the namespace. 3.2. API endpoints The following API endpoints are available: /apis/quota.openshift.io/v1/clusterresourcequotas DELETE : delete collection of ClusterResourceQuota GET : list objects of kind ClusterResourceQuota POST : create a ClusterResourceQuota /apis/quota.openshift.io/v1/watch/clusterresourcequotas GET : watch individual changes to a list of ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. /apis/quota.openshift.io/v1/clusterresourcequotas/{name} DELETE : delete a ClusterResourceQuota GET : read the specified ClusterResourceQuota PATCH : partially update the specified ClusterResourceQuota PUT : replace the specified ClusterResourceQuota /apis/quota.openshift.io/v1/watch/clusterresourcequotas/{name} GET : watch changes to an object of kind ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/quota.openshift.io/v1/clusterresourcequotas/{name}/status GET : read status of the specified ClusterResourceQuota PATCH : partially update status of the specified ClusterResourceQuota PUT : replace status of the specified ClusterResourceQuota 3.2.1. /apis/quota.openshift.io/v1/clusterresourcequotas Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterResourceQuota Table 3.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterResourceQuota Table 3.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.5. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuotaList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterResourceQuota Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.7. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.8. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 202 - Accepted ClusterResourceQuota schema 401 - Unauthorized Empty 3.2.2. /apis/quota.openshift.io/v1/watch/clusterresourcequotas Table 3.9. 
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. Table 3.10. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/quota.openshift.io/v1/clusterresourcequotas/{name} Table 3.11. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota Table 3.12. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterResourceQuota Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.14. Body parameters Parameter Type Description body DeleteOptions schema Table 3.15. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterResourceQuota Table 3.16. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.17. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterResourceQuota Table 3.18. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterResourceQuota Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.23. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 401 - Unauthorized Empty 3.2.4. /apis/quota.openshift.io/v1/watch/clusterresourcequotas/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota Table 3.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/quota.openshift.io/v1/clusterresourcequotas/{name}/status Table 3.27. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota Table 3.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ClusterResourceQuota Table 3.29. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.30. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterResourceQuota Table 3.31. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.32. Body parameters Parameter Type Description body Patch schema Table 3.33. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterResourceQuota Table 3.34. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.35. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.36. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 401 - Unauthorized Empty
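The reference above does not include request examples. As an illustrative sketch only, the read, patch, and watch operations can be exercised through the oc client, which authenticates against these endpoints on your behalf; the resource name example-quota and the annotation used in the patch are hypothetical placeholders:
oc get --raw /apis/quota.openshift.io/v1/clusterresourcequotas/example-quota
oc patch clusterresourcequota example-quota --type merge -p '{"metadata":{"annotations":{"team":"platform"}}}'
oc get --raw '/apis/quota.openshift.io/v1/watch/clusterresourcequotas/example-quota?timeoutSeconds=30'
The first call corresponds to the GET operation in section 3.2.3, the second to the PATCH operation, and the third to the deprecated watch endpoint described in section 3.2.4.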
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/schedule_and_quota_apis/clusterresourcequota-quota-openshift-io-v1
Chapter 3. Cluster capabilities
Chapter 3. Cluster capabilities Cluster administrators can use cluster capabilities to enable or disable optional components prior to installation. Cluster administrators can enable cluster capabilities at any time after installation. Note Cluster administrators cannot disable a cluster capability after it is enabled. 3.1. Selecting cluster capabilities You can select cluster capabilities by following one of the installation methods that include customizing your cluster, such as "Installing a cluster on AWS with customizations" or "Installing a cluster on GCP with customizations". During a customized installation, you create an install-config.yaml file that contains the configuration parameters for your cluster. Note If you customize your cluster by enabling or disabling specific cluster capabilities, you are responsible for manually maintaining your install-config.yaml file. New OpenShift Container Platform updates might declare new capability handles for existing components, or introduce new components altogether. Users who customize their install-config.yaml file should consider periodically updating their install-config.yaml file as OpenShift Container Platform is updated. You can use the following configuration parameters to select cluster capabilities: capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage 1 Defines a baseline set of capabilities to install. Valid values are None , vCurrent and v4.x . If you select None , all optional capabilities will be disabled. The default value is vCurrent , which enables all optional capabilities. Note v4.x refers to any value up to and including the current cluster version. For example, valid values for an OpenShift Container Platform 4.12 cluster are v4.11 and v4.12 . 2 Defines a list of capabilities to explicitly enable. These will be enabled in addition to the capabilities specified in baselineCapabilitySet . Note In this example, the default capability is set to v4.11 . The additionalEnabledCapabilities field enables additional capabilities over the default v4.11 capability set. The following table describes the baselineCapabilitySet values. Table 3.1. Cluster capabilities baselineCapabilitySet values description Value Description vCurrent Specify this option when you want to automatically add new, default capabilities that are introduced in new releases. v4.11 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.11. By specifying v4.11 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.11 are baremetal , marketplace , and openshift-samples . v4.12 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.12. By specifying v4.12 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.12 are baremetal , marketplace , openshift-samples , Console , Insights , Storage and CSISnapshot . v4.13 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.13. By specifying v4.13 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled.
The default capabilities in OpenShift Container Platform 4.13 are baremetal , marketplace , openshift-samples , Console , Insights , Storage , CSISnapshot and NodeTuning . None Specify when the other sets are too large, and you do not need any capabilities or want to fine-tune via additionalEnabledCapabilities . Additional resources Installing a cluster on AWS with customizations Installing a cluster on GCP with customizations 3.2. Optional cluster capabilities in OpenShift Container Platform 4.13 Currently, cluster Operators provide the features for these optional capabilities. The following summarizes the features provided by each capability and what functionality you lose if it is disabled. Additional resources Cluster Operators reference 3.2.1. Bare-metal capability Purpose The Cluster Baremetal Operator provides the features for the baremetal capability. The Cluster Baremetal Operator (CBO) deploys all the components necessary to take a bare-metal server to a fully functioning worker node ready to run OpenShift Container Platform compute nodes. The CBO ensures that the metal3 deployment, which consists of the Bare Metal Operator (BMO) and Ironic containers, runs on one of the control plane nodes within the OpenShift Container Platform cluster. The CBO also listens for OpenShift Container Platform updates to resources that it watches and takes appropriate action. The bare-metal capability is required for deployments using installer-provisioned infrastructure. Disabling the bare-metal capability can result in unexpected problems with these deployments. It is recommended that cluster administrators only disable the bare-metal capability during installations with user-provisioned infrastructure that do not have any BareMetalHost resources in the cluster. Important If the bare-metal capability is disabled, the cluster cannot provision or manage bare-metal nodes. Only disable the capability if there are no BareMetalHost resources in your deployment. Additional resources Deploying installer-provisioned clusters on bare metal Preparing for bare metal cluster installation Bare metal configuration 3.2.2. Cluster storage capability Purpose The Cluster Storage Operator provides the features for the Storage capability. The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storageclass exists for OpenShift Container Platform clusters. It also installs Container Storage Interface (CSI) drivers which enable your cluster to use various storage backends. Important If the cluster storage capability is disabled, the cluster will not have a default storageclass or any CSI drivers. Users with administrator privileges can create a default storageclass and manually install CSI drivers if the cluster storage capability is disabled. Notes The storage class that the Operator creates can be made non-default by editing its annotation, but this storage class cannot be deleted as long as the Operator runs. 3.2.3. Console capability Purpose The Console Operator provides the features for the Console capability. The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster. The Console Operator is installed by default and automatically maintains a console. Additional resources Web console overview 3.2.4. CSI snapshot controller capability Purpose The Cluster CSI Snapshot Controller Operator provides the features for the CSISnapshot capability. 
The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snapshot Controller. The CSI Snapshot Controller is responsible for watching the VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of volume snapshots. Additional resources CSI volume snapshots 3.2.5. Insights capability Purpose The Insights Operator provides the features for the Insights capability. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through Insights Advisor on console.redhat.com . Notes Insights Operator complements OpenShift Container Platform Telemetry. Additional resources Using Insights Operator 3.2.6. Marketplace capability Purpose The Marketplace Operator provides the features for the marketplace capability. The Marketplace Operator simplifies the process for bringing off-cluster Operators to your cluster by using a set of default Operator Lifecycle Manager (OLM) catalogs on the cluster. When the Marketplace Operator is installed, it creates the openshift-marketplace namespace. OLM ensures catalog sources installed in the openshift-marketplace namespace are available for all namespaces on the cluster. If you disable the marketplace capability, the Marketplace Operator does not create the openshift-marketplace namespace. Catalog sources can still be configured and managed on the cluster manually, but OLM depends on the openshift-marketplace namespace in order to make catalogs available to all namespaces on the cluster. Users with elevated permissions to create namespaces prefixed with openshift- , such as system or cluster administrators, can manually create the openshift-marketplace namespace. If you enable the marketplace capability, you can enable and disable individual catalogs by configuring the Marketplace Operator. Additional resources Red Hat-provided Operator catalogs 3.2.7. Node Tuning capability Purpose The Node Tuning Operator provides features for the NodeTuning capability. The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. If you disable the NodeTuning capability, some default tuning settings will not be applied to the control-plane nodes. This might limit the scalability and performance of large clusters with over 900 nodes or 900 routes. Additional resources Using the Node Tuning Operator 3.2.8. OpenShift samples capability Purpose The Cluster Samples Operator provides the features for the openshift-samples capability. The Cluster Samples Operator manages the sample image streams and templates stored in the openshift namespace. On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the image streams and templates. The configuration object is a cluster scoped object with the key cluster and type configs.samples . The image streams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform image streams pointing to images on registry.redhat.io . 
Similarly, the templates are those categorized as OpenShift Container Platform templates. If you disable the samples capability, users cannot access the image streams, samples, and templates it provides. Depending on your deployment, you might want to disable this component if you do not need it. Additional resources Configuring the Cluster Samples Operator 3.3. Additional resources Enabling cluster capabilities after installation
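As a hedged, post-installation check (not part of the chapter above), you can confirm which capabilities the cluster considers known and enabled by inspecting the ClusterVersion resource; this assumes the cluster populates the status.capabilities field:
oc get clusterversion version -o jsonpath='{.status.capabilities.knownCapabilities}'
oc get clusterversion version -o jsonpath='{.status.capabilities.enabledCapabilities}'
Comparing the two lists shows which optional capabilities were left disabled by the baselineCapabilitySet and additionalEnabledCapabilities settings in install-config.yaml.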
[ "capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installation_overview/cluster-capabilities
Chapter 3. Monitoring and logging with Azure Kubernetes Services (AKS) in Red Hat Developer Hub
Chapter 3. Monitoring and logging with Azure Kubernetes Services (AKS) in Red Hat Developer Hub Monitoring and logging are integral aspects of managing and maintaining Azure Kubernetes Services (AKS) in Red Hat Developer Hub. With features like Managed Prometheus Monitoring and Azure Monitor integration, administrators can efficiently monitor resource utilization, diagnose issues, and ensure the reliability of their containerized workloads. 3.1. Enabling Azure Monitor metrics To enable managed Prometheus monitoring, use the --enable-azure-monitor-metrics option within either the az aks create or az aks update command, depending on whether you're creating a new cluster or updating an existing one, such as: az aks create/update --resource-group <your-ResourceGroup> --name <your-Cluster> --enable-azure-monitor-metrics The command installs the metrics add-on, which gathers Prometheus metrics . Using the command, you can enable monitoring of Azure resources through both native Azure Monitor metrics and managed Prometheus metrics. You can also view the results in the portal under Monitoring Insights . For more information, see Monitor Azure resources with Azure Monitor . Furthermore, metrics from both the Managed Prometheus service and Azure Monitor can be accessed through the Azure Managed Grafana service. For more information, see the Link a Grafana workspace section. By default, Prometheus uses the minimum ingesting profile, which optimizes ingestion volume and sets default configurations for scrape frequency, targets, and metrics collected. The default settings can be customized through custom configuration. Azure offers various methods, including using different ConfigMaps, to provide scrape configuration and other metric add-on settings. For more information about default configuration, see Default Prometheus metrics configuration in Azure Monitor and Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus documentation. 3.2. Configuring annotations for monitoring You can configure the annotations for monitoring Red Hat Developer Hub specific metrics in both Helm deployment and Operator-backed deployment. Helm deployment To annotate the backstage pod for monitoring, update your values.yaml file as follows: upstream: backstage: # --- TRUNCATED --- podAnnotations: prometheus.io/scrape: 'true' prometheus.io/path: '/metrics' prometheus.io/port: '9464' prometheus.io/scheme: 'http' Operator-backed deployment Procedure As an administrator of the operator, edit the default configuration to add Prometheus annotations as follows: # Update OPERATOR_NS accordingly OPERATOR_NS=rhdh-operator kubectl edit configmap backstage-default-config -n "USD{OPERATOR_NS}" Find the deployment.yaml key in the ConfigMap and add the annotations to the spec.template.metadata.annotations field as follows: deployment.yaml: |- apiVersion: apps/v1 kind: Deployment # --- truncated --- spec: template: # --- truncated --- metadata: labels: rhdh.redhat.com/app: # placeholder for 'backstage-<cr-name>' # --- truncated --- annotations: prometheus.io/scrape: 'true' prometheus.io/path: '/metrics' prometheus.io/port: '9464' prometheus.io/scheme: 'http' # --- truncated --- Save your changes. Verification To verify if the scraping works, navigate to the corresponding Azure Monitor Workspace and view the metrics under Monitoring Metrics . 3.3. Viewing logs with Azure Kubernetes Services (AKS) You can access live data logs generated by Kubernetes objects and collect log data in Container Insights within AKS.
Prerequisites You have deployed Developer Hub on AKS. For more information, see Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service . Procedure View live logs from your Developer Hub instance Navigate to the Azure Portal. Search for the resource group <your-ResourceGroup> and locate your AKS cluster <your-Cluster> . Select Kubernetes resources Workloads from the menu. Select the <your-rhdh-cr>-developer-hub (in case of Helm Chart installation) or <your-rhdh-cr>-backstage (in case of Operator-backed installation) deployment. Click Live Logs in the left menu. Select the pod. Note There must be only single pod. Live log data is collected and displayed. View real-time log data from the Container Engine Navigate to the Azure Portal. Search for the resource group <your-ResourceGroup> and locate your AKS cluster <your-Cluster> . Select Monitoring Insights from the menu. Go to the Containers tab. Find the backend-backstage container and click it to view real-time log data as it's generated by the Container Engine.
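As a minimal sketch, assuming the Developer Hub instance runs in a namespace named rhdh (a placeholder for your actual namespace), you can also confirm from the command line that the Prometheus annotations described in Section 3.2 were applied to the running pod:
kubectl get pods -n rhdh -o yaml | grep 'prometheus.io/'
The command dumps the pod manifests and filters for the prometheus.io/scrape, prometheus.io/path, prometheus.io/port, and prometheus.io/scheme annotations; if nothing is printed, the annotations were not applied.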
[ "az aks create/update --resource-group <your-ResourceGroup> --name <your-Cluster> --enable-azure-monitor-metrics", "upstream: backstage: # --- TRUNCATED --- podAnnotations: prometheus.io/scrape: 'true' prometheus.io/path: '/metrics' prometheus.io/port: '9464' prometheus.io/scheme: 'http'", "Update OPERATOR_NS accordingly OPERATOR_NS=rhdh-operator edit configmap backstage-default-config -n \"USD{OPERATOR_NS}\"", "deployment.yaml: |- apiVersion: apps/v1 kind: Deployment # --- truncated --- spec: template: # --- truncated --- metadata: labels: rhdh.redhat.com/app: # placeholder for 'backstage-<cr-name>' # --- truncated --- annotations: prometheus.io/scrape: 'true' prometheus.io/path: '/metrics' prometheus.io/port: '9464' prometheus.io/scheme: 'http' # --- truncated ---" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/monitoring_and_logging/assembly-monitoring-and-logging-aks
23.2. Configuring Certificate Mapping Rules in Identity Management
23.2. Configuring Certificate Mapping Rules in Identity Management 23.2.1. Certificate Mapping Rules for Configuring Authentication on Smart Cards Certificate mapping rules are a convenient way of allowing users to authenticate using certificates in scenarios when the Identity Management (IdM) administrator does not have access to certain users' certificates. This lack of access is typically caused by the fact that the certificates have been issued by an external certificate authority. A special use case is represented by certificates issued by the Certificate System of an Active Directory (AD) with which the IdM domain is in a trust relationship. Certificate mapping rules are also convenient if the IdM environment is large with a lot of users using smart cards. In this situation, adding full certificates can be complicated. The subject and issuer are predictable in most scenarios and thus easier to add ahead of time than the full certificate. As a system administrator, you can create a certificate mapping rule and add certificate mapping data to a user entry even before a certificate is issued to a particular user. Once the certificate is issued, the user will be able to log in using the certificate even though the full certificate is not uploaded into his entry. In addition, as certificates have to be renewed at regular intervals, certificate mapping rules reduce administrative overhead. When a user's certificate gets renewed, the administrator does not have to update the user entry. For example, if the mapping is based on the Subject and Issuer values, and if the new certificate has the same subject and issuer as the old one, the mapping still applies. If, in contrast, the full certificate was used, then the administrator would have to upload the new certificate to the user entry to replace the old one. To set up certificate mapping: An administrator has to load the certificate mapping data (typically the issuer and subject) or the full certificate into a user account. An administrator has to create a certificate mapping rule to allow successful logging into IdM for a user: whose account contains a certificate mapping data entry whose certificate mapping data entry matches the information on the certificate For details on the individual components that make up a mapping rule and how to obtain and use them, see Components of an identity mapping rule in IdM and Obtaining the issuer from a certificate for use in a matching rule. 23.2.1.1. Certificate Mapping Rules for Trusts with Active Directory Domains This section outlines the different certificate mapping use cases that are possible if an IdM deployment is in a trust relationship with an Active Directory (AD) domain. Certificate mapping rules are a convenient way to enable access to IdM resources for users who have smart card certificates that were issued by the trusted AD Certificate System. Depending on the AD configuration, the following scenarios are possible: If the certificate is issued by AD but the user and the certificate are stored in IdM, the mapping and the whole processing of the authentication request takes place on the IdM side. For details of configuring this scenario, see Section 23.2.2, "Configuring Certificate Mapping for Users Stored in IdM" . If the user is stored in AD, the processing of the authentication request takes place in AD. There are three different subcases: The AD user entry contains the whole certificate. 
For details on how to configure IdM in this scenario, see Section 23.2.3, "Configuring Certificate Mapping for Users Whose AD User Entry Contains the Whole Certificate" . AD is configured to map user certificates to user accounts. In this case, the AD user entry does not contain the whole certificate but instead contains an attribute called altSecurityIdentities . For details on how to configure IdM in this scenario, see Section 23.2.4, "Configuring Certificate Mapping if AD is Configured to Map User Certificates to User Accounts" . The AD user entry contains neither the whole certificate nor the mapping data. In this case, the only solution is to use the ipa idoverrideuser-add command to add the whole certificate to the AD user's ID override in IdM. For details, see Section 23.2.5, "Configuring Certificate Mapping if the AD User Entry Contains no Certificate or Mapping Data" . 23.2.1.2. Components of an Identity Mapping Rule in IdM This section describes the components of an identity mapping rule in IdM and how to configure them. Each component has a default value that you can override. You can define the components in either the web UI or the command line. In the command line, the identity mapping rule is created using the ipa certmaprule-add command. Mapping Rule The mapping rule component associates (or maps) a certificate with one or more user accounts. The rule defines an LDAP search filter that associates a certificate with the intended user account. Certificates issued by different certificate authorities (CAs) might have different properties and might be used in different domains. Therefore, IdM does not apply mapping rules unconditionally, but only to the appropriate certificates. The appropriate certificates are defined using matching rules. Note that if you leave the mapping rule option empty, the certificates are searched in the userCertificate attribute as a DER encoded binary file. Define the mapping rule in the command line using the --maprule option. Matching Rule The matching rule component selects the certificates to which IdM applies the identity mapping rule; a mapping rule is applied only to certificates that satisfy its matching rule, for example only to certificates issued by a particular certificate authority. Define the matching rule in the command line using the --matchrule option. Domain List The domain list specifies the identity domains in which you want IdM to search the users when processing identity mapping rules. If you leave the option unspecified, IdM searches the users only in the local domain to which the IdM client belongs. Define the domain in the command line using the --domain option. Priority When multiple rules are applicable to a certificate, the rule with the highest priority takes precedence. All other rules are ignored. The lower the numerical value, the higher the priority of the identity mapping rule. For example, a rule with a priority 1 has higher priority than a rule with a priority 2. If a rule has no priority value defined, it has the lowest priority. Define the mapping rule priority in the command line using the --priority option. Example 23.1. Certificate Mapping Rule Example To define, using the command line, a certificate mapping rule called simple_rule that allows authentication for a certificate issued by the Smart Card CA of the EXAMPLE.ORG organisation as long as the Subject on that certificate matches a certmapdata entry in a user account in IdM: 23.2.1.3. Obtaining the Issuer from a Certificate for Use in a Matching Rule This procedure describes how to obtain the issuer information from a certificate so that you can copy and paste it into the matching rule of a certificate mapping rule. To get the issuer format required by a matching rule, use the openssl x509 command. Prerequisites You have the user certificate in a .pem or .crt format.
Procedure Obtain the user information from the certificate. Use the openssl certificate display and signing utility with: the -noout option to prevent the output of an encoded version of the request the -issuer option to output the issuer name the -in option to specify the input file name to read the certificate from the -nameopt option with the RFC2253 value to display the output with the most specific relative distinguished name (RDN) first If the input file contains an Identity Management certificate, the output of the command shows that the Issuer is defined using the Organisation information: If the input file contains an Active Directory certificate, the output of the command shows that the Issuer is defined using the Domain Component information: Optionally, to create a new mapping rule in the command line based on a matching rule which specifies that the certificate issuer must be the extracted AD-WIN2012R2-CA of the ad.example.com domain and the subject on the certificate must match the certmapdata entry in a user account in IdM: Additional Information For details about the certmap command, including information about the supported formats for the matching rule and the mapping rule, and an explanation of the priority and domain fields, see the sss-certmap (5) man page. 23.2.2. Configuring Certificate Mapping for Users Stored in IdM This section describes the steps a system administrator must take to enable certificate mapping in IdM if the user for whom certificate authentication is being configured is stored in IdM. Prerequisites The user has an account in IdM. The administrator has either the whole certificate or the certificate mapping data to add to the user entry. 23.2.2.1. Adding a Certificate Mapping Rule in IdM This section describes how to set up a certificate mapping rule so that IdM users with certificates that match the conditions specified in the mapping rule and in their certificate mapping data entries can authenticate to IdM. 23.2.2.1.1. Adding a Certificate Mapping Rule in the IdM Web UI Log in to the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 23.1. Adding a New Certificate Mapping Rule in the IdM Web UI Enter the rule name. Enter the mapping rule. For example, to make IdM search for the Issuer and Subject entries in any certificate presented to them, and base its decision to authenticate or not on the information found in these two entries of the presented certificate, enter: Enter the matching rule. For example, to only allow certificates issued by the Smart Card CA of the EXAMPLE.ORG organization to authenticate users to IdM, enter: Figure 23.2. Entering the Details for a Certificate Mapping Rule in the IdM Web UI Click Add at the bottom of the dialog box to add the rule and close the box. The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: Now you have a certificate mapping rule set up that compares the type of data specified in the mapping rule that it finds on a smart card certificate with the certificate mapping data in your IdM user entries. Once it finds a match, it authenticates the matching user. 23.2.2.1.2. Adding a Certificate Mapping Rule Using the Command Line Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. 
For example, to make IdM search for the Issuer and Subject entries in any certificate presented, and base its decision to authenticate or not on the information found in these two entries of the presented certificate, recognizing only certificates issued by the Smart Card CA of the EXAMPLE.ORG organization: The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: Now you have a certificate mapping rule set up that compares the type of data specified in the mapping rule that it finds on a smart card certificate with the certificate mapping data in your IdM user entries. Once it finds a match, it authenticates the matching user. 23.2.2.2. Adding Certificate Mapping Data to a User Entry in IdM This section describes how to enter certificate mapping data to an IdM user entry so that the user can authenticate using multiple certificates as long as they all contain the values specified in the certificate mapping data entry. 23.2.2.2.1. Adding Certificate Mapping Data to a User Entry in the IdM Web UI Log in to the IdM web UI as an administrator. Navigate to Users Active users and click the user entry. Find the Certificate mapping data option, and click Add . If you have the certificate of the user at your disposal: In the command-line interface, display the certificate using the cat utility or a text editor: Copy the certificate. In the IdM web UI, click Add to Certificate , and paste the certificate into the window that opens up. Figure 23.3. Adding a User's Certificate Mapping Data: Certificate Alternatively, if you do not have the certificate of the user at your disposal but know the Issuer and the Subject of the certificate, check the radio button of Issuer and subject and enter the values in the two respective boxes. Figure 23.4. Adding a User's Certificate Mapping Data: Issuer and Subject Click Add . Optionally, if you have access to the whole certificate in the .pem format, verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of the user in the SSSD cache and force a reload of the user's information: Run the ipa certmap-match command with the name of the file containing the certificate of the IdM user: The output confirms that now you have certificate mapping data added to the user and that a corresponding mapping rule defined in Section 23.2.2.1, "Adding a Certificate Mapping Rule in IdM" exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as the user. 23.2.2.2.2. Adding Certificate Mapping Data to a User Entry Using the Command Line Obtain the administrator's credentials: If you have the certificate of the user at your disposal, add the certificate to the user account using the ipa user-add-cert command: Alternatively, if you do not have the certificate of the user at your disposal but know the Issuer and the Subject of the user's certificate: Optionally, if you have access to the whole certificate in the .pem format, verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of the user in the SSSD cache and force a reload of the user's information: Run the ipa certmap-match command with the name of the file containing the certificate of the IdM user: 23.2.3. 
Configuring Certificate Mapping for Users Whose AD User Entry Contains the Whole Certificate This section describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD and the user entry in AD contains the whole certificate. Prerequisites The user does not have an account in IdM. The user has an account in AD which contains a certificate. The IdM administrator has access to data on which the IdM certificate mapping rule can be based. 23.2.3.1. Adding a Certificate Mapping Rule for Users Whose AD User Entry Contains the Whole Certificate Using the IdM Web UI To add a certificate mapping rule in the IdM web UI: Log in to the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 23.5. Adding a New Certificate Mapping Rule in the IdM Web UI Enter the rule name. Enter the mapping rule. To have the whole certificate that is presented to IdM for authentication compared to what is available in AD: Enter the matching rule. For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Figure 23.6. Certificate Mapping Rule for a User with a Certificate Stored in AD Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 23.2.3.2. Adding a Certificate Mapping Rule for User Whose AD User Entry Contains the Whole Certificate Using the Command Line To add a certificate mapping rule using the command line: Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. To have the whole certificate that is presented for authentication compared to what is available in AD, only allowing certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 23.2.4. Configuring Certificate Mapping if AD is Configured to Map User Certificates to User Accounts This section describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD and the user entry in AD contains certificate mapping data. Prerequisite The user does not have an account in IdM. The user has an account in AD which contains the altSecurityIdentities attribute, the AD equivalent of the IdM certmapdata attribute. The IdM administrator has access to data on which the IdM certificate mapping rule can be based. 23.2.4.1. Adding a Certificate Mapping Rule Using the Web UI if the Trusted AD Domain is Configured to Map User Certificates To add a certificate mapping rule if the trusted AD domain is configured to map user certificates: Log in to the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 23.7. Adding a New Certificate Mapping Rule in the IdM Web UI Enter the rule name. Enter the mapping rule. For example, to make AD DC search for the Issuer and Subject entries in any certificate presented, and base its decision to authenticate or not on the information found in these two entries of the presented certificate: Enter the matching rule. 
For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate users to IdM: Enter the domain: Figure 23.8. Certificate Mapping Rule if AD is Configured for Mapping Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 23.2.4.2. Adding a Certificate Mapping Rule Using the Command Line if the Trusted AD Domain is Configured to Map User Certificates To add a certificate mapping rule using the command line: Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. For example, to make AD search for the Issuer and Subject entries in any certificate presented, and only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain: The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 23.2.4.3. Checking Certificate Mapping Data on the AD Side The altSecurityIdentities attribute is the Active Directory (AD) equivalent of certmapdata user attribute in IdM. When configuring certificate mapping in IdM in the scenario when a trusted AD domain is configured to map user certificates to user accounts, the IdM system administrator needs to check that the altSecurityIdentities attribute is set correctly in the user entries in AD. To check that AD contains the right information for the user stored in AD, use the ldapsearch command. For example, to check with the adserver.ad.example.com server that the altSecurityIdentities attribute is set in the user entry of ad_user and that the matchrule stipulates that the certificate that ad_user uses to authenticate to AD was issued by AD-ROOT-CA of the ad.example.com domain and that the subject is <S<>DC=com,DC=example,DC=ad,CN=Users,CN=ad_user : 23.2.5. Configuring Certificate Mapping if the AD User Entry Contains no Certificate or Mapping Data This section describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD and the user entry in AD contains neither the whole certificate nor certificate mapping data. Prerequisites The user does not have an account in IdM. The user has an account in AD which contains neither the whole certificate nor the altSecurityIdentities attribute, the AD equivalent of the IdM certmapdata attribute. The IdM administrator has the whole AD user certificate to add to the AD user's user ID override in IdM. 23.2.5.1. Adding a Certificate Mapping Rule Using the Web UI if the AD User Entry Contains no Certificate or Mapping Data To add a certificate mapping rule using the web UI if the AD user entry contains no certificate or mapping data: Log in to the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 23.9. Adding a New Certificate Mapping Rule in the IdM Web UI Enter the rule name. Enter the mapping rule. To have the whole certificate that is presented to IdM for authentication compared to the certificate stored in the user ID override entry of the AD user entry in IdM: Enter the matching rule. For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Enter the domain name. 
For example, to search for users in the ad.example.com domain: Figure 23.10. Certificate Mapping Rule for a User with no Certificate or Mapping Data Stored in AD Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 23.2.5.2. Adding a Certificate Mapping Rule Using the Command Line if the AD User Entry Contains no Certificate or Mapping Data To add a certificate mapping rule using the command line if the AD user entry contains no certificate or mapping data: Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. To have the whole certificate that is presented for authentication compared to the certificate stored in the user ID override entry of the AD user entry in IdM, only allowing certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 23.2.5.3. Adding a Certificate to an AD User's ID Override Using the Web UI To add a certificate to an AD user's ID override using the web UI if the user entry in AD contains no certificate or mapping data: Log in to the IdM web UI as an administrator. Navigate to Identity ID Views Default Trust View . Click Add . Figure 23.11. Adding a New User ID Override in the IdM Web UI In the User to override field, enter the user name in the following format: user_name @ domain_name Copy and paste the certificate of the user into the Certificate field. Figure 23.12. Configuring the User ID Override for an AD User Optionally, verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of the user in the SSSD cache and force a reload of the user's information: Enter the ipa certmap-match command with the name of the file containing the certificate of the AD user: The output confirms that you have certificate mapping data added to [email protected] and that a corresponding mapping rule exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as [email protected] . 23.2.5.4. Adding a Certificate to an AD User's ID Override Using the Command Line To add a certificate to an AD user's ID override using the command line if the user entry in AD contains no certificate or mapping data: Obtain the administrator's credentials: Add the certificate of the user to the user account using the ipa idoverrideuser-add-cert command: Optionally, verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of the user in the SSSD cache and force a reload of the user's information: Enter the ipa certmap-match command with the name of the file containing the certificate of the AD user: The output confirms that you have certificate mapping data added to [email protected] and that a corresponding mapping rule exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as [email protected] . 23.2.6. 
Combining Several Identity Mapping Rules Into One To combine several identity mapping rules into one combined rule, use the | (or) character to precede the individual mapping rules, and separate them using () brackets, for example: In the above example, the filter definition in the --maprule option includes these criteria: ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the ipacertmapdata attribute in an IdM user account, as described in Section 23.2.2.1, "Adding a Certificate Mapping Rule in IdM" . altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the altSecurityIdentities attribute in an AD user account, as described in Section 23.2.4, "Configuring Certificate Mapping if AD is Configured to Map User Certificates to User Accounts" . The addition of the --domain=ad.example.com option means that users mapped to a given certificate are not only searched in the local idm.example.com domain but also in the ad.example.com domain . The filter definition in the --maprule option accepts the logical operator | (or), so that you can specify multiple criteria. In this case, the rule maps all user accounts that meet at least one of the criteria. In the above example, the filter definition in the --maprule option includes these criteria: userCertificate;binary={cert!bin} is a filter that returns user entries that include the whole certificate. For AD users, creating this type of filter is described in detail in Section 23.2.5, "Configuring Certificate Mapping if the AD User Entry Contains no Certificate or Mapping Data" . ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the ipacertmapdata attribute in an IdM user account, as described in Section 23.2.2.1, "Adding a Certificate Mapping Rule in IdM" . altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the altSecurityIdentities attribute in an AD user account, as described in Section 23.2.4, "Configuring Certificate Mapping if AD is Configured to Map User Certificates to User Accounts" . The filter definition in the --maprule option accepts the logical operator | (or), so that you can specify multiple criteria. In this case, the rule maps all user accounts that meet at least one of the criteria.
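As an illustrative follow-up, the rules created in the previous sections can be listed, inspected, and temporarily disabled with the ipa certmaprule-find, certmaprule-show, and certmaprule-disable commands; the rule names below are the example names used earlier in this chapter:
ipa certmaprule-find
ipa certmaprule-show ad_cert_for_ipa_and_ad_users --all
ipa certmaprule-disable simpleADrule
systemctl restart sssd
As elsewhere in this chapter, restart SSSD after changing rules so that the new state is loaded immediately rather than at the next periodic refresh.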
[ "ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})'", "openssl x509 -noout -issuer -in idm_user.crt -nameopt RFC2253 issuer=CN=Certificate Authority,O=REALM.EXAMPLE.COM", "# openssl x509 -noout -issuer -in ad_user.crt -nameopt RFC2253 issuer=CN=AD-WIN2012R2-CA,DC=AD,DC=EXAMPLE,DC=COM", "ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN= AD-WIN2012R2-CA,DC=AD,DC=EXAMPLE,DC=COM ' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})'", "(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})", "<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add rule_name --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})' ------------------------------------------------------- Added Certificate Identity Mapping Rule \"rule_name\" ------------------------------------------------------- Rule name: rule_name Mapping rule: (ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500}) Matching rule: <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG Enabled: TRUE", "systemctl restart sssd", "[root@server ~]# cat idm_user_certificate.pem -----BEGIN CERTIFICATE----- MIIFFTCCA/2gAwIBAgIBEjANBgkqhkiG9w0BAQsFADA6MRgwFgYDVQQKDA9JRE0u RVhBTVBMRS5DT00xHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0x ODA5MDIxODE1MzlaFw0yMDA5MDIxODE1MzlaMCwxGDAWBgNVBAoMD0lETS5FWEFN [...output truncated...]", "sss_cache -u user_name", "ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------", "kinit admin", "CERT=`cat idm_user_cert.pem | tail -n +2 | head -n -1 | tr -d '\\r\\n'\\` ipa user-add-certmapdata idm_user --certificate USDCERT", "ipa user-add-certmapdata idm_user --subject \" O=EXAMPLE.ORG,CN=test \" --issuer \" CN=Smart Card CA,O=EXAMPLE.ORG \" -------------------------------------------- Added certificate mappings to user \" idm_user \" -------------------------------------------- User login: idm_user Certificate mapping data: X509:<I>O=EXAMPLE.ORG,CN=Smart Card CA<S>CN=test,O=EXAMPLE.ORG", "sss_cache -u user_name", "ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------", "(userCertificate;binary={cert!bin})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", "ad.example.com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add ad_configured_for_mapping_rule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' 
--maprule '(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})' --domain=ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"ad_configured_for_mapping_rule\" ------------------------------------------------------- Rule name: ad_configured_for_mapping_rule Mapping rule: (altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "ldapsearch -o ldif-wrap=no -LLL -h adserver.ad.example.com -p 389 -D cn=Administrator,cn=users,dc=ad,dc=example,dc=com -W -b cn=users,dc=ad,dc=example,dc=com \"(cn=ad_user)\" altSecurityIdentities Enter LDAP Password: dn: CN=ad_user,CN=Users,DC=ad,DC=example,DC=com altSecurityIdentities: X509:<I>DC=com,DC=example,DC=ad,CN=AD-ROOT-CA<S>DC=com,DC=example,DC=ad,CN=Users,CN=ad_user", "(userCertificate;binary={cert!bin})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "sss_cache -u [email protected]", "ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------", "kinit admin", "CERT=`cat ad_user_cert.pem | tail -n +2 | head -n -1 | tr -d '\\r\\n'\\` ipa idoverrideuser-add-cert [email protected] --certificate USDCERT", "sss_cache -u [email protected]", "ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------", "ipa certmaprule-add ad_cert_for_ipa_and_ad_users \\ --maprule='(|(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' \\ --matchrule='<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' \\ --domain=ad.example.com", "ipa certmaprule-add ipa_cert_for_ad_users --maprule='(|(userCertificate;binary={cert!bin})(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' --matchrule='<ISSUER>CN=Certificate Authority,O=REALM.EXAMPLE.COM' --domain=idm.example.com --domain=ad.example.com" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/configuring-certificate-mapping-rules-in-identity-management
5.5. Nautilus
5.5. Nautilus The nautilus-open-terminal package provides a right-click Open Terminal option to open a new terminal window in the current directory. Previously, when this option was chosen from the Desktop , the new terminal window location defaulted to the user's home directory. However, in Red Hat Enterprise Linux 6, the default behavior opens the Desktop directory (that is, ~/Desktop/ ). To restore the previous behavior of opening the home directory, use the following command to set the desktop_opens_home_dir GConf Boolean to true:
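To confirm that the setting took effect, you can read the key back; this is a minimal sketch that assumes the desktop_opens_home_dir key name used above:

gconftool-2 --get /apps/nautilus-open-terminal/desktop_opens_home_dir

The command should print true once the Boolean has been set.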
[ "gconftool-2 -s /apps/nautilus-open-terminal/desktop_opens_dir --type=bool true" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-command_line_tools-nautilus
Chapter 9. Executing the deployment playbook
Chapter 9. Executing the deployment playbook Change into the hc-ansible-deployment directory on the first node: Run the following command as the root user to start the deployment process: When prompted, enter the vault password to start the deployment. Important If you are using Red Hat Virtualization Host (RHVH) 4.4 SP1 based on Red Hat Enterprise Linux 8.6 (RHEL 8.6), add the -e 'ansible_python_interpreter=/usr/bin/python3.6' parameter:
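The vault password prompt assumes that the variable file was encrypted with Ansible Vault earlier in this guide; as a hedged reminder (he_gluster_vars.json is the file name used in the commands below), the file can be encrypted and reviewed with:

ansible-vault encrypt he_gluster_vars.json
ansible-vault view he_gluster_vars.json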
[ "cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment", "ansible-playbook -i gluster_inventory.yml hc_deployment.yml --extra-vars='@he_gluster_vars.json' --ask-vault-pass", "ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3.6' -i gluster_inventory.yml hc_deployment.yml --extra-vars='@he_gluster_vars.json' --ask-vault-pass" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/automating_rhhi_for_virtualization_deployment/executing-deployment-playbook
Chapter 6. Installing the Migration Toolkit for Containers
Chapter 6. Installing the Migration Toolkit for Containers You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 and 4. After you install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.11 by using the Operator Lifecycle Manager, you manually install the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster . After you have installed MTC, you must configure object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 6.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The control cluster communicates with the remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed, both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.8. MTC 1.8 only supports migrations from OpenShift Container Platform 4.9 and later. Table 6.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.8 OpenShift Container Platform 4.9 or later Stable MTC version MTC v.1.7.z MTC v.1.7.z MTC v.1.7.z MTC v.1.8.z Installation Legacy MTC v.1.7.z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, the modern cluster might not be able to connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7.z, if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 6.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform 3. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters.
You must have access to registry.redhat.io . You must have podman installed. You must create an image stream secret and copy it to each node in the cluster. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Log in to your OpenShift Container Platform source cluster. Verify that the cluster can authenticate with registry.redhat.io : USD oc run test --image registry.redhat.io/ubi8 --command sleep infinity Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 6.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.11 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.11 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 6.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.11, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. 
You can change the proxy parameters if you want to override the cluster-wide proxy settings. 6.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 6.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 6.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 6.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 6.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. 
Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 6.4.2.1. NetworkPolicy configuration 6.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 6.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 6.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 6.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 6.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 6.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 6.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. 
Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 6.5. Configuring a replication repository You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. The following storage providers are supported: Multicloud Object Gateway Amazon Web Services S3 Google Cloud Platform Microsoft Azure Blob Generic S3 object storage, for example, Minio or Ceph S3 6.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 6.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint in order to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC). You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Data Foundation. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. You use these credentials to add MCG as a replication repository. 6.5.3. Configuring Amazon Web Services You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the AWS CLI installed. The AWS S3 storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: You must have access to EC2 Elastic Block Storage (EBS). The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. 
Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID . You use the credentials to add AWS as a replication repository. 6.5.4. Configuring Google Cloud Platform You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. The GCP storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to add GCP as a replication repository. 6.5.5. Configuring Microsoft Azure You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the Azure CLI installed. The Azure Blob storage container must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location. 
Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Create a service principal and credentials for velero : USD AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \ AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` \ AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \ --role "Contributor" --query 'password' -o tsv` \ AZURE_CLIENT_ID=`az ad sp list --display-name "velero" \ --query '[0].appId' -o tsv` Save the service principal credentials in the credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF You use the credentials-velero file to add Azure as a replication repository. 6.5.6. Additional resources MTC workflow About data copy methods Adding a replication repository to the MTC web console 6.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
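One step above that is described only in prose is the Multicloud Object Gateway credential retrieval in Section 6.5.2. As a minimal sketch, assuming the default NooBaa resource created by OpenShift Data Foundation in the openshift-storage namespace, the describe command looks like this:

oc describe noobaa -n openshift-storage

Use the S3 endpoint and the credential information referenced in the output to add MCG as a replication repository, as described above.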
[ "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "oc run test --image registry.redhat.io/ubi8 --command sleep infinity", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "BUCKET=<your_bucket>", "REGION=<your_region>", "aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1", "aws iam create-user --user-name velero 1", "cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF", "aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json", "aws iam create-access-key --user-name velero", "{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }", "gcloud auth login", "BUCKET=<bucket> 1", "gsutil mb gs://USDBUCKET/", "PROJECT_ID=USD(gcloud config get-value project)", "gcloud iam service-accounts create velero --display-name \"Velero service account\"", "gcloud iam service-accounts list", "SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')", "ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )", "gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"", "gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server", "gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}", "gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL", "az login", "AZURE_RESOURCE_GROUP=Velero_Backups", "az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1", "AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"", "az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot", "BLOB_CONTAINER=velero", "az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID", 
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv` AZURE_CLIENT_ID=`az ad sp list --display-name \"velero\" --query '[0].appId' -o tsv`", "cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migrating_from_version_3_to_4/installing-3-4
Chapter 2. Red Hat Ceph Storage considerations and recommendations
Chapter 2. Red Hat Ceph Storage considerations and recommendations As a storage administrator, you should have a basic understanding of what to consider before running a Red Hat Ceph Storage cluster: the hardware and network requirements, the types of workloads that work well with a Red Hat Ceph Storage cluster, and Red Hat's recommendations. Red Hat Ceph Storage can be used for different workloads based on a particular business need or set of requirements. Doing the necessary planning before installing Red Hat Ceph Storage is critical to the success of running a Ceph storage cluster efficiently and achieving the business requirements. Note Want help with planning a Red Hat Ceph Storage cluster for a specific use case? Contact your Red Hat representative for assistance. 2.1. Basic Red Hat Ceph Storage considerations The first consideration for using Red Hat Ceph Storage is developing a storage strategy for the data. A storage strategy is a method of storing data that serves a particular use case. If you need to store volumes and images for a cloud platform like OpenStack, you can choose to store data on faster Serial Attached SCSI (SAS) drives with Solid State Drives (SSD) for journals. By contrast, if you need to store object data for an S3- or Swift-compliant gateway, you can choose to use something more economical, like traditional Serial Advanced Technology Attachment (SATA) drives. Red Hat Ceph Storage can accommodate both scenarios in the same storage cluster, but you need a means of providing the fast storage strategy to the cloud platform, and a means of providing more traditional storage for your object store. One of the most important steps in a successful Ceph deployment is identifying a price-to-performance profile suitable for the storage cluster's use case and workload. It is important to choose the right hardware for the use case. For example, choosing IOPS-optimized hardware for a cold storage application increases hardware costs unnecessarily. Conversely, choosing capacity-optimized hardware for its more attractive price point in an IOPS-intensive workload will likely lead to unhappy users complaining about slow performance. Red Hat Ceph Storage can support multiple storage strategies. Use cases, cost versus benefit performance tradeoffs, and data durability are the primary considerations that help develop a sound storage strategy. Use Cases Ceph provides massive storage capacity, and it supports numerous use cases, such as: The Ceph Block Device client is a leading storage backend for cloud platforms that provides limitless storage for volumes and images with high performance features like copy-on-write cloning. The Ceph Object Gateway client is a leading storage backend for cloud platforms that provides RESTful S3-compliant and Swift-compliant object storage for objects like audio, bitmap, video, and other data. The Ceph File System for traditional file storage. Cost vs. Benefit of Performance Faster is better. Bigger is better. High durability is better. However, there is a price for each superlative quality, and a corresponding cost versus benefit tradeoff. Consider the following use cases from a performance perspective: SSDs can provide very fast storage for relatively small amounts of data and journaling. Storing a database or object index can benefit from a pool of very fast SSDs, but proves too expensive for other data.
SAS drives with SSD journaling provide fast performance at an economical price for volumes and images. SATA drives without SSD journaling provide cheap storage with lower overall performance. When you create a CRUSH hierarchy of OSDs, you need to consider the use case and an acceptable cost versus performance tradeoff. Data Durability In large scale storage clusters, hardware failure is an expectation, not an exception. However, data loss and service interruption remain unacceptable. For this reason, data durability is very important. Ceph addresses data durability with multiple replica copies of an object or with erasure coding and multiple coding chunks. Multiple copies or multiple coding chunks present an additional cost versus benefit tradeoff: it is cheaper to store fewer copies or coding chunks, but it can lead to the inability to service write requests in a degraded state. Generally, one object with two additional copies, or two coding chunks can allow a storage cluster to service writes in a degraded state while the storage cluster recovers. Replication stores one or more redundant copies of the data across failure domains in case of a hardware failure. However, redundant copies of data can become expensive at scale. For example, to store 1 petabyte of data with triple replication would require a cluster with at least 3 petabytes of storage capacity. Erasure coding stores data as data chunks and coding chunks. In the event of a lost data chunk, erasure coding can recover the lost data chunk with the remaining data chunks and coding chunks. Erasure coding is substantially more economical than replication. For example, using erasure coding with 8 data chunks and 3 coding chunks provides the same redundancy as 3 copies of the data. However, such an encoding scheme uses approximately 1.5x the initial data stored compared to 3x with replication. The CRUSH algorithm aids this process by ensuring that Ceph stores additional copies or coding chunks in different locations within the storage cluster. This ensures that the failure of a single storage device or host does not lead to a loss of all of the copies or coding chunks necessary to preclude data loss. You can plan a storage strategy with cost versus benefit tradeoffs, and data durability in mind, then present it to a Ceph client as a storage pool. Important ONLY the data storage pool can use erasure coding. Pools storing service data and bucket indexes use replication. Important Ceph's object copies or coding chunks make RAID solutions obsolete. Do not use RAID, because Ceph already handles data durability, a degraded RAID has a negative impact on performance, and recovering data using RAID is substantially slower than using deep copies or erasure coding chunks. Additional Resources See the Minimum hardware considerations for Red Hat Ceph Storage section of the Red Hat Ceph Storage Installation Guide for more details. 2.2. Red Hat Ceph Storage workload considerations One of the key benefits of a Ceph storage cluster is the ability to support different types of workloads within the same storage cluster using performance domains. Different hardware configurations can be associated with each performance domain. Storage administrators can deploy storage pools on the appropriate performance domain, providing applications with storage tailored to specific performance and cost profiles. Selecting appropriately sized and optimized servers for these performance domains is an essential aspect of designing a Red Hat Ceph Storage cluster. 
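As a hedged sketch of how a performance domain can be expressed in practice, the following commands create a CRUSH rule restricted to the ssd device class and a replicated pool that uses it; the rule name, pool name, and placement group counts are illustrative assumptions rather than values from this guide:

ceph osd crush rule create-replicated fast-ssd default host ssd
ceph osd pool create fastpool 128 128 replicated fast-ssd

A pool created this way places its data only on OSDs backed by the named device class, which is how the IOPS-optimized, throughput-optimized, and capacity-optimized domains described in this section can coexist in one storage cluster.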
To the Ceph client interface that reads and writes data, a Ceph storage cluster appears as a simple pool where the client stores data. However, the storage cluster performs many complex operations in a manner that is completely transparent to the client interface. Ceph clients and Ceph object storage daemons, referred to as Ceph OSDs, or simply OSDs, both use the Controlled Replication Under Scalable Hashing (CRUSH) algorithm for the storage and retrieval of objects. Ceph OSDs can run in containers within the storage cluster. A CRUSH map describes a topography of cluster resources, and the map exists both on client hosts as well as Ceph Monitor hosts within the cluster. Ceph clients and Ceph OSDs both use the CRUSH map and the CRUSH algorithm. Ceph clients communicate directly with OSDs, eliminating a centralized object lookup and a potential performance bottleneck. With awareness of the CRUSH map and communication with their peers, OSDs can handle replication, backfilling, and recovery-allowing for dynamic failure recovery. Ceph uses the CRUSH map to implement failure domains. Ceph also uses the CRUSH map to implement performance domains, which simply take the performance profile of the underlying hardware into consideration. The CRUSH map describes how Ceph stores data, and it is implemented as a simple hierarchy, specifically an acyclic graph, and a ruleset. The CRUSH map can support multiple hierarchies to separate one type of hardware performance profile from another. Ceph implements performance domains with device "classes". For example, you can have these performance domains coexisting in the same Red Hat Ceph Storage cluster: Hard disk drives (HDDs) are typically appropriate for cost and capacity-focused workloads. Throughput-sensitive workloads typically use HDDs with Ceph write journals on solid state drives (SSDs). IOPS-intensive workloads, such as MySQL and MariaDB, often use SSDs. Figure 2.1. Performance and Failure Domains Workloads Red Hat Ceph Storage is optimized for three primary workloads. Important Carefully consider the workload being run by Red Hat Ceph Storage clusters BEFORE considering what hardware to purchase, because it can significantly impact the price and performance of the storage cluster. For example, if the workload is capacity-optimized and the hardware is better suited to a throughput-optimized workload, then hardware will be more expensive than necessary. Conversely, if the workload is throughput-optimized and the hardware is better suited to a capacity-optimized workload, then the storage cluster can suffer from poor performance. IOPS optimized: Input, output per second (IOPS) optimization deployments are suitable for cloud computing operations, such as running MYSQL or MariaDB instances as virtual machines on OpenStack. IOPS optimized deployments require higher performance storage such as 15k RPM SAS drives and separate SSD journals to handle frequent write operations. Some high IOPS scenarios use all flash storage to improve IOPS and total throughput. An IOPS-optimized storage cluster has the following properties: Lowest cost per IOPS. Highest IOPS per GB. 99th percentile latency consistency. Uses for an IOPS-optimized storage cluster are: Typically block storage. 3x replication for hard disk drives (HDDs) or 2x replication for solid state drives (SSDs). MySQL on OpenStack clouds. Throughput optimized: Throughput-optimized deployments are suitable for serving up significant amounts of data, such as graphic, audio, and video content. 
Throughput-optimized deployments require high bandwidth networking hardware, controllers, and hard disk drives with fast sequential read and write characteristics. If fast data access is a requirement, then use a throughput-optimized storage strategy. Also, if fast write performance is a requirement, using Solid State Disks (SSD) for journals will substantially improve write performance. A throughput-optimized storage cluster has the following properties: Lowest cost per MBps (throughput). Highest MBps per TB. Highest MBps per BTU. Highest MBps per Watt. 97th percentile latency consistency. Uses for a throughput-optimized storage cluster are: Block or object storage. 3x replication. Active performance storage for video, audio, and images. Streaming media, such as 4k video. Capacity optimized: Capacity-optimized deployments are suitable for storing significant amounts of data as inexpensively as possible. Capacity-optimized deployments typically trade performance for a more attractive price point. For example, capacity-optimized deployments often use slower and less expensive SATA drives and co-locate journals rather than using SSDs for journaling. A cost and capacity-optimized storage cluster has the following properties: Lowest cost per TB. Lowest BTU per TB. Lowest Watts required per TB. Uses for a cost and capacity-optimized storage cluster are: Typically object storage. Erasure coding for maximizing usable capacity Object archive. Video, audio, and image object repositories. 2.3. Network considerations for Red Hat Ceph Storage An important aspect of a cloud storage solution is that storage clusters can run out of IOPS due to network latency, and other factors. Also, the storage cluster can run out of throughput due to bandwidth constraints long before the storage clusters run out of storage capacity. This means that the network hardware configuration must support the chosen workloads to meet price versus performance requirements. Storage administrators prefer that a storage cluster recovers as quickly as possible. Carefully consider bandwidth requirements for the storage cluster network, be mindful of network link oversubscription, and segregate the intra-cluster traffic from the client-to-cluster traffic. Also consider that network performance is increasingly important when considering the use of Solid State Disks (SSD), flash, NVMe, and other high performing storage devices. Ceph supports a public network and a storage cluster network. The public network handles client traffic and communication with Ceph Monitors. The storage cluster network handles Ceph OSD heartbeats, replication, backfilling, and recovery traffic. At a minimum , a single 10 Gb/s Ethernet link should be used for storage hardware, and you can add additional 10 Gb/s Ethernet links for connectivity and throughput. Important Red Hat recommends allocating bandwidth to the storage cluster network, such that it is a multiple of the public network using the osd_pool_default_size as the basis for the multiple on replicated pools. Red Hat also recommends running the public and storage cluster networks on separate network cards. Important Red Hat recommends using 10 Gb/s Ethernet for Red Hat Ceph Storage deployments in production. A 1 Gb/s Ethernet network is not suitable for production storage clusters. In the case of a drive failure, replicating 1 TB of data across a 1 Gb/s network takes 3 hours and replicating 10 TB across a 1 Gb/s network takes 30 hours. Using 10 TB is the typical drive configuration. 
By contrast, with a 10 Gb/s Ethernet network, the replication times would be 20 minutes for 1 TB and 1 hour for 10 TB. Remember that when a Ceph OSD fails, the storage cluster will recover by replicating the data it contained to other Ceph OSDs within the pool. The failure of a larger domain such as a rack means that the storage cluster utilizes considerably more bandwidth. When building a storage cluster consisting of multiple racks, which is common for large storage implementations, consider utilizing as much network bandwidth between switches in a "fat tree" design for optimal performance. A typical 10 Gb/s Ethernet switch has 48 10 Gb/s ports and four 40 Gb/s ports. Use the 40 Gb/s ports on the spine for maximum throughput. Alternatively, consider aggregating unused 10 Gb/s ports with QSFP+ and SFP+ cables into more 40 Gb/s ports to connect to other rack and spine routers. Also, consider using LACP mode 4 to bond network interfaces. Additionally, use jumbo frames, with a maximum transmission unit (MTU) of 9000, especially on the backend or cluster network. Before installing and testing a Red Hat Ceph Storage cluster, verify the network throughput. Most performance-related problems in Ceph usually begin with a networking issue. Simple network issues like a kinked or bent Cat-6 cable could result in degraded bandwidth. Use a minimum of 10 Gb/s ethernet for the front side network. For large clusters, consider using 40 Gb/s ethernet for the backend or cluster network. Important For network optimization, Red Hat recommends using jumbo frames for a better CPU per bandwidth ratio, and a non-blocking network switch back-plane. Red Hat Ceph Storage requires the same MTU value throughout all networking devices in the communication path, end-to-end for both public and cluster networks. Verify that the MTU value is the same on all hosts and networking equipment in the environment before using a Red Hat Ceph Storage cluster in production. Additional Resources See the Configuring a private network section in the Red Hat Ceph Storage Configuration Guide for more details. See the Configuring a public network section in the Red Hat Ceph Storage Configuration Guide for more details. See the Configuring multiple public networks to the cluster section in the Red Hat Ceph Storage Configuration Guide for more details. 2.4. Considerations for using a RAID controller with OSD hosts Optionally, you can consider using a RAID controller on the OSD hosts. Here are some things to consider: If an OSD host has a RAID controller with 1-2 Gb of cache installed, enabling the write-back cache might result in increased small I/O write throughput. However, the cache must be non-volatile. Most modern RAID controllers have super capacitors that provide enough power to drain volatile memory to non-volatile NAND memory during a power-loss event. It is important to understand how a particular controller and its firmware behave after power is restored. Some RAID controllers require manual intervention. Hard drives typically advertise to the operating system whether their disk caches should be enabled or disabled by default. However, certain RAID controllers and some firmware do not provide such information. Verify that disk level caches are disabled to avoid file system corruption. Create a single RAID 0 volume with write-back for each Ceph OSD data drive with write-back cache enabled. 
If Serial Attached SCSI (SAS) or SATA connected Solid-state Drive (SSD) disks are also present on the RAID controller, then investigate whether the controller and firmware support pass-through mode. Enabling pass-through mode helps avoid caching logic, and generally results in much lower latency for fast media. 2.5. Tuning considerations for the Linux kernel when running Ceph Production Red Hat Ceph Storage clusters generally benefit from tuning the operating system, specifically around limits and memory allocation. Ensure that adjustments are set for all hosts within the storage cluster. You can also open a case with Red Hat support asking for additional guidance. Increase the File Descriptors The Ceph Object Gateway can hang if it runs out of file descriptors. You can modify the /etc/security/limits.conf file on Ceph Object Gateway hosts to increase the file descriptors for the Ceph Object Gateway. Adjusting the ulimit value for Large Storage Clusters When running Ceph administrative commands on large storage clusters, for example, with 1024 Ceph OSDs or more, create an /etc/security/limits.d/50-ceph.conf file on each host that runs administrative commands with the following contents: Replace USER_NAME with the name of the non-root user account that runs the Ceph administrative commands. Note The root user's ulimit value is already set to unlimited by default on Red Hat Enterprise Linux. 2.6. How colocation works and its advantages You can colocate containerized Ceph daemons on the same host. Here are the advantages of colocating some of Ceph's services: Significant improvement in total cost of ownership (TCO) at small scale Reduction from six hosts to three for the minimum configuration Easier upgrade Better resource isolation How Colocation Works With the help of the Cephadm orchestrator, you can colocate one daemon from the following list with one or more OSD daemons (ceph-osd): Ceph Monitor ( ceph-mon ) and Ceph Manager ( ceph-mgr ) daemons NFS Ganesha ( nfs-ganesha ) for Ceph Object Gateway (nfs-ganesha) RBD Mirror ( rbd-mirror ) Observability Stack (Grafana) Additionally, for Ceph Object Gateway ( radosgw ) (RGW) and Ceph File System ( ceph-mds ), you can colocate either with an OSD daemon plus a daemon from the above list, excluding RBD mirror. Note Collocating two of the same kind of daemons on a given node is not supported. Note Because ceph-mon and ceph-mgr work together closely they do not count as two separate daemons for the purposes of colocation. Note Red Hat recommends colocating the Ceph Object Gateway with Ceph OSD containers to increase performance. With the colocation rules shared above, we have the following minimum clusters sizes that comply with these rules: Example 1 Media: Full flash systems (SSDs) Use case: Block (RBD) and File (CephFS), or Object (Ceph Object Gateway) Number of nodes: 3 Replication scheme: 2 Host Daemon Daemon Daemon host1 OSD Monitor/Manager Grafana host2 OSD Monitor/Manager RGW or CephFS host3 OSD Monitor/Manager RGW or CephFS Note The minimum size for a storage cluster with three replicas is four nodes. Similarly, the size of a storage cluster with two replicas is a three node cluster. It is a requirement to have a certain number of nodes for the replication factor with an extra node in the cluster to avoid extended periods with the cluster in a degraded state. Figure 2.2. 
Colocated Daemons Example 1 Example 2 Media: Full flash systems (SSDs) or spinning devices (HDDs) Use case: Block (RBD), File (CephFS), and Object (Ceph Object Gateway) Number of nodes: 4 Replication scheme: 3 Host Daemon Daemon Daemon host1 OSD Grafana CephFS host2 OSD Monitor/Manager RGW host3 OSD Monitor/Manager RGW host4 OSD Monitor/Manager CephFS Figure 2.3. Colocated Daemons Example 2 Example 3 Media: Full flash systems (SSDs) or spinning devices (HDDs) Use case: Block (RBD), Object (Ceph Object Gateway), and NFS for Ceph Object Gateway Number of nodes: 4 Replication scheme: 3 Host Daemon Daemon Daemon host1 OSD Grafana host2 OSD Monitor/Manager RGW host3 OSD Monitor/Manager RGW host4 OSD Monitor/Manager NFS (RGW) Figure 2.4. Colocated Daemons Example 3 The diagrams below shows the differences between storage clusters with colocated and non-colocated daemons. Figure 2.5. Colocated Daemons Figure 2.6. Non-colocated Daemons 2.7. Operating system requirements for Red Hat Ceph Storage Red Hat Enterprise Linux entitlements are included in the Red Hat Ceph Storage subscription. The release of Red Hat Ceph Storage 7 is supported on Red Hat Enterprise Linux 9.2. Red Hat Ceph Storage 7 is supported on container-based deployments only. Use the same architecture and deployment type across all nodes. For example, do not use a mixture of nodes with both AMD64 and Intel 64 architectures, or a mixture of nodes with container-based deployments. Important Red Hat does not support clusters with heterogeneous architectures or deployment types. SELinux By default, SELinux is set to Enforcing mode and the ceph-selinux packages are installed. For additional information on SELinux, see the Data Security and Hardening Guide , and Red Hat Enterprise Linux 9 Using SELinux Guide . Additional Resources Red Hat Enterprise Linux 2.8. Minimum hardware considerations for Red Hat Ceph Storage Red Hat Ceph Storage can run on non-proprietary commodity hardware. Small production clusters and development clusters can run without performance optimization with modest hardware. Note Disk space requirements are based on the Ceph daemons' default path under /var/lib/ceph/ directory. Table 2.1. Containers Process Criteria Minimum Recommended ceph-osd-container Processor 1x AMD64 or Intel 64 CPU CORE per OSD container. RAM Minimum of 5 GB of RAM per OSD container. Number of nodes Minimum of 3 nodes required. OS Disk 1x OS disk per host. OSD Storage 1x storage drive per OSD container. Cannot be shared with OS Disk. block.db Optional, but Red Hat recommended, 1x SSD or NVMe or Optane partition or lvm per daemon. Sizing is 4% of block.data for BlueStore for object, file and mixed workloads and 1% of block.data for the BlueStore for Block Device, Openstack cinder, and Openstack cinder workloads. block.wal Optionally, 1x SSD or NVMe or Optane partition or logical volume per daemon. Use a small size, for example 10 GB, and only if it's faster than the block.db device. Network 2x 10 GB Ethernet NICs ceph-mon-container Processor 1x AMD64 or Intel 64 CPU CORE per mon-container RAM 3 GB per mon-container Disk Space 10 GB per mon-container, 50 GB Recommended Monitor Disk Optionally, 1x SSD disk for Monitor rocksdb data Network 2x 1 GB Ethernet NICs, 10 GB Recommended Prometheus 20 GB to 50 GB under /var/lib/ceph/ directory created as a separate file system to protect the contents under /var/ directory. 
ceph-mgr-container Processor 1x AMD64 or Intel 64 CPU CORE per mgr-container RAM 3 GB per mgr-container Network 2x 1 GB Ethernet NICs, 10 GB Recommended ceph-radosgw-container Processor 1x AMD64 or Intel 64 CPU CORE per radosgw-container RAM 1 GB per daemon Disk Space 5 GB per daemon Network 1x 1 GB Ethernet NICs ceph-mds-container Processor 1x AMD64 or Intel 64 CPU CORE per mds-container RAM 3 GB per mds-container This number is highly dependent on the configurable MDS cache size. The RAM requirement is typically twice as much as the amount set in the mds_cache_memory_limit configuration setting. Note also that this is the memory for your daemon, not the overall system memory. Partition Space As a best practice, create a dedicated partition for /var/log with a minimum of 20 GB of free space for this service. If a dedicated partition is not possible, ensure that /var is on a dedicated partition with at least the above-mentioned free space.
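As a quick illustration of the network guidance earlier in this chapter (jumbo frames and verified throughput), checks along the following lines can be run before placing the cluster into production. This is only a sketch: the interface name, host names, and test duration are illustrative, and the iperf3 utility must be installed separately.
# Confirm the interface MTU matches the rest of the path (expecting 9000 when jumbo frames are used)
ip link show eth0 | grep mtu
# Measure raw throughput between two storage hosts
# On host01 (server side):
iperf3 -s
# On host02 (client side):
iperf3 -c host01 -t 30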
[ "ceph soft nofile unlimited", "USER_NAME soft nproc unlimited" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/installation_guide/red-hat-ceph-storage-considerations-and-recommendations
A.3. Fsync
A.3. Fsync Fsync is known as an I/O-expensive operation, but this is not completely true. Firefox used to call the sqlite library each time the user clicked on a link to go to a new page. Sqlite called fsync and, because of the file system settings (mainly ext3 with data-ordered mode), there was a long latency during which nothing happened. This could take a long time (up to 30 seconds) if another process was copying a large file at the same time. However, in other cases, where fsync was not used at all, problems emerged with the switch to the ext4 file system. Ext3 was set to data-ordered mode, which flushed memory every few seconds and saved it to disk. But with ext4 and laptop_mode, the interval between saves was longer and data might get lost when the system was unexpectedly switched off. Now ext4 is patched, but we must still consider the design of our applications carefully, and use fsync as appropriate. The following simple example of reading and writing into a configuration file shows how a backup of a file can be made or how data can be lost: /* open and read configuration file e.g. ./myconfig */ fd = open("./myconfig", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); ... fd = open("./myconfig", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR); write(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); A better approach would be: /* open and read configuration file e.g. ./myconfig */ fd = open("./myconfig", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); ... fd = open("./myconfig.suffix", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR); write(fd, myconfig_buf, sizeof(myconfig_buf)); fsync(fd); /* paranoia - optional */ ... close(fd); rename("./myconfig", "./myconfig~"); /* paranoia - optional */ rename("./myconfig.suffix", "./myconfig");
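When deciding whether an application needs a change like the one above, it can help to confirm how often the application actually calls fsync under a realistic workload. The following tracing session is a minimal sketch; the binary name is illustrative and strace must be available on the system.
# Trace only fsync-related system calls made by the application and its children
strace -f -e trace=fsync,fdatasync ./myapp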
[ "/* open and read configuration file e.g. ./myconfig */ fd = open(\"./myconfig\", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); fd = open(\"./myconfig\", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR); write(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd);", "/* open and read configuration file e.g. ./myconfig */ fd = open(\"./myconfig\", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); fd = open(\"./myconfig.suffix\", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR write(fd, myconfig_buf, sizeof(myconfig_buf)); fsync(fd); /* paranoia - optional */ close(fd); rename(\"./myconfig\", \"./myconfig~\"); /* paranoia - optional */ rename(\"./myconfig.suffix\", \"./myconfig\");" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/developer_tips-fsync
Chapter 6. Using the management API
Chapter 6. Using the management API AMQ Broker has an extensive management API, which you can use to modify a broker's configuration, create new resources (for example, addresses and queues), inspect these resources (for example, how many messages are currently held in a queue), and interact with them (for example, to remove messages from a queue). In addition, clients can use the management API to manage the broker and subscribe to management notifications. 6.1. Methods for managing AMQ Broker using the management API There are two ways to use the management API to manage the broker: Using JMX - JMX is the standard way to manage Java applications Using the JMS API - management operations are sent to the broker using JMS messages and the AMQ JMS client Although there are two different ways to manage the broker, each API supports the same functionality. If it is possible to manage a resource using JMX it is also possible to achieve the same result by using JMS messages and the AMQ JMS client. This choice depends on your particular requirements, application settings, and environment. Regardless of the way you invoke management operations, the management API is the same. For each managed resource, there exists a Java interface describing what can be invoked for this type of resource. The broker exposes its managed resources in the org.apache.activemq.artemis.api.core.management package. The way to invoke management operations depends on whether JMX messages or JMS messages and the AMQ JMS client are used. Note Some management operations require a filter parameter to choose which messages are affected by the operation. Passing null or an empty string means that the management operation will be performed on all messages . 6.2. Managing AMQ Broker using JMX You can use Java Management Extensions (JMX) to manage a broker. The management API is exposed by the broker using MBeans interfaces. The broker registers its resources with the domain org.apache.activemq . For example, the ObjectName to manage a queue named exampleQueue is: org.apache.activemq.artemis:broker="__BROKER_NAME__",component=addresses,address="exampleQueue",subcomponent=queues,routingtype="anycast",queue="exampleQueue" The MBean is: org.apache.activemq.artemis.api.management.QueueControl The MBean's ObjectName is built using the helper class org.apache.activemq.artemis.api.core.management.ObjectNameBuilder . You can also use jconsole to find the ObjectName of the MBeans you want to manage. Managing the broker using JMX is identical to management of any Java applications using JMX. It can be done by reflection or by creating proxies of the MBeans. 6.2.1. Configuring JMX management By default, JMX is enabled to manage the broker. You can enable or disable JMX management by setting the jmx-management-enabled property in the broker.xml configuration file. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Set <jmx-management-enabled> . <jmx-management-enabled>true</jmx-management-enabled> If JMX is enabled, the broker can be managed locally using jconsole . Note Remote connections to JMX are not enabled by default for security reasons. If you want to manage multiple brokers from the same MBeanServer , configure the JMX domain for each of the brokers. By default, the broker uses the JMX domain org.apache.activemq.artemis . <jmx-domain>my.org.apache.activemq</jmx-domain> Note If you are using AMQ Broker on a Windows system, system properties must be set in artemis , or artemis.cmd . 
A shell script is located under <install_dir> /bin . Additional resources For more information on configuring the broker for remote management, see Oracle's Java Management Guide . 6.2.2. Configuring JMX management access By default, remote JMX access to a broker is disabled for security reasons. However, AMQ Broker has a JMX agent that allows remote access to JMX MBeans. You enable JMX access by configuring a connector element in the broker management.xml configuration file. Note While it is also possible to enable JMX access using the `com.sun.management.jmxremote ` JVM system property, that method is not supported and is not secure. Modifying that JVM system property can bypass RBAC on the broker. To minimize security risks, consider limited access to localhost. Important Exposing the JMX agent of a broker for remote management has security implications. To secure your configuration as described in this procedure: Use SSL for all connections. Explicitly define the connector host, that is, the host and port to expose the agent on. Explicitly define the port that the RMI (Remote Method Invocation) registry binds to. Prerequisites A working broker instance The Java jconsole utility Procedure Open the <broker-instance-dir> /etc/management.xml configuration file. Define a connector for the JMX agent. The connector-port setting establishes an RMI registry that clients such as jconsole query for the JMX connector server. For example, to allow remote access on port 1099: <connector connector-port="1099"/> Verify the connection to the JMX agent using jconsole : Define additional properties on the connector, as described below. connector-host The broker server host to expose the agent on. To prevent remote access, set connector-host to 127.0.0.1 (localhost). rmi-registry-port The port that the JMX RMI connector server binds to. If not set, the port is always random. Set this property to avoid problems with remote JMX connections tunnelled through a firewall. jmx-realm JMX realm to use for authentication. The default value is activemq to match the JAAS configuration. object-name Object name to expose the remote connector on. The default value is connector:name=rmi . secured Specify whether the connector is secured using SSL. The default value is false . Set the value to true to ensure secure communication. key-store-path Location of the keystore. Required if you have set secured="true" . key-store-password Keystore password. Required if you have set secured="true" . The password can be encrypted. key-store-provider Keystore provider. Required if you have set secured="true" . The default value is JKS . trust-store-path Location of the truststore. Required if you have set secured="true" . trust-store-password Truststore password. Required if you have set secured="true" . The password can be encrypted. trust-store-provider Truststore provider. Required if you have set secured="true" . The default value is JKS password-codec The fully qualified class name of the password codec to use. See the password masking documentation, linked below, for more details on how this works. Set an appropriate value for the endpoint serialization using jdk.serialFilter as described in the Java Platform documentation . Additional resources For more information about encrypted passwords in configuration files, see Encrypting Passwords in Configuration Files . 6.2.3. MBeanServer configuration When the broker runs in standalone mode, it uses the Java Virtual Machine's Platform MBeanServer to register its MBeans. 
By default, Jolokia is also deployed to allow access to the MBean server using REST. 6.2.4. How JMX is exposed with Jolokia By default, AMQ Broker ships with the Jolokia HTTP agent deployed as a web application. Jolokia is a remote JMX over HTTP bridge that exposes MBeans. Note To use Jolokia, the user must belong to the role defined by the hawtio.role system property in the <broker_instance_dir> /etc/artemis.profile configuration file. By default, this role is amq . Example 6.1. Using Jolokia to query the broker's version This example uses a Jolokia REST URL to find the version of a broker. The Origin flag should specify the domain name or DNS host name for the broker server. In addition, the value you specify for Origin must correspond to an entry for <allow-origin> in your Jolokia Cross-Origin Resource Sharing (CORS) specification. USD curl http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\"0.0.0.0\"/Version -H "Origin: mydomain.com" {"request":{"mbean":"org.apache.activemq.artemis:broker=\"0.0.0.0\"","attribute":"Version","type":"read"},"value":"2.4.0.amq-710002-redhat-1","timestamp":1527105236,"status":200} Additional resources For more information on using a JMX-HTTP bridge, see the Jolokia documentation . For more information on assigning a user to a role, see Adding Users . For more information on specifying Jolokia Cross-Origin Resource Sharing (CORS), see section 4.1.5 of Security . 6.2.5. Subscribing to JMX management notifications If JMX is enabled in your environment, you can subscribe to management notifications. Procedure Subscribe to ObjectName org.apache.activemq.artemis:broker=" <broker-name> " . Additional resources For more information about management notifications, see Section 6.5, "Management notifications" . 6.3. Managing AMQ Broker using the JMS API The Java Message Service (JMS) API allows you to create, send, receive, and read messages. You can use JMS and the AMQ JMS client to manage brokers. 6.3.1. Configuring broker management using JMS messages and the AMQ JMS Client To use JMS to manage a broker, you must first configure the broker's management address with the manage permission. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the <management-address> element, and specify a management address. By default, the management address is activemq.management . You only need to specify a different address if you do not want to use the default. <management-address>my.management.address</management-address> Provide the management address with the manage user permission type. This permission type enables the management address to receive and handle management messages. <security-setting-match="activemq.management"> <permission-type="manage" roles="admin"/> </security-setting> 6.3.2. Managing brokers using the JMS API and AMQ JMS Client To invoke management operations using JMS messages, the AMQ JMS client must instantiate the special management queue. Procedure Create a QueueRequestor to send messages to the management address and receive replies. Create a Message . Use the helper class org.apache.activemq.artemis.api.jms.management.JMSManagementHelper to fill the message with the management properties. Send the message using the QueueRequestor . Use the helper class org.apache.activemq.artemis.api.jms.management.JMSManagementHelper to retrieve the operation result from the management reply. Example 6.2. 
Viewing the number of messages in a queue This example shows how to use the JMS API to view the number of messages in the JMS queue exampleQueue : Queue managementQueue = ActiveMQJMSClient.createQueue("activemq.management"); QueueSession session = ... QueueRequestor requestor = new QueueRequestor(session, managementQueue); connection.start(); Message message = session.createMessage(); JMSManagementHelper.putAttribute(message, "queue.exampleQueue", "messageCount"); Message reply = requestor.request(message); int count = (Integer)JMSManagementHelper.getResult(reply); System.out.println("There are " + count + " messages in exampleQueue"); 6.4. Management operations Whether you are using JMX or JMS messages to manage AMQ Broker, you can use the same API management operations. Using the management API, you can manage brokers, addresses, and queues. 6.4.1. Broker management operations You can use the management API to manage your brokers. Listing, creating, deploying, and destroying queues A list of deployed queues can be retrieved using the getQueueNames() method. Queues can be created or destroyed using the management operations createQueue() , deployQueue() , or destroyQueue() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). createQueue() will fail if the queue already exists, while deployQueue() will do nothing. Pausing and resuming queues The QueueControl can pause and resume the underlying queue. When a queue is paused, it will receive messages but will not deliver them. When it is resumed, it will begin delivering the queued messages, if any. Listing and closing remote connections Retrieve a client's remote addresses by using listRemoteAddresses() . It is also possible to close the connections associated with a remote address using the closeConnectionsForAddress() method. Alternatively, list connection IDs using listConnectionIDs() and list all the sessions for a given connection ID using listSessions() . Managing transactions In case of a broker crash, when the broker restarts, some transactions might require manual intervention. Use the following methods to help resolve issues you encounter. List the transactions that are in the prepared state (the transactions are represented as opaque Base64 Strings) using the listPreparedTransactions() method. Commit or rollback a given prepared transaction using commitPreparedTransaction() or rollbackPreparedTransaction() to resolve heuristic transactions. List heuristically completed transactions using the listHeuristicCommittedTransactions() and listHeuristicRolledBackTransactions() methods. Enabling and resetting message counters Enable and disable message counters using the enableMessageCounters() or disableMessageCounters() method. Reset message counters by using the resetAllMessageCounters() and resetAllMessageCounterHistories() methods. Retrieving broker configuration and attributes The ActiveMQServerControl exposes the broker's configuration through all its attributes (for example, getVersion() method to retrieve the broker's version, and so on). Listing, creating, and destroying Core Bridge and diverts List deployed Core Bridge and diverts using the getBridgeNames() and getDivertNames() methods respectively.
Create or destroy using bridges and diverts using createBridge() and destroyBridge() or createDivert() and destroyDivert() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). Stopping the broker and forcing failover to occur with any currently attached clients Use the forceFailover() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ) Note Because this method actually stops the broker, you will likely receive an error. The exact error depends on the management service you used to call the method. 6.4.2. Address management operations You can use the management API to manage addresses. Manage addresses using the AddressControl class with ObjectName org.apache.activemq.artemis:broker=" <broker-name> ", component=addresses,address=" <address-name> " or the resource name address. <address-name> . Modify roles and permissions for an address using the addRole() or removeRole() methods. You can list all the roles associated with the queue with the getRoles() method. 6.4.3. Queue management operations You can use the management API to manage queues. The core management API deals with queues. The QueueControl class defines the queue management operations (with the ObjectName , org.apache.activemq.artemis:broker=" <broker-name> ",component=addresses,address=" <bound-address> ",subcomponent=queues,routing-type=" <routing-type> ",queue=" <queue-name> " or the resource name queue. <queue-name> ). Most of the management operations on queues take either a single message ID (for example, to remove a single message) or a filter (for example, to expire all messages with a given property). Expiring, sending to a dead letter address, and moving messages Expire messages from a queue using the expireMessages() method. If an expiry address is defined, messages are sent to this address, otherwise they are discarded. You can define the expiry address for an address or set of addresses (and hence the queues bound to those addresses) in the address-settings element of the broker.xml configuration file. For an example, see the "Default message address settings" section in Understanding the default broker configuration . Send messages to a dead letter address using the sendMessagesToDeadLetterAddress() method. This method returns the number of messages sent to the dead letter address. If a dead letter address is defined, messages are sent to this address, otherwise they are removed from the queue and discarded. You can define the dead letter address for an address or set of addresses (and hence the queues bound to those addresses) in the address-settings element of the broker.xml configuration file. For an example, see the "Default message address settings" section in Understanding the default broker configuration . Move messages from one queue to another using the moveMessages() method. Listing and removing messages List messages from a queue using the listMessages() method. It will return an array of Map , one Map for each message. Remove messages from a queue using the removeMessages() method, which returns a boolean for the single message ID variant or the number of removed messages for the filter variant. This method takes a filter argument to remove only filtered messages. Setting the filter to an empty string will in effect remove all messages. Counting messages The number of messages in a queue is returned by the getMessageCount() method. 
Alternatively, the countMessages() will return the number of messages in the queue which match a given filter. Changing message priority The message priority can be changed by using the changeMessagesPriority() method which returns a boolean for the single message ID variant or the number of updated messages for the filter variant. Message counters Message counters can be listed for a queue with the listMessageCounter() and listMessageCounterHistory() methods (see Section 6.6, "Using message counters" ). The message counters can also be reset for a single queue using the resetMessageCounter() method. Retrieving the queue attributes The QueueControl exposes queue settings through its attributes (for example, getFilter() to retrieve the queue's filter if it was created with one, isDurable() to know whether the queue is durable, and so on). Pausing and resuming queues The QueueControl can pause and resume the underlying queue. When a queue is paused, it will receive messages but will not deliver them. When it is resumed, it will begin delivering the queued messages, if any. 6.4.4. Remote resource management operations You can use the management API to start and stop a broker's remote resources (acceptors, diverts, bridges, and so on) so that the broker can be taken offline for a given period of time without stopping completely. Acceptors Start or stop an acceptor using the start() or. stop() method on the AcceptorControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=acceptors,name=" <acceptor-name> " or the resource name acceptor. <address-name> ). Acceptor parameters can be retrieved using the AcceptorControl attributes. See Network Connections: Acceptors and Connectors for more information about Acceptors. Diverts Start or stop a divert using the start() or stop() method on the DivertControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=diverts,name=" <divert-name> " or the resource name divert. <divert-name> ). Divert parameters can be retrieved using the DivertControl attributes. Bridges Start or stop a bridge using the start() (resp. stop() ) method on the BridgeControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=bridge,name=" <bridge-name> " or the resource name bridge. <bridge-name> ). Bridge parameters can be retrieved using the BridgeControl attributes. Broadcast groups Start or stop a broadcast group using the start() or stop() method on the BroadcastGroupControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=broadcast-group,name=" <broadcast-group-name> " or the resource name broadcastgroup. <broadcast-group-name> ). Broadcast group parameters can be retrieved using the BroadcastGroupControl attributes. See Broker discovery methods for more information. Discovery groups Start or stop a discovery group using the start() or stop() method on the DiscoveryGroupControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=discovery-group,name=" <discovery-group-name> " or the resource name discovery. <discovery-group-name> ). Discovery groups parameters can be retrieved using the DiscoveryGroupControl attributes. See Broker discovery methods for more information. 
Cluster connections Start or stop a cluster connection using the start() or stop() method on the ClusterConnectionControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=cluster-connection,name=" <cluster-connection-name> " or the resource name clusterconnection. <cluster-connection-name> ). Cluster connection parameters can be retrieved using the ClusterConnectionControl attributes. See Creating a broker cluster for more information. 6.5. Management notifications Below is a list of all the different kinds of notifications as well as which headers are on the messages. Every notification has a _AMQ_NotifType (value noted in parentheses) and _AMQ_NotifTimestamp header. The time stamp is the unformatted result of a call to java.lang.System.currentTimeMillis() . Notification type Headers BINDING_ADDED (0) _AMQ_Binding_Type _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Binding_ID _AMQ_Distance _AMQ_FilterString BINDING_REMOVED (1) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Binding_ID _AMQ_Distance _AMQ_FilterString CONSUMER_CREATED (2) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Distance _AMQ_ConsumerCount _AMQ_User _AMQ_RemoteAddress _AMQ_SessionName _AMQ_FilterString CONSUMER_CLOSED (3) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Distance _AMQ_ConsumerCount _AMQ_User _AMQ_RemoteAddress _AMQ_SessionName _AMQ_FilterString SECURITY_AUTHENTICATION_VIOLATION (6) _AMQ_User SECURITY_PERMISSION_VIOLATION (7) _AMQ_Address _AMQ_CheckType _AMQ_User DISCOVERY_GROUP_STARTED (8) name DISCOVERY_GROUP_STOPPED (9) name BROADCAST_GROUP_STARTED (10) name BROADCAST_GROUP_STOPPED (11) name BRIDGE_STARTED (12) name BRIDGE_STOPPED (13) name CLUSTER_CONNECTION_STARTED (14) name CLUSTER_CONNECTION_STOPPED (15) name ACCEPTOR_STARTED (16) factory id ACCEPTOR_STOPPED (17) factory id PROPOSAL (18) _JBM_ProposalGroupId _JBM_ProposalValue _AMQ_Binding_Type _AMQ_Address _AMQ_Distance PROPOSAL_RESPONSE (19) _JBM_ProposalGroupId _JBM_ProposalValue _JBM_ProposalAltValue _AMQ_Binding_Type _AMQ_Address _AMQ_Distance CONSUMER_SLOW (21) _AMQ_Address _AMQ_ConsumerCount _AMQ_RemoteAddress _AMQ_ConnectionName _AMQ_ConsumerName _AMQ_SessionName 6.6. Using message counters You use message counters to obtain information about queues over time. This helps you to identify trends that would otherwise be difficult to see. For example, you could use message counters to determine how a particular queue is being used over time. You could also attempt to obtain this information by using the management API to query the number of messages in the queue at regular intervals, but this would not show how the queue is actually being used. The number of messages in a queue can remain constant because no clients are sending or receiving messages on it, or because the number of messages sent to the queue is equal to the number of messages consumed from it. In both of these cases, the number of messages in the queue remains the same even though it is being used in very different ways. 6.6.1. Types of message counters Message counters provide additional information about queues on a broker. count The total number of messages added to the queue since the broker was started. countDelta The number of messages added to the queue since the last message counter update. lastAckTimestamp The time stamp of the last time a message from the queue was acknowledged. lastAddTimestamp The time stamp of the last time a message was added to the queue. 
messageCount The current number of messages in the queue. messageCountDelta The overall number of messages added/removed from the queue since the last message counter update. For example, if messageCountDelta is -10 , then 10 messages overall have been removed from the queue. udpateTimestamp The time stamp of the last message counter update. Note You can combine message counters to determine other meaningful data as well. For example, to know specifically how many messages were consumed from the queue since the last update, you would subtract the messageCountDelta from countDelta . 6.6.2. Enabling message counters Message counters can have a small impact on the broker's memory; therefore, they are disabled by default. To use message counters, you must first enable them. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Enable message counters. <message-counter-enabled>true</message-counter-enabled> Set the message counter history and sampling period. <message-counter-max-day-history>7</message-counter-max-day-history> <message-counter-sample-period>60000</message-counter-sample-period> message-counter-max-day-history The number of days the broker should store queue metrics. The default is 10 days. message-counter-sample-period How often (in milliseconds) the broker should sample its queues to collect metrics. The default is 10000 milliseconds. 6.6.3. Retrieving message counters You can use the management API to retrieve message counters. Prerequisites Message counters must be enabled on the broker. For more information, see Section 6.6.2, "Enabling message counters" . Procedure Use the management API to retrieve message counters. // Retrieve a connection to the broker's MBeanServer. MBeanServerConnection mbsc = ... JMSQueueControlMBean queueControl = (JMSQueueControl)MBeanServerInvocationHandler.newProxyInstance(mbsc, on, JMSQueueControl.class, false); // Message counters are retrieved as a JSON string. String counters = queueControl.listMessageCounter(); // Use the MessageCounterInfo helper class to manipulate message counters more easily. MessageCounterInfo messageCounter = MessageCounterInfo.fromJSON(counters); System.out.format("%s message(s) in the queue (since last sample: %s)\n", messageCounter.getMessageCount(), messageCounter.getMessageCountDelta()); Additional resources For more information about message counters, see Section 6.4.3, "Queue management operations" .
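Combining the Jolokia access described in Section 6.2.4 with the queue ObjectName format from Section 6.4.3, a query along the following lines could read a queue's current message count over HTTP. This is a sketch only: the credentials, broker name, address, and queue name are illustrative, and depending on your HTTP client you might need to URL-encode the quotation marks.
curl http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\"0.0.0.0\",component=addresses,address=\"exampleQueue\",subcomponent=queues,routing-type=\"anycast\",queue=\"exampleQueue\"/MessageCount -H "Origin: mydomain.com"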
[ "org.apache.activemq.artemis:broker=\"__BROKER_NAME__\",component=addresses,address=\"exampleQueue\",subcomponent=queues,routingtype=\"anycast\",queue=\"exampleQueue\"", "org.apache.activemq.artemis.api.management.QueueControl", "<jmx-management-enabled>true</jmx-management-enabled>", "<jmx-domain>my.org.apache.activemq</jmx-domain>", "<connector connector-port=\"1099\"/>", "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi", "curl http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\\\"0.0.0.0\\\"/Version -H \"Origin: mydomain.com\" {\"request\":{\"mbean\":\"org.apache.activemq.artemis:broker=\\\"0.0.0.0\\\"\",\"attribute\":\"Version\",\"type\":\"read\"},\"value\":\"2.4.0.amq-710002-redhat-1\",\"timestamp\":1527105236,\"status\":200}", "<management-address>my.management.address</management-address>", "<security-setting-match=\"activemq.management\"> <permission-type=\"manage\" roles=\"admin\"/> </security-setting>", "Queue managementQueue = ActiveMQJMSClient.createQueue(\"activemq.management\"); QueueSession session = QueueRequestor requestor = new QueueRequestor(session, managementQueue); connection.start(); Message message = session.createMessage(); JMSManagementHelper.putAttribute(message, \"queue.exampleQueue\", \"messageCount\"); Message reply = requestor.request(message); int count = (Integer)JMSManagementHelper.getResult(reply); System.out.println(\"There are \" + count + \" messages in exampleQueue\");", "<message-counter-enabled>true</message-counter-enabled>", "<message-counter-max-day-history>7</message-counter-max-day-history> <message-counter-sample-period>60000</message-counter-sample-period>", "// Retrieve a connection to the broker's MBeanServer. MBeanServerConnection mbsc = JMSQueueControlMBean queueControl = (JMSQueueControl)MBeanServerInvocationHandler.newProxyInstance(mbsc, on, JMSQueueControl.class, false); // Message counters are retrieved as a JSON string. String counters = queueControl.listMessageCounter(); // Use the MessageCounterInfo helper class to manipulate message counters more easily. MessageCounterInfo messageCounter = MessageCounterInfo.fromJSON(counters); System.out.format(\"%s message(s) in the queue (since last sample: %s)\\n\", messageCounter.getMessageCount(), messageCounter.getMessageCountDelta());" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/managing_amq_broker/management-api-managing
Chapter 101. JarArtifact schema reference
Chapter 101. JarArtifact schema reference Used in: Plugin Property Property type Description type string Must be jar . url string URL of the artifact which will be downloaded. Streams for Apache Kafka does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required for jar , zip , tgz and other artifacts. Not applicable to the maven artifact type. sha512sum string SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. Not applicable to the maven artifact type. insecure boolean By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifact will be downloaded, even when the server is considered insecure.
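For example, the value for the sha512sum property can be computed locally before it is added to the plugin definition, so that the automated build can verify the downloaded artifact. The file name below is illustrative.
# Compute the SHA512 checksum of the artifact referenced by the url property
sha512sum my-connector-plugin.jar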
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-jarartifact-reference
Chapter 3. Features and benefits
Chapter 3. Features and benefits 3.1. Current features and benefits .NET 6.0 offers the following features and benefits. Runtime and framework libraries .NET consists of the runtime and the framework libraries as well as compilers, build tools, tools to fetch NuGet packages, and a command-line interface to tie everything together. Benefits include: Automatic memory management Type safety Delegates and lambdas Generic types Language Integrated Query (LINQ) Async programming Native interoperability Source generators .NET 6.0 supports developing applications using ASP.NET Core 6.0 and EF Core 6.0, which bring benefits such as: Lightweight and modular HTTP request pipeline Ability to host on a web server or self-host in your own process Built on .NET, which supports true side-by-side app versioning Integrated support for creating and using NuGet packages Single aligned web stack for web UI and web APIs Cloud-ready environment-based configuration Built-in support for dependency injection Tools that simplify modern web development 3.2. New features and benefits Support for 64-bit Arm (aarch 64), IBM Z, and LinuxONE (s390x) .NET 6.0 introduces support for 64-bit Arm running on Red Hat Enterprise Linux 8. .NET 6.0 introduces support for IBM Z and LinuxONE running on Red Hat Enterprise Linux 8 and OpenShift Container Platform 4.2 or later. .NET 6.0 continues to broaden its support and tools for application development in an open source environment. The latest version of .NET includes the following improvements: Support for C# 10 Support for F# 6 Single-file source programs Performance improvements in base libraries, GC and JIT Source-generators for logging and JSON Better diagnostics with dotnet-monitor
null
https://docs.redhat.com/en/documentation/net/6.0/html/release_notes_for_.net_6.0_rpm_packages/features-and-benefits_release-notes-for-dotnet-rpms
Chapter 4. Reviewing and resolving migration issues
Chapter 4. Reviewing and resolving migration issues You can review and resolve migration issues identified by the MTA plugin in the left pane. 4.1. Reviewing issues You can use the MTA plugin icons to prioritize issues based on their severity. You can see which issues have a Quick Fix automatic code replacement and which do not. The results of an analysis are displayed in a directory format, showing the hints and classifications for each application analyzed. A hint is a read-only snippet of code that contains a single issue that you should or must address before you can modernize or migrate an application. Often a Quick Fix is suggested, which you can accept or ignore. A classification is a file that has an issue but does not have any suggested Quick Fixes. You can edit a classification. Procedure In the Migration Toolkit for Applications view, select a run configuration directory in the left pane. Click Results . The modules and applications of the run configuration are displayed, with hints and classifications beneath each application. Prioritize issues based on the following icons, which are displayed to each hint: : You must fix this issue in order to migrate or modernize the application. : You might need to fix this issue in order to migrate or modernize the application Optional: To learn more about a hint, right-click it and select Show More Details . 4.2. Resolving issues You can resolve issues by doing one of the following: Using a Quick Fix to fix a code snippet that has a hint Editing the code of a file that appears in a classification 4.2.1. Using a Quick Fix You can use a Quick Fix automatic code replacement to save time and ensure consistency in resolving repetitive issues. Quick Fixes are available for many issues displayed in the Hints section of the Results directory. Procedure In the left pane, click a hint that has an error indicator. Any Quick Fixes are displayed as child folders with the Quick Fix icon ( ) on their left side. Right-click a Quick Fix and select Preview Quick Fix . The current code and the suggested change are displayed in the Preview Quick Fix window. To accept the suggested fix, click Apply Quick Fix . Optional: Right-click the issue and select Mark As Complete . A green check ( ) is displayed by the hint, replacing the error indicator. 4.2.2. Editing the code of a file You can directly edit a file displayed in the Classifications section of the Results directory. These files do not have any Quick Fixes. Procedure In the left pane, click the file you want to edit. Make any changes needed to the code and save the file. Optional: Right-click the issue and select Mark as Complete or Delete . If you select Mark as Complete , a green check ( ) is displayed by the hint, replacing the error indicator.
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/intellij_idea_plugin_guide/reviewing-and-resolving-migration-issues
Chapter 11. Cephadm troubleshooting
Chapter 11. Cephadm troubleshooting As a storage administrator, you can troubleshoot the Red Hat Ceph Storage cluster. Sometimes there is a need to investigate why a Cephadm command failed or why a specific service does not run properly. 11.1. Prerequisites A running Red Hat Ceph Storage cluster. 11.2. Pause or disable cephadm If Cephadm does not behave as expected, you can pause most of the background activity with the following command: Example This stops any changes, but Cephadm periodically checks hosts to refresh its inventory of daemons and devices. If you want to disable Cephadm completely, run the following commands: Example Note that previously deployed daemon containers continue to exist and start as they did before. To re-enable Cephadm in the cluster, run the following commands: Example 11.3. Per service and per daemon event Cephadm stores events per service and per daemon in order to aid in debugging failed daemon deployments. These events often contain relevant information: Per service Syntax Example Per daemon Syntax Example 11.4. Check cephadm logs You can monitor the Cephadm log in real time with the following command: Example You can see the last few messages with the following command: Example If you have enabled logging to files, you can see a Cephadm log file called ceph.cephadm.log on the monitor hosts. 11.5. Gather log files You can use the journalctl command to gather the log files for all the daemons. Note You have to run all these commands outside the cephadm shell. Note By default, Cephadm stores logs in journald which means that daemon logs are no longer available in /var/log/ceph . To read the log file of a specific daemon, run the following command: Syntax Example Note This command works when run on the same host where the daemon is running. To read the log file of a specific daemon running on a different host, run the following command: Syntax Example where fsid is the cluster ID provided by the ceph status command. To fetch all log files of all the daemons on a given host, run the following command: Syntax Example 11.6. Collect systemd status To print the state of a systemd unit, run the following command: Example 11.7. List all downloaded container images To list all the container images that are downloaded on a host, run the following command: Example 11.8. Manually run containers Cephadm writes small wrappers that run a container. Refer to /var/lib/ceph/ CLUSTER_FSID / SERVICE_NAME /unit to run the container execution command. Analysing SSH errors If you get the following error: Example Try the following options to troubleshoot the issue: To ensure Cephadm has an SSH identity key, run the following command: Example If the above command fails, Cephadm does not have a key. To generate an SSH key, run the following command: Example Or Example To ensure that the SSH configuration is correct, run the following command: Example To verify the connection to the host, run the following command: Example Verify public key is in authorized_keys . To verify that the public key is in the authorized_keys file, run the following commands: Example 11.9. CIDR network error Classless inter domain routing (CIDR), also known as supernetting, is a method of assigning Internet Protocol (IP) addresses that improves the efficiency of address distribution and replaces the earlier system based on Class A, Class B, and Class C networks. The Cephadm log entries show the current state.
If you see one of the following errors: ERROR: Failed to infer CIDR network for mon ip * ; pass --skip-mon-network to configure it later Or Must set public_network config option or specify a CIDR network, ceph addrvec, or plain IP You need to run the following command: Example 11.10. Access the admin socket Each Ceph daemon provides an admin socket that bypasses the MONs. To access the admin socket, enter the daemon container on the host: Example 11.11. Manually deploying a mgr daemon Cephadm requires a mgr daemon in order to manage the Red Hat Ceph Storage cluster. In case the last mgr daemon of a Red Hat Ceph Storage cluster was removed, you can manually deploy a mgr daemon, on a random host of the Red Hat Ceph Storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. Procedure Log into the Cephadm shell: Example Disable the Cephadm scheduler to prevent Cephadm from removing the new MGR daemon, with the following command: Example Get or create the auth entry for the new MGR daemon: Example Open ceph.conf file: Example Get the container image: Example Create a config-json.json file and add the following: Note Use the values from the output of the ceph config generate-minimal-conf command. Example Exit from the Cephadm shell: Example Deploy the MGR daemon: Example Verification In the Cephadm shell, run the following command: Example You can see a new mgr daemon has been added.
[ "ceph orch pause", "ceph orch set backend '' ceph mgr module disable cephadm", "ceph mgr module enable cephadm ceph orch set backend cephadm", "ceph orch ls --service_name SERVICE_NAME --format yaml", "ceph orch ls --service_name alertmanager --format yaml service_type: alertmanager service_name: alertmanager placement: hosts: - unknown_host status: running: 1 size: 1 events: - 2021-02-01T08:58:02.741162 service:alertmanager [INFO] \"service was created\" - '2021-02-01T12:09:25.264584 service:alertmanager [ERROR] \"Failed to apply: Cannot place <AlertManagerSpec for service_name=alertmanager> on unknown_host: Unknown hosts\"'", "ceph orch ps --service-name SERVICE_NAME --daemon-id DAEMON_ID --format yaml", "ceph orch ps --service-name mds --daemon-id cephfs.hostname.ppdhsz --format yaml daemon_type: mds daemon_id: cephfs.hostname.ppdhsz hostname: hostname status_desc: running events: - 2021-02-01T08:59:43.845866 daemon:mds.cephfs.hostname.ppdhsz [INFO] \"Reconfigured mds.cephfs.hostname.ppdhsz on host 'hostname'\"", "ceph -W cephadm", "ceph log last cephadm", "cephadm logs --name DAEMON_NAME", "cephadm logs --name cephfs.hostname.ppdhsz", "cephadm logs --fsid FSID --name DAEMON_NAME", "cephadm logs --fsid 2d2fd136-6df1-11ea-ae74-002590e526e8 --name cephfs.hostname.ppdhsz", "for name in USD(cephadm ls | python3 -c \"import sys, json; [print(i['name']) for i in json.load(sys.stdin)]\") ; do cephadm logs --fsid FSID_OF_CLUSTER --name \"USDname\" > USDname; done", "for name in USD(cephadm ls | python3 -c \"import sys, json; [print(i['name']) for i in json.load(sys.stdin)]\") ; do cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name \"USDname\" > USDname; done", "[root@host01 ~]USD systemctl status [email protected]", "podman ps -a --format json | jq '.[].Image' \"docker.io/library/rhel8\" \"registry.redhat.io/rhceph-alpha/rhceph-5-rhel8@sha256:9aaea414e2c263216f3cdcb7a096f57c3adf6125ec9f4b0f5f65fa8c43987155\"", "execnet.gateway_bootstrap.HostNotFound: -F /tmp/cephadm-conf-73z09u6g -i /tmp/cephadm-identity-ky7ahp_5 [email protected] raise OrchestratorError(msg) from e orchestrator._interface.OrchestratorError: Failed to connect to 10.10.1.2 (10.10.1.2). 
Please make sure that the host is reachable and accepts connections using the cephadm SSH key", "ceph config-key get mgr/cephadm/ssh_identity_key > ~/cephadm_private_key INFO:cephadm:Inferring fsid f8edc08a-7f17-11ea-8707-000c2915dd98 INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15 obtained 'mgr/cephadm/ssh_identity_key' chmod 0600 ~/cephadm_private_key", "chmod 0600 ~/cephadm_private_key", "cat ~/cephadm_private_key | ceph cephadm set-ssk-key -i-", "ceph cephadm get-ssh-config", "ssh -F config -i ~/cephadm_private_key root@host01", "ceph cephadm get-pub-key grep \"`cat ~/ceph.pub`\" /root/.ssh/authorized_keys", "ceph config set host public_network hostnetwork", "cephadm enter --name cephfs.hostname.ppdhsz ceph --admin-daemon /var/run/ceph/ceph-cephfs.hostname.ppdhsz.asok config show", "cephadm shell", "ceph config-key set mgr/cephadm/pause true", "ceph auth get-or-create mgr.host01.smfvfd1 mon \"profile mgr\" osd \"allow *\" mds \"allow *\" [mgr.host01.smfvfd1] key = AQDhcORgW8toCRAAlMzlqWXnh3cGRjqYEa9ikw==", "ceph config generate-minimal-conf minimal ceph.conf for 8c9b0072-67ca-11eb-af06-001a4a0002a0 [global] fsid = 8c9b0072-67ca-11eb-af06-001a4a0002a0 mon_host = [v2:10.10.200.10:3300/0,v1:10.10.200.10:6789/0] [v2:10.10.10.100:3300/0,v1:10.10.200.100:6789/0]", "ceph config get \"mgr.host01.smfvfd1\" container_image", "{ { \"config\": \"# minimal ceph.conf for 8c9b0072-67ca-11eb-af06-001a4a0002a0\\n[global]\\n\\tfsid = 8c9b0072-67ca-11eb-af06-001a4a0002a0\\n\\tmon_host = [v2:10.10.200.10:3300/0,v1:10.10.200.10:6789/0] [v2:10.10.10.100:3300/0,v1:10.10.200.100:6789/0]\\n\", \"keyring\": \"[mgr.Ceph5-2.smfvfd1]\\n\\tkey = AQDhcORgW8toCRAAlMzlqWXnh3cGRjqYEa9ikw==\\n\" } }", "exit", "cephadm --image registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest deploy --fsid 8c9b0072-67ca-11eb-af06-001a4a0002a0 --name mgr.host01.smfvfd1 --config-json config-json.json", "ceph -s" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/administration_guide/cephadm-troubleshooting
Deploying Red Hat Enterprise Linux 7 on public cloud platforms
Deploying Red Hat Enterprise Linux 7 on public cloud platforms Red Hat Enterprise Linux 7 Creating custom Red Hat Enterprise Linux images and configuring a Red Hat High Availability cluster for public cloud platforms Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_distributed_compute_node_dcn_architecture/proc_providing-feedback-on-red-hat-documentation
5.4. Stopping JBoss Data Virtualization
5.4. Stopping JBoss Data Virtualization To stop JBoss Data Virtualization , you must stop the JBoss EAP server . The way you stop JBoss EAP depends on how it was started. If you started JBoss EAP from a terminal, you can stop it by pressing CTRL+C in that terminal.
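If the server was started as a background service or on a remote host, the JBoss EAP management CLI can be used to shut it down instead. The following is only a sketch; EAP_HOME stands for your JBoss EAP installation directory.
# Connect to the running server and issue a shutdown
EAP_HOME/bin/jboss-cli.sh --connect command=:shutdown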
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/stopping_jboss_data_virtualization
4.18. RHEA-2012:0823 - new package: subscription-manager-migration-data
4.18. RHEA-2012:0823 - new package: subscription-manager-migration-data A new subscription-manager-migration-data package is now available for Red Hat Enterprise Linux 6. The new Subscription Management tooling allows users to understand the specific products that have been installed on their machines and the specific subscriptions that their machines consume. This enhancement update adds the subscription-manager-migration-data package to Red Hat Enterprise Linux 6. The package allows for migrations from Red Hat Network Classic Hosted to hosted certificate-based subscription management. (BZ# 773030 ) All users who require subscription-manager-migration-data are advised to install this new package.
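As an illustration of a typical migration, the data package is installed together with the migration tooling and the migration script is then run on the system. This is a sketch only; it assumes the system is currently registered to Red Hat Network Classic, and the script prompts for credentials and options that vary by environment.
# Install the migration tooling and data
yum install subscription-manager-migration subscription-manager-migration-data
# Migrate the system from RHN Classic to certificate-based subscription management
rhn-migrate-classic-to-rhsm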
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/subscription-manager-migration-data
Chapter 11. Uninstalling AMQ Streams
Chapter 11. Uninstalling AMQ Streams You can uninstall AMQ Streams on OpenShift 4.6 to 4.10 from the OperatorHub using the OpenShift Container Platform web console or CLI. Use the same approach you used to install AMQ Streams. When you uninstall AMQ Streams, you will need to identify resources created specifically for a deployment and referenced from the AMQ Streams resource. Such resources include: Secrets (Custom CAs and certificates, Kafka Connect secrets, and other Kafka secrets) Logging ConfigMaps (of type external ) These are resources referenced by Kafka , KafkaConnect , KafkaMirrorMaker , or KafkaBridge configuration. Warning Deleting CustomResourceDefinitions results in the garbage collection of the corresponding custom resources ( Kafka , KafkaConnect , KafkaMirrorMaker , or KafkaBridge ) and the resources dependent on them (Deployments, StatefulSets, and other dependent resources). 11.1. Uninstalling AMQ Streams from the OperatorHub using the web console This procedure describes how to uninstall AMQ Streams from the OperatorHub and remove resources related to the deployment. You can perform the steps from the console or use alternative CLI commands. Prerequisites Access to an OpenShift Container Platform web console using an account with cluster-admin or strimzi-admin permissions. You have identified the resources to be deleted. You can use the following oc CLI command to find resources and also verify that they have been removed when you have uninstalled AMQ Streams. Command to find resources related to an AMQ Streams deployment oc get <resource_type> --all-namespaces | grep <kafka_cluster_name> Replace <resource_type> with the type of the resource you are checking, such as secret or configmap . Procedure Navigate in the OpenShift web console to Operators > Installed Operators . For the installed Red Hat Integration - AMQ Streams operator, select the options icon (three vertical dots) and click Uninstall Operator . The operator is removed from Installed Operators . Navigate to Home > Projects and select the project where you installed AMQ Streams and the Kafka components. Click the options under Inventory to delete related resources. Resources include the following: Deployments StatefulSets Pods Services ConfigMaps Secrets Tip Use the search to find related resources that begin with the name of the Kafka cluster. You can also find the resources under Workloads . Alternative CLI commands You can use CLI commands to uninstall AMQ Streams from the OperatorHub. Delete the AMQ Streams subscription. oc delete subscription amq-streams -n openshift-operators Delete the cluster service version (CSV). oc delete csv amqstreams. <version> -n openshift-operators Remove related CRDs. oc get crd -l app=strimzi -o name | xargs oc delete 11.2. Uninstalling AMQ Streams using the CLI This procedure describes how to use the oc command-line tool to uninstall AMQ Streams and remove resources related to the deployment. Prerequisites You have identified the resources to be deleted. You can use the following oc CLI command to find resources and also verify that they have been removed when you have uninstalled AMQ Streams. Command to find resources related to an AMQ Streams deployment oc get <resource_type> --all-namespaces | grep <kafka_cluster_name> Replace <resource_type> with the type of the resource you are checking, such as secret or configmap . Procedure Delete the Cluster Operator Deployment , related CustomResourceDefinitions , and RBAC resources. 
Specify the installation files used to deploy the Cluster Operator. oc delete -f amq-streams- <version> /install/cluster-operator Delete the resources you identified in the prerequisites. oc delete <resource_type> <resource_name> -n <namespace> Replace <resource_type> with the type of resource you are deleting and <resource_name> with the name of the resource. Example to delete a secret oc delete secret my-cluster-clients-ca -n my-project
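As an illustration of the verification command above, the following shell sketch loops over common resource types and confirms that nothing related to the Kafka cluster remains once AMQ Streams has been uninstalled. The cluster name my-cluster and the list of resource types checked are assumptions made for the example, not part of the original procedure.
# Minimal verification sketch; adjust CLUSTER_NAME to the name of your Kafka cluster.
CLUSTER_NAME=my-cluster
for TYPE in deployment statefulset pod service configmap secret; do
  echo "Checking ${TYPE} resources ..."
  oc get "${TYPE}" --all-namespaces | grep "${CLUSTER_NAME}" || echo "  none found"
done
# The Strimzi CRDs carry the app=strimzi label; an empty result confirms they were removed.
oc get crd -l app=strimzi -o name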
[ "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "delete subscription amq-streams -n openshift-operators", "delete csv amqstreams. <version> -n openshift-operators", "get crd -l app=strimzi -o name | xargs oc delete", "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "delete -f amq-streams- <version> /install/cluster-operator", "delete <resource_type> <resource_name> -n <namespace>", "delete secret my-cluster-clients-ca -n my-project" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/deploying_and_upgrading_amq_streams_on_openshift/assembly-uninstalling-str
Chapter 5. Using build strategies
Chapter 5. Using build strategies The following sections define the primary supported build strategies, and how to use them. 5.1. Docker build OpenShift Dedicated uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 5.1.1. Replacing the Dockerfile FROM image You can replace the FROM instruction of the Dockerfile with the from parameters of the BuildConfig object. If the Dockerfile uses multi-stage builds, the image in the last FROM instruction will be replaced. Procedure To replace the FROM instruction of the Dockerfile with the from parameters of the BuildConfig object, add the following settings to the BuildConfig object: strategy: dockerStrategy: from: kind: "ImageStreamTag" name: "debian:latest" 5.1.2. Using Dockerfile path By default, docker builds use a Dockerfile located at the root of the context specified in the BuildConfig.spec.source.contextDir field. The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be a different file name than the default Dockerfile, such as MyDockerfile , or a path to a Dockerfile in a subdirectory, such as dockerfiles/app1/Dockerfile . Procedure Set the dockerfilePath field for the build to use a different path to locate your Dockerfile: strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile 5.1.3. Using docker environment variables To make environment variables available to the docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the build configuration. The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that it can be referenced later on within the Dockerfile. The variables are defined during build and stay in the output image, therefore they will be present in any container that runs that image as well. For example, defining a custom HTTP proxy to be used during build and runtime: dockerStrategy: ... env: - name: "HTTP_PROXY" value: "http://myproxy.net:5187/" You can also manage environment variables defined in the build configuration with the oc set env command. 5.1.4. Adding Docker build arguments You can set Docker build arguments using the buildArgs array. The build arguments are passed to Docker when a build is started. Tip See Understand how ARG and FROM interact in the Dockerfile reference documentation. Procedure To set Docker build arguments, add entries to the buildArgs array, which is located in the dockerStrategy definition of the BuildConfig object. For example: dockerStrategy: ... buildArgs: - name: "version" value: "latest" Note Only the name and value fields are supported. Any settings on the valueFrom field are ignored. 5.1.5. Squashing layers with docker builds Docker builds normally create a layer representing each instruction in a Dockerfile. Setting the imageOptimizationPolicy to SkipLayers merges all instructions into a single layer on top of the base image. Procedure Set the imageOptimizationPolicy to SkipLayers : strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers 5.1.6. 
Using build volumes You can mount build volumes to give running builds access to information that you do not want to persist in the output container image. Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object. Procedure In the dockerStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 1 5 Required. A unique name. 2 6 Required. The absolute path of the mount point. It must not contain .. or : and does not collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. Additional resources Build inputs Input secrets and config maps 5.2. Source-to-image build Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on. 5.2.1. Performing source-to-image incremental builds Source-to-image (S2I) can perform incremental builds, which means it reuses artifacts from previously-built images. Procedure To create an incremental build, create a build configuration with the following modification to the strategy definition: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "incremental-image:latest" 1 incremental: true 2 1 Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior. 2 This flag controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build will still succeed, but you will get a log message stating the incremental build was not successful because of a missing save-artifacts script. Additional resources See S2I Requirements for information on how to create a builder image supporting incremental builds. 5.2.2. Overriding source-to-image builder image scripts You can override the assemble , run , and save-artifacts source-to-image (S2I) scripts provided by the builder image. Procedure To override the assemble , run , and save-artifacts S2I scripts provided by the builder image, complete one of the following actions: Provide an assemble , run , or save-artifacts script in the .s2i/bin directory of your application source repository. Provide a URL of a directory containing the scripts as part of the strategy definition in the BuildConfig object.
For example: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "builder-image:latest" scripts: "http://somehost.com/scripts_directory" 1 1 The build process appends run , assemble , and save-artifacts to the path. If any or all scripts with these names exist, the build process uses these scripts in place of scripts with the same name that are provided in the image. Note Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository. 5.2.3. Source-to-image environment variables There are two ways to make environment variables available to the source build process and resulting image: environment files and BuildConfig environment values. The variables that you provide using either method will be present during the build process and in the output image. 5.2.3.1. Using source-to-image environment files Source build enables you to set environment values, one per line, inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the output image. If you provide a .s2i/environment file in your source repository, source-to-image (S2I) reads this file during the build. This allows customization of the build behavior as the assemble script may use these variables. Procedure For example, to disable assets compilation for your Rails application during the build: Add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file. In addition to builds, the specified environment variables are also available in the running application itself. For example, to cause the Rails application to start in development mode instead of production : Add RAILS_ENV=development to the .s2i/environment file. The complete list of supported environment variables is available in the using images section for each image. 5.2.3.2. Using source-to-image build configuration environment You can add environment variables to the sourceStrategy definition of the build configuration. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code. Procedure For example, to disable assets compilation for your Rails application: sourceStrategy: ... env: - name: "DISABLE_ASSET_COMPILATION" value: "true" Additional resources The build environment section provides more advanced instructions. You can also manage environment variables defined in the build configuration with the oc set env command. 5.2.4. Ignoring source-to-image source files Source-to-image (S2I) supports a .s2iignore file, which contains a list of file patterns that should be ignored. Files in the build working directory, as provided by the various input sources, that match a pattern found in the .s2iignore file will not be made available to the assemble script. 5.2.5. Creating images from source code with source-to-image Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance, the build process and S2I scripts. 5.2.5.1. 
Understanding the source-to-image build process The build process consists of the following three fundamental elements, which are combined into a final container image: Sources Source-to-image (S2I) scripts Builder image S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah. 5.2.5.2. How to write source-to-image scripts You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble / run / save-artifacts scripts. All of these locations are checked on each build in the following order: A script specified in the build configuration. A script found in the application source .s2i/bin directory. A script found at the default image URL with the io.openshift.s2i.scripts-url label. Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms: image:///path_to_scripts_dir : absolute path inside the image to a directory where the S2I scripts are located. file:///path_to_scripts_dir : relative or absolute path to a directory on the host where the S2I scripts are located. http(s)://path_to_scripts_dir : URL to a directory where the S2I scripts are located. Table 5.1. S2I scripts Script Description assemble The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well. Place the application source in the desired location. Build the application artifacts. Install the artifacts into locations appropriate for them to run. run The run script executes your application. This script is required. save-artifacts The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example: For Ruby, gems installed by Bundler. For Java, .m2 contents. These dependencies are gathered into a tar file and streamed to the standard output. usage The usage script allows you to inform the user how to properly use your image. This script is optional. test/run The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is: Build the image. Run the image to verify the usage script. Run s2i build to verify the assemble script. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality. Run the image to verify the test application is working. Note The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. Example S2I scripts The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory. assemble script: #!/bin/bash # restore build artifacts if [ "USD(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then mv /tmp/s2i/artifacts/* USDHOME/. 
fi # move the application source mv /tmp/s2i/src USDHOME/src # build application artifacts pushd USD{HOME} make all # install the artifacts make install popd run script: #!/bin/bash # run the application /opt/application/run.sh save-artifacts script: #!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd usage script: #!/bin/bash # inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF Additional resources S2I Image Creation Tutorial 5.2.6. Using build volumes You can mount build volumes to give running builds access to information that you do not want to persist in the output container image. Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object. Procedure In the sourceStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 1 5 Required. A unique name. 2 6 Required. The absolute path of the mount point. It must not contain .. or : and does not collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. Additional resources Build inputs Input secrets and config maps 5.3. Pipeline build Important The Pipeline build strategy is deprecated in OpenShift Dedicated 4. Equivalent and improved functionality is present in the OpenShift Dedicated Pipelines based on Tekton. Jenkins images on OpenShift Dedicated are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by OpenShift Dedicated in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration. 5.3.1. Understanding OpenShift Dedicated pipelines Important The Pipeline build strategy is deprecated in OpenShift Dedicated 4. Equivalent and improved functionality is present in the OpenShift Dedicated Pipelines based on Tekton. Jenkins images on OpenShift Dedicated are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. Pipelines give you control over building, deploying, and promoting your applications on OpenShift Dedicated. 
Using a combination of the Jenkins Pipeline build strategy, jenkinsfiles , and the OpenShift Dedicated Domain Specific Language (DSL) provided by the Jenkins Client Plugin, you can create advanced build, test, deploy, and promote pipelines for any scenario. OpenShift Dedicated Jenkins Sync Plugin The OpenShift Dedicated Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following: Dynamic job and run creation in Jenkins. Dynamic creation of agent pod templates from image streams, image stream tags, or config maps. Injection of environment variables. Pipeline visualization in the OpenShift Dedicated web console. Integration with the Jenkins Git plugin, which passes commit information from OpenShift Dedicated builds to the Jenkins Git plugin. Synchronization of secrets into Jenkins credential entries. OpenShift Dedicated Jenkins Client Plugin The OpenShift Dedicated Jenkins Client Plugin is a Jenkins plugin which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an OpenShift Dedicated API Server. The plugin uses the OpenShift Dedicated command line tool, oc , which must be available on the nodes executing the script. The Jenkins Client Plugin must be installed on your Jenkins master so the OpenShift Dedicated DSL will be available to use within the jenkinsfile for your application. This plugin is installed and enabled by default when using the OpenShift Dedicated Jenkins image. For OpenShift Dedicated Pipelines within your project, you must use the Jenkins Pipeline Build Strategy. This strategy defaults to using a jenkinsfile at the root of your source repository, but also provides the following configuration options: An inline jenkinsfile field within your build configuration. A jenkinsfilePath field within your build configuration that references the location of the jenkinsfile to use relative to the source contextDir . Note The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 5.3.2. Providing the Jenkins file for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Dedicated 4. Equivalent and improved functionality is present in the OpenShift Dedicated Pipelines based on Tekton. Jenkins images on OpenShift Dedicated are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The jenkinsfile uses the standard groovy language syntax to allow fine grained control over the configuration, build, and deployment of your application. You can supply the jenkinsfile in one of the following ways: A file located within your source code repository. Embedded as part of your build configuration using the jenkinsfile field. When using the first option, the jenkinsfile must be included in your application's source code repository at one of the following locations: A file named jenkinsfile at the root of your repository. A file named jenkinsfile at the root of the source contextDir of your repository. A file name specified via the jenkinsfilePath field of the JenkinsPipelineStrategy section of your BuildConfig, which is relative to the source contextDir if supplied, otherwise it defaults to the root of the repository.
The jenkinsfile is run on the Jenkins agent pod, which must have the OpenShift Dedicated client binaries available if you intend to use the OpenShift Dedicated DSL. Procedure To provide the Jenkins file, you can either: Embed the Jenkins file in the build configuration. Include in the build configuration a reference to the Git repository that contains the Jenkins file. Embedded Definition kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') } Reference to Git Repository kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: source: git: uri: "https://github.com/openshift/ruby-hello-world" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1 1 The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 5.3.3. Using environment variables for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Dedicated 4. Equivalent and improved functionality is present in the OpenShift Dedicated Pipelines based on Tekton. Jenkins images on OpenShift Dedicated are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. To make environment variables available to the Pipeline build process, you can add environment variables to the jenkinsPipelineStrategy definition of the build configuration. Once defined, the environment variables will be set as parameters for any Jenkins job associated with the build configuration. Procedure To define environment variables to be used during build, edit the YAML file: jenkinsPipelineStrategy: ... env: - name: "FOO" value: "BAR" You can also manage environment variables defined in the build configuration with the oc set env command. 5.3.3.1. Mapping between BuildConfig environment variables and Jenkins job parameters When a Jenkins job is created or updated based on changes to a Pipeline strategy build configuration, any environment variables in the build configuration are mapped to Jenkins job parameter definitions, where the default values for the Jenkins job parameter definitions are the current values of the associated environment variables. After the Jenkins job's initial creation, you can still add additional parameters to the job from the Jenkins console. The parameter names differ from the names of the environment variables in the build configuration. The parameters are honored when builds are started for those Jenkins jobs. How you start builds for the Jenkins job dictates how the parameters are set. If you start with oc start-build , the values of the environment variables in the build configuration are the parameters set for the corresponding job instance. Any changes you make to the parameters' default values from the Jenkins console are ignored. The build configuration values take precedence. If you start with oc start-build -e , the values for the environment variables specified in the -e option take precedence. If you specify an environment variable not listed in the build configuration, it will be added as a Jenkins job parameter definition.
Any changes you make from the Jenkins console to the parameters corresponding to the environment variables are ignored. The build configuration and what you specify with oc start-build -e takes precedence. If you start the Jenkins job with the Jenkins console, then you can control the setting of the parameters with the Jenkins console as part of starting a build for the job. Note It is recommended that you specify in the build configuration all possible environment variables to be associated with job parameters. Doing so reduces disk I/O and improves performance during Jenkins processing. 5.3.4. Pipeline build tutorial Important The Pipeline build strategy is deprecated in OpenShift Dedicated 4. Equivalent and improved functionality is present in the OpenShift Dedicated Pipelines based on Tekton. Jenkins images on OpenShift Dedicated are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. This example demonstrates how to create an OpenShift Dedicated Pipeline that will build, deploy, and verify a Node.js/MongoDB application using the nodejs-mongodb.json template. Procedure Create the Jenkins master: USD oc project <project_name> Select the project that you want to use or create a new project with oc new-project <project_name> . USD oc new-app jenkins-ephemeral 1 If you want to use persistent storage, use jenkins-persistent instead. Create a file named nodejs-sample-pipeline.yaml with the following content: Note This creates a BuildConfig object that employs the Jenkins pipeline strategy to build, deploy, and scale the Node.js/MongoDB example application. kind: "BuildConfig" apiVersion: "v1" metadata: name: "nodejs-sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline After you create a BuildConfig object with a jenkinsPipelineStrategy , tell the pipeline what to do by using an inline jenkinsfile : Note This example does not set up a Git repository for the application. The following jenkinsfile content is written in Groovy using the OpenShift Dedicated DSL. For this example, include inline content in the BuildConfig object using the YAML Literal Style, though including a jenkinsfile in your source repository is the preferred method. 
def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo "Using project: USD{openshift.project()}" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector("all", [ template : templateName ]).delete() 5 if (openshift.selector("secrets", templateName).exists()) { 6 openshift.selector("secrets", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector("bc", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == "Complete") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector("dc", templateName).rollout() timeout(5) { 9 openshift.selector("dc", templateName).related('pods').untilEach(1) { return (it.object().status.phase == "Running") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag("USD{templateName}:latest", "USD{templateName}-staging:latest") 10 } } } } } } } 1 Path of the template to use. 1 2 Name of the template that will be created. 3 Spin up a node.js agent pod on which to run this build. 4 Set a timeout of 20 minutes for this pipeline. 5 Delete everything with this template label. 6 Delete any secrets with this template label. 7 Create a new application from the templatePath . 8 Wait up to five minutes for the build to complete. 9 Wait up to five minutes for the deployment to complete. 10 If everything else succeeded, tag the USD {templateName}:latest image as USD {templateName}-staging:latest . A pipeline build configuration for the staging environment can watch for the USD {templateName}-staging:latest image to change and then deploy it to the staging environment. Note The example was written using the declarative pipeline style, but the older scripted pipeline style is also supported. Create the Pipeline BuildConfig in your OpenShift Dedicated cluster: USD oc create -f nodejs-sample-pipeline.yaml If you do not want to create your own file, you can use the sample from the Origin repository by running: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml Start the Pipeline: USD oc start-build nodejs-sample-pipeline Note Alternatively, you can start your pipeline with the OpenShift Dedicated web console by navigating to the Builds Pipeline section and clicking Start Pipeline , or by visiting the Jenkins Console, navigating to the Pipeline that you created, and clicking Build Now . Once the pipeline is started, you should see the following actions performed within your project: A job instance is created on the Jenkins server. An agent pod is launched, if your pipeline requires one. The pipeline runs on the agent pod, or the master if no agent is required. Any previously created resources with the template=nodejs-mongodb-example label will be deleted. 
A new application, and all of its associated resources, will be created from the nodejs-mongodb-example template. A build will be started using the nodejs-mongodb-example BuildConfig . The pipeline will wait until the build has completed to trigger the next stage. A deployment will be started using the nodejs-mongodb-example deployment configuration. The pipeline will wait until the deployment has completed to trigger the next stage. If the build and deploy are successful, the nodejs-mongodb-example:latest image will be tagged as nodejs-mongodb-example-staging:latest , as defined in the tag stage of the pipeline. The agent pod is deleted, if one was required for the pipeline. Note The best way to visualize the pipeline execution is by viewing it in the OpenShift Dedicated web console. You can view your pipelines by logging in to the web console and navigating to Builds Pipelines. 5.4. Adding secrets with web console You can add a secret to your build configuration so that it can access a private repository. Procedure To add a secret to your build configuration so that it can access a private repository from the OpenShift Dedicated web console: Create a new OpenShift Dedicated project. Create a secret that contains credentials for accessing a private source code repository. Create a build configuration. On the build configuration editor page or in the create app from builder image page of the web console, set the Source Secret . Click Save . 5.5. Enabling pulling and pushing You can enable pulling to a private registry by setting the pull secret and pushing by setting the push secret in the build configuration. Procedure To enable pulling to a private registry: Set the pull secret in the build configuration. To enable pushing: Set the push secret in the build configuration.
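The oc set env and oc start-build -e commands mentioned in this chapter can be combined into a short command-line workflow for managing build environment variables. The following sketch assumes a BuildConfig named my-app already exists; the variable names and values are illustrative, not prescribed by the chapter.
# Persist a variable in the build configuration's strategy definition.
oc set env bc/my-app HTTP_PROXY=http://myproxy.net:5187/
# List the environment variables currently defined on the build configuration.
oc set env bc/my-app --list
# Override or add a variable for a single build only.
oc start-build my-app -e DISABLE_ASSET_COMPILATION=true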
[ "strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: \"debian:latest\"", "strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile", "dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "dockerStrategy: buildArgs: - name: \"version\" value: \"latest\"", "strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers", "spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1", "sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1", "jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"", "oc project <project_name>", "oc new-app jenkins-ephemeral 1", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline", "def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { 
openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }", "oc create -f nodejs-sample-pipeline.yaml", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml", "oc start-build nodejs-sample-pipeline" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/builds_using_buildconfig/build-strategies
Chapter 2. Acknowledgments
Chapter 2. Acknowledgments Red Hat Ceph Storage version 7.1 contains many contributions from the Red Hat Ceph Storage team. In addition, the Ceph project is seeing amazing growth in the quality and quantity of contributions from individuals and organizations in the Ceph community. We would like to thank all members of the Red Hat Ceph Storage team, all of the individual contributors in the Ceph community, and additionally, but not limited to, the contributions from organizations such as: Intel(R) Fujitsu (R) UnitedStack Yahoo TM Ubuntu Kylin Mellanox (R) CERN TM Deutsche Telekom Mirantis (R) SanDisk TM SUSE (R)
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/7.1_release_notes/acknowledgments
Chapter 17. Random functions Tapset
Chapter 17. Random functions Tapset These functions deal with random number generation.
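As a hedged illustration, the following one-line SystemTap invocation calls a random function from this tapset. It assumes SystemTap is installed and that randint() is among the functions the tapset provides; the probe point and output format are only an example.
# Print one random integer between 0 and 99, then exit.
stap -e 'probe begin { printf("random value: %d\n", randint(100)); exit() }'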
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/random.stp
Chapter 6. aggregate
Chapter 6. aggregate This chapter describes the commands under the aggregate command. 6.1. aggregate add host Add host to aggregate Usage: Table 6.1. Positional arguments Value Summary <aggregate> Aggregate (name or id) <host> Host to add to <aggregate> Table 6.2. Command arguments Value Summary -h, --help Show this help message and exit Table 6.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 6.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 6.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.2. aggregate cache image Request image caching for aggregate Usage: Table 6.7. Positional arguments Value Summary <aggregate> Aggregate (name or id) <image> Image id to request caching for aggregate (name or id). may be specified multiple times. Table 6.8. Command arguments Value Summary -h, --help Show this help message and exit 6.3. aggregate create Create a new aggregate Usage: Table 6.9. Positional arguments Value Summary <name> New aggregate name Table 6.10. Command arguments Value Summary -h, --help Show this help message and exit --zone <availability-zone> Availability zone name --property <key=value> Property to add to this aggregate (repeat option to set multiple properties) Table 6.11. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 6.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.13. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 6.14. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.4. aggregate delete Delete existing aggregate(s) Usage: Table 6.15. Positional arguments Value Summary <aggregate> Aggregate(s) to delete (name or id) Table 6.16. Command arguments Value Summary -h, --help Show this help message and exit 6.5. aggregate list List all aggregates Usage: Table 6.17. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 6.18. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 6.19. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 6.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.6. aggregate remove host Remove host from aggregate Usage: Table 6.22. Positional arguments Value Summary <aggregate> Aggregate (name or id) <host> Host to remove from <aggregate> Table 6.23. Command arguments Value Summary -h, --help Show this help message and exit Table 6.24. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 6.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.26. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 6.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.7. aggregate set Set aggregate properties Usage: Table 6.28. Positional arguments Value Summary <aggregate> Aggregate to modify (name or id) Table 6.29. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set aggregate name --zone <availability-zone> Set availability zone name --property <key=value> Property to set on <aggregate> (repeat option to set multiple properties) --no-property Remove all properties from <aggregate> (specify both --property and --no-property to overwrite the current properties) 6.8. aggregate show Display aggregate details Usage: Table 6.30. Positional arguments Value Summary <aggregate> Aggregate to display (name or id) Table 6.31. Command arguments Value Summary -h, --help Show this help message and exit Table 6.32. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 6.33. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 6.34. 
Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 6.35. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 6.9. aggregate unset Unset aggregate properties Usage: Table 6.36. Positional arguments Value Summary <aggregate> Aggregate to modify (name or id) Table 6.37. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Property to remove from aggregate (repeat option to remove multiple properties)
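The subcommands above can be combined into a typical host-aggregate workflow. In the following sketch the aggregate name, availability zone, property, and compute host name are illustrative assumptions.
# Create an aggregate in an availability zone, attach a host, inspect it, and clean up.
openstack aggregate create --zone az1 --property pinned=true host-agg-1
openstack aggregate add host host-agg-1 compute-0.example.com
openstack aggregate show host-agg-1
openstack aggregate set --property pinned=false host-agg-1
openstack aggregate remove host host-agg-1 compute-0.example.com
openstack aggregate delete host-agg-1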
[ "openstack aggregate add host [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <aggregate> <host>", "openstack aggregate cache image [-h] <aggregate> <image> [<image> ...]", "openstack aggregate create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--zone <availability-zone>] [--property <key=value>] <name>", "openstack aggregate delete [-h] <aggregate> [<aggregate> ...]", "openstack aggregate list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]", "openstack aggregate remove host [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <aggregate> <host>", "openstack aggregate set [-h] [--name <name>] [--zone <availability-zone>] [--property <key=value>] [--no-property] <aggregate>", "openstack aggregate show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <aggregate>", "openstack aggregate unset [-h] [--property <key>] <aggregate>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/aggregate
6.4. Bridged Networking
6.4. Bridged Networking Bridged networking (also known as network bridging or virtual network switching) is used to place virtual machine network interfaces on the same network as the physical interface. Bridges require minimal configuration and make a virtual machine appear on an existing network, which reduces management overhead and network complexity. As bridges contain few components and configuration variables, they provide a transparent setup which is straightforward to understand and troubleshoot, if required. Bridging can be configured in a virtualized environment using standard Red Hat Enterprise Linux tools, virt-manager , or libvirt , and is described in the following sections. However, even in a virtualized environment, bridges may be more easily created using the host operating system's networking tools. More information about this bridge creation method can be found in the Red Hat Enterprise Linux 7 Networking Guide . 6.4.1. Configuring Bridged Networking on a Red Hat Enterprise Linux 7 Host Bridged networking can be configured for virtual machines on a Red Hat Enterprise Linux host, independent of the virtualization management tools. This configuration is mainly recommended when the virtualization bridge is the host's only network interface, or is the host's management network interface. For instructions on configuring network bridging without using virtualization tools, see the Red Hat Enterprise Linux 7 Networking Guide . 6.4.2. Bridged Networking with Virtual Machine Manager This section provides instructions on creating a bridge from a host machine's interface to a guest virtual machine using virt-manager . Note Depending on your environment, setting up a bridge with libvirt tools in Red Hat Enterprise Linux 7 may require disabling Network Manager, which is not recommended by Red Hat. A bridge created with libvirt also requires libvirtd to be running for the bridge to maintain network connectivity. It is recommended to configure bridged networking on the physical Red Hat Enterprise Linux host as described in the Red Hat Enterprise Linux 7 Networking Guide , while using libvirt after bridge creation to add virtual machine interfaces to the bridges. Procedure 6.1. Creating a bridge with virt-manager From the virt-manager main menu, click Edit ⇒ Connection Details to open the Connection Details window. Click the Network Interfaces tab. Click the + at the bottom of the window to configure a new network interface. In the Interface type drop-down menu, select Bridge , and then click Forward to continue. Figure 6.1. Adding a bridge In the Name field, enter a name for the bridge, such as br0 . Select a Start mode from the drop-down menu. Choose from one of the following: none - deactivates the bridge onboot - activates the bridge on the guest virtual machine reboot hotplug - activates the bridge even if the guest virtual machine is running Check the Activate now check box to activate the bridge immediately. To configure either the IP settings or Bridge settings , click the appropriate Configure button. A separate window will open to specify the required settings. Make any necessary changes and click OK when done. Select the physical interface to connect to your virtual machines. If the interface is currently in use by another guest virtual machine, you will receive a warning message. Click Finish and the wizard closes, taking you back to the Connections menu. Figure 6.2. Adding a bridge Select the bridge to use, and click Apply to exit the wizard. 
To stop the interface, click the Stop Interface key. Once the bridge is stopped, to delete the interface, click the Delete Interface key. 6.4.3. Bridged Networking with libvirt Depending on your environment, setting up a bridge with libvirt in Red Hat Enterprise Linux 7 may require disabling Network Manager, which is not recommended by Red Hat. This also requires libvirtd to be running for the bridge to operate. It is recommended to configure bridged networking on the physical Red Hat Enterprise Linux host as described in the Red Hat Enterprise Linux 7 Networking Guide . Important libvirt is now able to take advantage of new kernel tunable parameters to manage host bridge forwarding database (FDB) entries, thus potentially improving system network performance when bridging multiple virtual machines. Set the macTableManager attribute of a network's <bridge> element to 'libvirt' in the host's XML configuration file: This will turn off learning (flood) mode on all bridge ports, and libvirt will add or remove entries to the FDB as necessary. Along with removing the overhead of learning the proper forwarding ports for MAC addresses, this also allows the kernel to disable promiscuous mode on the physical device that connects the bridge to the network, which further reduces overhead.
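To illustrate the macTableManager attribute described above, the following sketch defines a small libvirt-managed network whose bridge uses it. The network name, bridge device name, and addressing are assumptions for the example; for a bridge attached to a physical interface, the host networking tools described in the Red Hat Enterprise Linux 7 Networking Guide remain the recommended approach.
# Define, start, and autostart an example libvirt network that lets libvirt manage the FDB entries.
cat > example-net.xml <<'EOF'
<network>
  <name>example-net</name>
  <forward mode='nat'/>
  <bridge name='virbr10' macTableManager='libvirt'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.2' end='192.168.150.254'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define example-net.xml
virsh net-start example-net
virsh net-autostart example-net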
[ "<bridge name='br0' macTableManager='libvirt'/>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-network_configuration-bridged_networking
Chapter 5. Upgrading CodeReady Workspaces
Chapter 5. Upgrading CodeReady Workspaces This chapter describes how to upgrade a CodeReady Workspaces instance to CodeReady Workspaces 2.1. 5.1. Upgrading CodeReady Workspaces using OperatorHub This section describes how to upgrade from CodeReady Workspaces 2.0 to CodeReady Workspaces 2.1 on OpenShift 4 using the OpenShift web console. This method uses the Operator from OperatorHub. Prerequisites An administrator account on an OpenShift 4 instance. An instance of CodeReady Workspaces 2.0, running on the same instance of OpenShift 4, installed using an Operator from OperatorHub. Procedure Open the OpenShift web console. Navigate to the Operators Installed Operators section. Click Red Hat CodeReady Workspaces in the list of installed operators. Navigate to the Subscription tab and enable the following options: Channel : latest Approval : Automatic Verification steps Log in to the CodeReady Workspaces instance. The 2.1 version number is visible at the bottom of the page. 5.2. Upgrading CodeReady Workspaces using CLI management tool on OpenShift 3 This section describes how to upgrade from CodeReady Workspaces 2.0 to CodeReady Workspaces 2.1 on OpenShift 3 using the CLI management tool. Prerequisites An administrative account on an OpenShift 3 instance. An instance of Red Hat CodeReady Workspaces running on OpenShift 3, installed using the CLI management tool. The crwctl management tool installed. Procedure In all running workspaces in the CodeReady Workspaces 2.0 instance, save and push changes to Git repositories. Run the following command: Verification steps Log in to the CodeReady Workspaces instance. The 2.1 version number is visible at the bottom of the page. 5.3. Upgrading CodeReady Workspaces from a previous major version This section describes how to perform an upgrade from the previous major version of Red Hat CodeReady Workspaces (1.2). Procedure See Upgrading CodeReady Workspaces section in CodeReady Workspaces 2.0 Installation Guide
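For context, the following sketch shows the update command from the procedure above run after logging in to the OpenShift 3 cluster with oc. The cluster URL and user name are illustrative assumptions.
# Log in to the cluster that hosts the CodeReady Workspaces 2.0 instance, then run the update.
oc login https://openshift.example.com:8443 -u admin
crwctl server:update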
[ "crwctl server:update" ]
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/installation_guide/upgrading-codeready-workspaces_crw
Chapter 10. DeploymentConfigRollback [apps.openshift.io/v1]
Chapter 10. DeploymentConfigRollback [apps.openshift.io/v1] Description DeploymentConfigRollback provides the input to rollback generation. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required name spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the deployment config that will be rolled back. spec object DeploymentConfigRollbackSpec represents the options for rollback generation. updatedAnnotations object (string) UpdatedAnnotations is a set of new annotations that will be added in the deployment config. 10.1.1. .spec Description DeploymentConfigRollbackSpec represents the options for rollback generation. Type object Required from includeTriggers includeTemplate includeReplicationMeta includeStrategy Property Type Description from ObjectReference From points to a ReplicationController which is a deployment. includeReplicationMeta boolean IncludeReplicationMeta specifies whether to include the replica count and selector. includeStrategy boolean IncludeStrategy specifies whether to include the deployment Strategy. includeTemplate boolean IncludeTemplate specifies whether to include the PodTemplateSpec. includeTriggers boolean IncludeTriggers specifies whether to include config Triggers. revision integer Revision to rollback to. If set to 0, rollback to the last revision. 10.2. API endpoints The following API endpoints are available: /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/rollback POST : create rollback of a DeploymentConfig 10.2.1. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/rollback Table 10.1. Global path parameters Parameter Type Description name string name of the DeploymentConfigRollback namespace string object name and auth scope, such as for teams and projects Table 10.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create rollback of a DeploymentConfig Table 10.3. Body parameters Parameter Type Description body DeploymentConfigRollback schema Table 10.4. HTTP responses HTTP code Response body 200 - OK DeploymentConfigRollback schema 201 - Created DeploymentConfigRollback schema 202 - Accepted DeploymentConfigRollback schema 401 - Unauthorized Empty
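The following is a minimal sketch of exercising this endpoint directly with curl. The API server URL, the namespace (myproject), the deployment config name (frontend), and the target ReplicationController (frontend-2) are illustrative assumptions, not values taken from this document; in practice the oc rollback command builds an equivalent request for you.

# Sketch only: POST a DeploymentConfigRollback to generate a rolled-back configuration.
# API_SERVER, the namespace, and the resource names below are placeholder assumptions.
TOKEN=$(oc whoami -t)
API_SERVER=https://api.example.com:6443
curl -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  "${API_SERVER}/apis/apps.openshift.io/v1/namespaces/myproject/deploymentconfigs/frontend/rollback" \
  -d '{
    "kind": "DeploymentConfigRollback",
    "apiVersion": "apps.openshift.io/v1",
    "name": "frontend",
    "spec": {
      "from": {"kind": "ReplicationController", "name": "frontend-2"},
      "includeTriggers": false,
      "includeTemplate": true,
      "includeReplicationMeta": false,
      "includeStrategy": false,
      "revision": 2
    }
  }'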
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/workloads_apis/deploymentconfigrollback-apps-openshift-io-v1
Chapter 3. Setting compute resource quota for OpenShift Pipelines
Chapter 3. Setting compute resource quota for OpenShift Pipelines A ResourceQuota object in Red Hat OpenShift Pipelines controls the total resource consumption per namespace. You can use it to limit the quantity of objects created in a namespace, based on the type of the object. In addition, you can specify a compute resource quota to restrict the total amount of compute resources consumed in a namespace. However, you might want to limit the amount of compute resources consumed by pods resulting from a pipeline run, rather than setting quotas for the entire namespace. Currently, Red Hat OpenShift Pipelines does not enable you to directly specify the compute resource quota for a pipeline. 3.1. Alternative approaches for limiting compute resource consumption in OpenShift Pipelines To attain some degree of control over the usage of compute resources by a pipeline, consider the following alternative approaches: Set resource requests and limits for each step in a task. Example: Set resource requests and limits for each step in a task. ... spec: steps: - name: step-with-limts computeResources: requests: memory: 1Gi cpu: 500m limits: memory: 2Gi cpu: 800m ... Set resource limits by specifying values for the LimitRange object. For more information on LimitRange , refer to Restrict resource consumption with limit ranges . Reduce pipeline resource consumption . Set and manage resource quotas per project . Ideally, the compute resource quota for a pipeline should be the same as the total amount of compute resources consumed by the concurrently running pods in a pipeline run. However, the pods running the tasks consume compute resources based on the use case. For example, a Maven build task might require different compute resources for different applications that it builds. As a result, you cannot predetermine the compute resource quotas for tasks in a generic pipeline. For greater predictability and control over usage of compute resources, use customized pipelines for different applications. Note When using Red Hat OpenShift Pipelines in a namespace configured with a ResourceQuota object, the pods resulting from task runs and pipeline runs might fail with an error, such as: failed quota: <quota name> must specify cpu, memory . To avoid this error, do any one of the following: (Recommended) Specify a limit range for the namespace. Explicitly define requests and limits for all containers. For more information, refer to the issue and the resolution . If your use case is not addressed by these approaches, you can implement a workaround by using a resource quota for a priority class. 3.2. Specifying pipelines resource quota using priority class A PriorityClass object maps priority class names to the integer values that indicate their relative priorities. Higher values increase the priority of a class. After you create a priority class, you can create pods that specify the priority class name in their specifications. In addition, you can control a pod's consumption of system resources based on the pod's priority. Specifying resource quota for a pipeline is similar to setting a resource quota for the subset of pods created by a pipeline run. The following steps provide an example of the workaround by specifying resource quota based on priority class. Procedure Create a priority class for a pipeline. 
Example: Priority class for a pipeline apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: pipeline1-pc value: 1000000 description: "Priority class for pipeline1" Create a resource quota for a pipeline. Example: Resource quota for a pipeline apiVersion: v1 kind: ResourceQuota metadata: name: pipeline1-rq spec: hard: cpu: "1000" memory: 200Gi pods: "10" scopeSelector: matchExpressions: - operator : In scopeName: PriorityClass values: ["pipeline1-pc"] Verify the resource quota usage for the pipeline. Example: Verify resource quota usage for the pipeline USD oc describe quota Sample output Because pods are not running, the quota is unused. Create the pipelines and tasks. Example: YAML for the pipeline apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: maven-build spec: params: - name: GIT_URL workspaces: - name: local-maven-repo - name: source tasks: - name: git-clone taskRef: resolver: cluster params: - name: kind value: task - name: name value: git-clone - name: namespace value: openshift-pipelines workspaces: - name: output workspace: source params: - name: URL value: USD(params.GIT_URL) - name: build taskRef: name: mvn runAfter: ["git-clone"] params: - name: GOALS value: ["package"] workspaces: - name: maven-repo workspace: local-maven-repo - name: source workspace: source - name: int-test taskRef: name: mvn runAfter: ["build"] params: - name: GOALS value: ["verify"] workspaces: - name: maven-repo workspace: local-maven-repo - name: source workspace: source - name: gen-report taskRef: name: mvn runAfter: ["build"] params: - name: GOALS value: ["site"] workspaces: - name: maven-repo workspace: local-maven-repo - name: source workspace: source Example: YAML for a task in the pipeline apiVersion: tekton.dev/v1 kind: Task metadata: name: mvn spec: workspaces: - name: maven-repo - name: source params: - name: GOALS description: The Maven goals to run type: array default: ["package"] steps: - name: mvn image: gcr.io/cloud-builders/mvn workingDir: USD(workspaces.source.path) command: ["/usr/bin/mvn"] args: - -Dmaven.repo.local=USD(workspaces.maven-repo.path) - "USD(params.GOALS)" Create and start the pipeline run. Example: YAML for a pipeline run apiVersion: tekton.dev/v1 kind: PipelineRun metadata: generateName: petclinic-run- spec: pipelineRef: name: maven-build params: - name: GIT_URL value: https://github.com/spring-projects/spring-petclinic taskRunTemplate: podTemplate: priorityClassName: pipeline1-pc workspaces: - name: local-maven-repo emptyDir: {} - name: source volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 200M Note The pipeline run might fail with an error: failed quota: <quota name> must specify cpu, memory . To avoid this error, set a limit range for the namespace, where the defaults from the LimitRange object apply to pods created during the build process. For more information about setting limit ranges, refer to Restrict resource consumption with limit ranges in the Additional resources section. Note Since OpenShift Pipelines 1.17, the priority class that you set for a task applies to all pods created for the task, including the affinity assistant pod that OpenShift Pipelines creates in order to ensure that the task executes on a particular node. After the pods are created, verify the resource quota usage for the pipeline run. 
Example: Verify resource quota usage for the pipeline USD oc describe quota Sample output The output indicates that you can manage the combined resource quota for all concurrently running pods belonging to a priority class by specifying the resource quota per priority class. 3.3. Additional resources Restrict resource consumption with limit ranges Resource quotas in Kubernetes Limit ranges in Kubernetes Resource requests and limits in Kubernetes
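As a hedged illustration of the recommended limit range approach mentioned above, the following sketch creates a LimitRange that supplies default requests and limits for containers in the namespace, so that pods created by task runs and pipeline runs satisfy a ResourceQuota that requires cpu and memory. The namespace, object name, and sizes are assumptions chosen for illustration, not values mandated by OpenShift Pipelines.

# Sketch: namespace-level container defaults; the namespace name, object name,
# and resource sizes below are illustrative assumptions.
oc apply -n pipelines-demo -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: pipeline-limit-range
spec:
  limits:
    - type: Container
      default:            # applied as limits when a container sets none
        cpu: 500m
        memory: 512Mi
      defaultRequest:     # applied as requests when a container sets none
        cpu: 250m
        memory: 256Mi
EOF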
[ "spec: steps: - name: step-with-limts computeResources: requests: memory: 1Gi cpu: 500m limits: memory: 2Gi cpu: 800m", "apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: pipeline1-pc value: 1000000 description: \"Priority class for pipeline1\"", "apiVersion: v1 kind: ResourceQuota metadata: name: pipeline1-rq spec: hard: cpu: \"1000\" memory: 200Gi pods: \"10\" scopeSelector: matchExpressions: - operator : In scopeName: PriorityClass values: [\"pipeline1-pc\"]", "oc describe quota", "Name: pipeline1-rq Namespace: default Resource Used Hard -------- ---- ---- cpu 0 1k memory 0 200Gi pods 0 10", "apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: maven-build spec: params: - name: GIT_URL workspaces: - name: local-maven-repo - name: source tasks: - name: git-clone taskRef: resolver: cluster params: - name: kind value: task - name: name value: git-clone - name: namespace value: openshift-pipelines workspaces: - name: output workspace: source params: - name: URL value: USD(params.GIT_URL) - name: build taskRef: name: mvn runAfter: [\"git-clone\"] params: - name: GOALS value: [\"package\"] workspaces: - name: maven-repo workspace: local-maven-repo - name: source workspace: source - name: int-test taskRef: name: mvn runAfter: [\"build\"] params: - name: GOALS value: [\"verify\"] workspaces: - name: maven-repo workspace: local-maven-repo - name: source workspace: source - name: gen-report taskRef: name: mvn runAfter: [\"build\"] params: - name: GOALS value: [\"site\"] workspaces: - name: maven-repo workspace: local-maven-repo - name: source workspace: source", "apiVersion: tekton.dev/v1 kind: Task metadata: name: mvn spec: workspaces: - name: maven-repo - name: source params: - name: GOALS description: The Maven goals to run type: array default: [\"package\"] steps: - name: mvn image: gcr.io/cloud-builders/mvn workingDir: USD(workspaces.source.path) command: [\"/usr/bin/mvn\"] args: - -Dmaven.repo.local=USD(workspaces.maven-repo.path) - \"USD(params.GOALS)\"", "apiVersion: tekton.dev/v1 kind: PipelineRun metadata: generateName: petclinic-run- spec: pipelineRef: name: maven-build params: - name: GIT_URL value: https://github.com/spring-projects/spring-petclinic taskRunTemplate: podTemplate: priorityClassName: pipeline1-pc workspaces: - name: local-maven-repo emptyDir: {} - name: source volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 200M", "oc describe quota", "Name: pipeline1-rq Namespace: default Resource Used Hard -------- ---- ---- cpu 500m 1k memory 10Gi 200Gi pods 1 10" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/managing_performance_and_resource_use/setting-compute-resource-quota-for-openshift-pipelines
8.41. cyrus-sasl
8.41. cyrus-sasl 8.41.1. RHBA-2014:1570 - cyrus-sasl bug fix and enhancement update Updated cyrus-sasl packages that fix two bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The cyrus-sasl packages contain the Cyrus implementation of Simple Authentication and Security Layer (SASL). SASL is a method for adding authentication support to connection-based protocols. Bug Fixes BZ# 838628 A memory leak in the Digest MD5 plugin was discovered. Specifically, the make_client_response() function did not correctly free the output buffer. Consequently, applications that used Digest MD5 with very large datasets could terminate unexpectedly. This update corrects make_client_response() and closes the memory leak. As a result, applications using Digest MD5 as part of authentication with large datasets now work as expected. BZ# 1081445 Previously, unnecessary quote characters were used in the cyrus-sasl.spec file when the user was created using the useradd command. Consequently, the Saslauth user was created with quotes in the comment field ("Saslauthd user"). With this update, unnecessary quotes have been removed from the comment field. In addition, this update adds the following Enhancement BZ# 994242 The ad_compat option has been backported to the cyrus-sasl packages from upstream. This option controls compatibility with AD or similar servers that require both integrity and confidentiality bits selected during security layer negotiation. Users of cyrus-sasl are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/cyrus-sasl
Chapter 3. Hybrid Cloud Console help options
Chapter 3. Hybrid Cloud Console help options There are several ways to get help with Red Hat services from within the Hybrid Cloud Console. You can open a support case, check the status of Red Hat web sites, and call or chat with a Red Hat support engineer. 3.1. Opening a support case You can open a Red Hat support case from the Red Hat Hybrid Cloud Console for help with the Red Hat services that are associated with your Hybrid Cloud Console account login. Procedure In the Hybrid Cloud Console , select Help ( ? icon) > Open a support case . The Customer Support page on the Red Hat Customer Portal opens. The Account and Owner fields are automatically populated with your account information. Under Select the option that best fits your reason for creating a case , select an appropriate category for your issue, and then click Continue . Select the product and version, and then click Continue . Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that you are reporting. If the suggested articles do not address the issue, click Continue . Note Articles might not be available for your issue. Enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations. Complete the following questions where prompted: Where are you experiencing the behavior? What environment? When does the behavior occur? Frequency? Repeatedly? At certain times? What information can you provide around time-frames and the business impact? Optional: Upload log or other diagnostic files to attach to your request in the File uploader and analyzer box. These types of files help support engineers resolve your issues quickly. The prompts under this box change depending on the product that you selected. Click Continue . Enter information in all of the prompts marked with an asterisk, and then click Continue . The summary page opens. Review the information that you entered, and then click Submit . A confirmation screen displays a case number for your issue and the message We've added your case to our queue . Verification Click Cases to view a list of all support cases. Find your case in the list. Click the case ID number to view your case. 3.2. Checking for Red Hat service outages You can access the Red Hat Status page from the Red Hat Hybrid Cloud Console Help menu ( ? icon) to find information about the current and past status of console services and other Red Hat web sites. Procedure In the Hybrid Cloud Console , click Help ( ? icon) > Status page . The Red Hat Status page opens. On the Red Hat Status page, click https://console.redhat.com to expand the list of services and check for any outages. Optional: Perform any of the following tasks: Under Site Status , view the current status of Red Hat web sites. Click Subscribe to Updates and enter your email address to receive email notifications whenever Red Hat creates, updates, or resolves an incident. Under Scheduled Maintenance , review upcoming scheduled maintenance. Under Past Incidents , review recent past outages and other incidents. Click Incident History to view incidents from the previous three months. 3.3. Red Hat support options The Red Hat support options page, available from the Red Hat Hybrid Cloud Console Help menu ( ? icon), provides the following options: View or open a support case. This option is also available from the Open a support case menu item under the Help menu. Live chat with a support engineer. Call a Red Hat support engineer. 
Access support information for your Red Hat products. Review information about support policies and programs. 3.4. Using the Hybrid Cloud Console virtual assistant The Hybrid Cloud Console virtual assistant can help with tasks such as changing your personal information, requesting access from your administrator, displaying critical vulnerabilities, and providing recommendations from Advisor. While it is not possible to retrieve the identity of your Organization Administrator from the Hybrid Cloud Console, you can use the virtual assistant to send a message to your Organization Administrator to request access to Hybrid Cloud Console services. Prerequisites You are logged in to the Hybrid Cloud Console. Procedure Click the virtual assistant icon in the lower right of the Hybrid Cloud Console screen. The virtual assistant screen expands. In the Type a message box, type your query and then press the Enter key.
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/getting_started_with_the_red_hat_hybrid_cloud_console/hcc-help-options_getting-started
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/providing-feedback
Chapter 4. Installing Hosts for Red Hat Virtualization
Chapter 4. Installing Hosts for Red Hat Virtualization Red Hat Virtualization supports two types of hosts: Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts . Depending on your environment, you may want to use one type only, or both. At least two hosts are required for features such as migration and high availability. See Section 4.3, "Recommended Practices for Configuring Host Networks" for networking information. Important SELinux is in enforcing mode upon installation. To verify, run getenforce . SELinux must be in enforcing mode on all hosts and Managers for your Red Hat Virtualization environment to be supported. Table 4.1. Host Types Host Type Other Names Description Red Hat Virtualization Host RHVH, thin host This is a minimal operating system based on Red Hat Enterprise Linux. It is distributed as an ISO file from the Customer Portal and contains only the packages required for the machine to act as a host. Red Hat Enterprise Linux host RHEL host, thick host Red Hat Enterprise Linux systems with the appropriate subscriptions attached can be used as hosts. Host Compatibility When you create a new data center, you can set the compatibility version. Select the compatibility version that suits all the hosts in the data center. Once set, version regression is not allowed. For a fresh Red Hat Virtualization installation, the latest compatibility version is set in the default data center and default cluster; to use an earlier compatibility version, you must create additional data centers and clusters. For more information about compatibility versions see Red Hat Virtualization Manager Compatibility in Red Hat Virtualization Life Cycle . 4.1. Red Hat Virtualization Hosts 4.1.1. Installing Red Hat Virtualization Hosts Red Hat Virtualization Host (RHVH) is a minimal operating system based on Red Hat Enterprise Linux that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in a Red Hat Virtualization environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See http://cockpit-project.org/running.html for the minimum browser requirements. RHVH supports NIST 800-53 partitioning requirements to improve security. RHVH uses a NIST 800-53 partition layout by default. The host must meet the minimum host requirements . Procedure Download the RHVH ISO image from the Customer Portal: Log in to the Customer Portal at https://access.redhat.com . Click Downloads in the menu bar. Click Red Hat Virtualization . Scroll up and click Download Latest to access the product download page. Go to Hypervisor Image for RHV 4.3 and click Download Now . Create a bootable media device. See Making Media in the Red Hat Enterprise Linux Installation Guide for more information. Start the machine on which you are installing RHVH, booting from the prepared installation media. From the boot menu, select Install RHVH 4.3 and press Enter . Note You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu. Select a language, and click Continue . Select a time zone from the Date & Time screen and click Done . Select a keyboard layout from the Keyboard screen and click Done . 
Select the device on which to install RHVH from the Installation Destination screen. Optionally, enable encryption. Click Done . Important Red Hat strongly recommends using the Automatically configure partitioning option. Select a network from the Network & Host Name screen and click Configure... to configure the connection details. Note To use the connection every time the system boots, select the Automatically connect to this network when it is available check box. For more information, see Edit Network Connections in the Red Hat Enterprise Linux 7 Installation Guide . Enter a host name in the Host name field, and click Done . Optionally configure Language Support , Security Policy , and Kdump . See Installing Using Anaconda in the Red Hat Enterprise Linux 7 Installation Guide for more information on each of the sections in the Installation Summary screen. Click Begin Installation . Set a root password and, optionally, create an additional user while RHVH installs. Warning Red Hat strongly recommends not creating untrusted users on RHVH, as this can lead to exploitation of local security vulnerabilities. Click Reboot to complete the installation. Note When RHVH restarts, nodectl check performs a health check on the host and displays the result when you log in on the command line. The message node status: OK or node status: DEGRADED indicates the health status. Run nodectl check to get more information. The service is enabled by default. 4.1.2. Enabling the Red Hat Virtualization Host Repository Register the system to receive updates. Red Hat Virtualization Host only requires one repository. This section provides instructions for registering RHVH with the Content Delivery Network , or with Red Hat Satellite 6 . Registering RHVH with the Content Delivery Network Log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . Navigate to Subscriptions , click Register System , and enter your Customer Portal user name and password. The Red Hat Virtualization Host subscription is automatically attached to the system. Click Terminal . Enable the Red Hat Virtualization Host 7 repository to allow later updates to the Red Hat Virtualization Host: Registering RHVH with Red Hat Satellite 6 Log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . Click Terminal . Register RHVH with Red Hat Satellite 6: 4.1.3. Advanced Installation 4.1.3.1. Custom Partitioning Custom partitioning on Red Hat Virtualization Host (RHVH) is not recommended. Red Hat strongly recommends using the Automatically configure partitioning option in the Installation Destination window. If your installation requires custom partitioning, select the I will configure partitioning option during the installation, and note that the following restrictions apply: Ensure the default LVM Thin Provisioning option is selected in the Manual Partitioning window. The following directories are required and must be on thin provisioned logical volumes: root ( / ) /home /tmp /var /var/crash /var/log /var/log/audit Important Do not create a separate partition for /usr . Doing so will cause the installation to fail. /usr must be on a logical volume that is able to change versions along with RHVH, and therefore should be left on root ( / ). For information about the required storage sizes for each partition, see Section 2.2.3, "Storage Requirements" . The /boot directory should be defined as a standard partition. The /var directory must be on a separate volume or disk. Only XFS or Ext4 file systems are supported. 
Configuring Manual Partitioning in a Kickstart File The following example demonstrates how to configure manual partitioning in a Kickstart file. Note If you use logvol --thinpool --grow , you must also include volgroup --reserved-space or volgroup --reserved-percent to reserve space in the volume group for the thin pool to grow. 4.1.3.2. Automating Red Hat Virtualization Host Deployment You can install Red Hat Virtualization Host (RHVH) without a physical media device by booting from a PXE server over the network with a Kickstart file that contains the answers to the installation questions. General instructions for installing from a PXE server with a Kickstart file are available in the Red Hat Enterprise Linux Installation Guide , as RHVH is installed in much the same way as Red Hat Enterprise Linux. RHVH-specific instructions, with examples for deploying RHVH with Red Hat Satellite, are described below. The automated RHVH deployment has 3 stages: Section 4.1.3.2.1, "Preparing the Installation Environment" Section 4.1.3.2.2, "Configuring the PXE Server and the Boot Loader" Section 4.1.3.2.3, "Creating and Running a Kickstart File" 4.1.3.2.1. Preparing the Installation Environment Log in to the Customer Portal . Click Downloads in the menu bar. Click Red Hat Virtualization . Scroll up and click Download Latest to access the product download page. Go to Hypervisor Image for RHV 4.3 and click Download Now . Make the RHVH ISO image available over the network. See Installation Source on a Network in the Red Hat Enterprise Linux Installation Guide . Extract the squashfs.img hypervisor image file from the RHVH ISO: Note This squashfs.img file, located in the /tmp/usr/share/redhat-virtualization-host/image/ directory, is called redhat-virtualization-host- version_number _version.squashfs.img . It contains the hypervisor image for installation on the physical machine. It should not be confused with the /LiveOS/squashfs.img file, which is used by the Anaconda inst.stage2 option. 4.1.3.2.2. Configuring the PXE Server and the Boot Loader Configure the PXE server. See Preparing for a Network Installation in the Red Hat Enterprise Linux Installation Guide . Copy the RHVH boot images to the /tftpboot directory: Create a rhvh label specifying the RHVH boot images in the boot loader configuration: RHVH Boot Loader Configuration Example for Red Hat Satellite If you are using information from Red Hat Satellite to provision the host, you must create a global or host group level parameter called rhvh_image and populate it with the directory URL where the ISO is mounted or extracted: Make the content of the RHVH ISO locally available and export it to the network, for example, using an HTTPD server: 4.1.3.2.3. Creating and Running a Kickstart File Create a Kickstart file and make it available over the network. See Kickstart Installations in the Red Hat Enterprise Linux Installation Guide . Ensure that the Kickstart file meets the following RHV-specific requirements: The %packages section is not required for RHVH. Instead, use the liveimg option and specify the redhat-virtualization-host- version_number _version.squashfs.img file from the RHVH ISO image: Autopartitioning is highly recommended: Note Thin provisioning must be used with autopartitioning. The --no-home option does not work in RHVH because /home is a required directory. 
If your installation requires manual partitioning, see Section 4.1.3.1, "Custom Partitioning" for a list of limitations that apply to partitions and an example of manual partitioning in a Kickstart file. A %post section that calls the nodectl init command is required: Kickstart Example for Deploying RHVH on Its Own This Kickstart example shows you how to deploy RHVH. You can include additional commands and options as required. Kickstart Example for Deploying RHVH with Registration and Network Configuration from Satellite This Kickstart example uses information from Red Hat Satellite to configure the host network and register the host to the Satellite server. You must create a global or host group level parameter called rhvh_image and populate it with the directory URL to the squashfs.img file. ntp_server1 is also a global or host group level variable. Add the Kickstart file location to the boot loader configuration file on the PXE server: Install RHVH following the instructions in Booting from the Network Using PXE in the Red Hat Enterprise Linux Installation Guide . 4.2. Red Hat Enterprise Linux hosts 4.2.1. Installing Red Hat Enterprise Linux hosts A Red Hat Enterprise Linux host is based on a standard basic installation of Red Hat Enterprise Linux 7 on a physical server, with the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions attached. For detailed installation instructions, see Performing a standard RHEL installation . The host must meet the minimum host requirements . Important Virtualization must be enabled in your host's BIOS settings. For information on changing your host's BIOS settings, refer to your host's hardware documentation. Important Third-party watchdogs should not be installed on Red Hat Enterprise Linux hosts, as they can interfere with the watchdog daemon provided by VDSM. 4.2.2. Enabling the Red Hat Enterprise Linux host Repositories To use a Red Hat Enterprise Linux machine as a host, you must register the system with the Content Delivery Network, attach the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions, and enable the host repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and record the pool IDs: Use the pool IDs to attach the subscriptions to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: For Red Hat Enterprise Linux 7 hosts, little endian, on IBM POWER8 hardware: For Red Hat Enterprise Linux 7 hosts, little endian, on IBM POWER9 hardware: Ensure that all packages currently installed are up to date: Reboot the machine. 4.2.3. Installing Cockpit on Red Hat Enterprise Linux hosts You can install Cockpit for monitoring the host's resources and performing administrative tasks. Procedure Install the dashboard packages: Enable and start the cockpit.socket service: Check if Cockpit is an active service in the firewall: You should see cockpit listed. If it is not, enter the following with root permissions to add cockpit as a service to your firewall: The --permanent option keeps the cockpit service active after rebooting. You can log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . 4.3. 
Recommended Practices for Configuring Host Networks If your network environment is complex, you may need to configure a host network manually before adding the host to the Red Hat Virtualization Manager. Red Hat recommends the following practices for configuring a host network: Configure the network with Cockpit. Alternatively, you can use nmtui or nmcli . If a network is not required for a self-hosted engine deployment or for adding a host to the Manager, configure the network in the Administration Portal after adding the host to the Manager. See Creating a New Logical Network in a Data Center or Cluster . Use the following naming conventions: VLAN devices: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD VLAN interfaces: physical_device . VLAN_ID (for example, eth0.23 , eth1.128 , enp3s0.50 ) Bond interfaces: bond number (for example, bond0 , bond1 ) VLANs on bond interfaces: bond number . VLAN_ID (for example, bond0.50 , bond1.128 ) Use network bonding . Network teaming is not supported in Red Hat Virtualization and will cause errors if the host is used to deploy a self-hosted engine or added to the Manager. Use recommended bonding modes: If the ovirtmgmt network is not used by virtual machines, the network may use any supported bonding mode. If the ovirtmgmt network is used by virtual machines, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to? . Red Hat Virtualization's default bonding mode is (Mode 4) Dynamic Link Aggregation . If your switch does not support Link Aggregation Control Protocol (LACP), use (Mode 1) Active-Backup . See Bonding Modes for details. Configure a VLAN on a physical NIC as in the following example (although nmcli is used, you can use any tool): Configure a VLAN on a bond as in the following example (although nmcli is used, you can use any tool): Do not disable firewalld . Customize the firewall rules in the Administration Portal after adding the host to the Manager. See Configuring Host Firewall Rules . Important When creating a management bridge that uses a static IPv6 address, disable network manager control in its interface configuration (ifcfg) file before adding a host. See https://access.redhat.com/solutions/3981311 for more information. 4.4. Adding Standard Hosts to the Red Hat Virtualization Manager Adding a host to your Red Hat Virtualization environment can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and creation of a bridge. Important When creating a management bridge that uses a static IPv6 address, disable network manager control in its interface configuration (ifcfg) file before adding a host. See https://access.redhat.com/solutions/3981311 for more information. Procedure From the Administration Portal, click Compute Hosts . Click New . Use the drop-down list to select the Data Center and Host Cluster for the new host. Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field. Select an authentication method to use for the Manager to access the host. Enter the root user's password to use password authentication. Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication. Optionally, click the Advanced Parameters button to change the following advanced host settings: Disable automatic firewall configuration. Add a host SSH fingerprint to increase security. 
You can add it manually, or fetch it automatically. Optionally configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide . Click OK . The new host displays in the list of hosts with a status of Installing , and you can view the progress of the installation in the Events section of the Notification Drawer ( ). After a brief delay the host status changes to Up .
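For the static IPv6 management bridge case called out in the Important admonitions above, the following is a minimal sketch of an ifcfg fragment with NetworkManager control disabled. The interface name, IPv6 address, and file path are placeholder assumptions; confirm the exact directives for your environment against the linked solution before adding the host.

# Sketch only: interface name, address, and path are illustrative assumptions.
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=yes
IPV6ADDR=2001:db8::10/64
NM_CONTROLLED=no
EOF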
[ "subscription-manager repos --enable=rhel-7-server-rhvh-4-rpms", "rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm # subscription-manager register --org=\" org_id \" # subscription-manager list --available # subscription-manager attach --pool= pool_id # subscription-manager repos --disable='*' --enable=rhel-7-server-rhvh-4-rpms", "clearpart --all part /boot --fstype xfs --size=1000 --ondisk=sda part pv.01 --size=42000 --grow volgroup HostVG pv.01 --reserved-percent=20 logvol swap --vgname=HostVG --name=swap --fstype=swap --recommended logvol none --vgname=HostVG --name=HostPool --thinpool --size=40000 --grow logvol / --vgname=HostVG --name=root --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=6000 --grow logvol /var --vgname=HostVG --name=var --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=15000 logvol /var/crash --vgname=HostVG --name=var_crash --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=10000 logvol /var/log --vgname=HostVG --name=var_log --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=8000 logvol /var/log/audit --vgname=HostVG --name=var_audit --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=2000 logvol /home --vgname=HostVG --name=home --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=1000 logvol /tmp --vgname=HostVG --name=tmp --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=1000", "mount -o loop /path/to/RHVH-ISO /mnt/rhvh cp /mnt/rhvh/Packages/redhat-virtualization-host-image-update* /tmp cd /tmp rpm2cpio redhat-virtualization-host-image-update* | cpio -idmv", "cp mnt/rhvh/images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/", "LABEL rhvh MENU LABEL Install Red Hat Virtualization Host KERNEL /var/lib/tftpboot/pxelinux/vmlinuz APPEND initrd=/var/lib/tftpboot/pxelinux/initrd.img inst.stage2= URL/to/RHVH-ISO", "<%# kind: PXELinux name: RHVH PXELinux %> Created for booting new hosts # DEFAULT rhvh LABEL rhvh KERNEL <%= @kernel %> APPEND initrd=<%= @initrd %> inst.ks=<%= foreman_url(\"provision\") %> inst.stage2=<%= @host.params[\"rhvh_image\"] %> intel_iommu=on console=tty0 console=ttyS1,115200n8 ssh_pwauth=1 local_boot_trigger=<%= foreman_url(\"built\") %> IPAPPEND 2", "cp -a /mnt/rhvh/ /var/www/html/rhvh-install curl URL/to/RHVH-ISO /rhvh-install", "liveimg --url= example.com /tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host- version_number _version.squashfs.img", "autopart --type=thinp", "%post nodectl init %end", "liveimg --url=http:// FQDN /tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host- version_number _version.squashfs.img clearpart --all autopart --type=thinp rootpw --plaintext ovirt timezone --utc America/Phoenix zerombr text reboot %post --erroronfail nodectl init %end", "<%# kind: provision name: RHVH Kickstart default oses: - RHVH %> install liveimg --url=<%= @host.params['rhvh_image'] %>squashfs.img network --bootproto static --ip=<%= @host.ip %> --netmask=<%= @host.subnet.mask %> --gateway=<%= @host.subnet.gateway %> --nameserver=<%= @host.subnet.dns_primary %> --hostname <%= @host.name %> zerombr clearpart --all autopart --type=thinp rootpw --iscrypted <%= root_pass %> installation answers lang en_US.UTF-8 timezone <%= @host.params['time-zone'] || 'UTC' %> keyboard us firewall --service=ssh services --enabled=sshd text reboot %post 
--log=/root/ks.post.log --erroronfail nodectl init <%= snippet 'subscription_manager_registration' %> <%= snippet 'kickstart_networking_setup' %> /usr/sbin/ntpdate -sub <%= @host.params['ntp_server1'] || '0.fedora.pool.ntp.org' %> /usr/sbin/hwclock --systohc /usr/bin/curl <%= foreman_url('built') %> sync systemctl reboot %end", "APPEND initrd=/var/tftpboot/pxelinux/initrd.img inst.stage2= URL/to/RHVH-ISO inst.ks= URL/to/RHVH-ks .cfg", "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= poolid", "subscription-manager list --consumed", "yum repolist", "subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-rhv-4-mgmt-agent-rpms --enable=rhel-7-server-ansible-2.9-rpms", "subscription-manager repos --disable='*' --enable=rhel-7-server-rhv-4-mgmt-agent-for-power-le-rpms --enable=rhel-7-for-power-le-rpms", "subscription-manager repos --disable='*' --enable=rhel-7-server-rhv-4-mgmt-agent-for-power-9-rpms --enable=rhel-7-for-power-9-rpms", "yum update", "yum install cockpit-ovirt-dashboard", "systemctl enable cockpit.socket systemctl start cockpit.socket", "firewall-cmd --list-services", "firewall-cmd --permanent --add-service=cockpit", "nmcli connection add type vlan con-name vlan50 ifname eth0.50 dev eth0 id 50 nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123.0.1/24 +ipv4.gateway 123.123.0.254", "nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=active-backup,miimon=100\" ipv4.method disabled ipv6.method ignore nmcli connection add type ethernet con-name eth0 ifname eth0 master bond0 slave-type bond nmcli connection add type ethernet con-name eth1 ifname eth1 master bond0 slave-type bond nmcli connection add type vlan con-name vlan50 ifname bond0.50 dev bond0 id 50 nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123.0.1/24 +ipv4.gateway 123.123.0.254" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/Installing_Hosts_for_RHV_SM_remoteDB_deploy
9.2. Starting and Stopping NFS
9.2. Starting and Stopping NFS To run an NFS server, the portmap service must be running. To verify that portmap is active, type the following command as root: If the portmap service is running, then the nfs service can be started. To start an NFS server, as root, type: To stop the server, as root, type: The restart option is a shorthand way of stopping and then starting NFS. This is the most efficient way to make configuration changes take effect after editing the configuration file for NFS. To restart the server, as root, type: The condrestart ( conditional restart ) option only starts nfs if it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running. To conditionally restart the server, as root, type: To reload the NFS server configuration file without restarting the service, as root, type: By default, the nfs service does not start automatically at boot time. To configure the nfs service to start automatically at boot time, use an initscript utility, such as /sbin/chkconfig , /usr/sbin/ntsysv , or the Services Configuration Tool program. Refer to the chapter titled Controlling Access to Services in the System Administrators Guide for more information regarding these tools.
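For example, with chkconfig the nfs service (and its portmap dependency) can be enabled for the usual multi-user runlevels as shown below; the runlevels chosen here are a typical assumption, so adjust them to your environment.

# Enable portmap and nfs at boot, then verify the runlevel settings.
/sbin/chkconfig portmap on
/sbin/chkconfig --level 345 nfs on
/sbin/chkconfig --list nfs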
[ "/sbin/service portmap status", "/sbin/service nfs start", "/sbin/service nfs stop", "/sbin/service nfs restart", "/sbin/service nfs condrestart", "/sbin/service nfs reload" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-nfs-start
Chapter 23. Uninstalling an IdM replica
Chapter 23. Uninstalling an IdM replica As an IdM administrator, you can remove an Identity Management (IdM) replica from the topology. For more information, see Uninstalling an IdM server .
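As a brief, hedged sketch (the host name is a placeholder and the authoritative steps are in the linked section), removal typically involves deleting the replica from the topology on another IdM server and then uninstalling the server configuration on the replica itself:

# On another IdM server: remove the replica from the replication topology.
ipa server-del replica.idm.example.com

# On the decommissioned replica: uninstall the IdM server configuration.
ipa-server-install --uninstall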
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/uninstalling-an-idm-replica_installing-identity-management